Faith and Science

It’s a bit odd how religious persons say things like “Science is for understanding the natural world, faith is for understanding the supernatural or spiritual world” as if these two methods of learning what is real are equally valid. They clearly are not.

“Science” basically means “testing.” You formulate a theory, devise a way to test it, and judge the results to see what’s true about the natural world. Now, it is true that some theories can’t be tested or haven’t been tested and are inappropriately presented as fact or as likely fact. It’s also true that sometimes science is wrong — tests are flawed, good tests yield inaccurate results due to unexpected phenomena, or results are misjudged or misinterpreted. Yet over time, science grows more accurate. Tests are repeated over years, decades, and centuries, giving us further confidence in findings. New individuals administer such tests, weeding out biases. New tests are designed, looking at long-studied phenomena from different angles and in new ways. Through these two strengths, the ability to actually test ideas and the improvement of understanding over time, science helps us know what’s true.

Faith has neither of these things. First, it might be noted that when you hear the statement in the first paragraph, the speaker is typically talking about one faith, his or her own. Christian faith helps you understand the spiritual world, the true spiritual world. Hinduism, Scientology, Islam, and Buddhism won’t help you know the supernatural world; those are of course all false religions. In any case, we’ll assume a more open-minded stance, because some do believe that there is truth in all faiths, that all roads lead to Rome.

Where science is an ever-growing body of knowledge based on testing over time and into the modern age, faiths typically present more or less fixed bodies of ideas based on writings from comparatively primitive ancient cultures — desert tribes from the Middle East, for example, in the case of Islam, Christianity, and Judaism. Such writings describe higher powers, afterlives, the meaning of life, and so on. Ideas, supposed knowledge, of the supernatural world. Unfortunately, there is no way to test to see if any of these notions are true. The ideas could easily be man-made fictions. You may believe your Hindu or Christian or Islamic faith helps you know and understand the spiritual world, but, to put it bluntly, that spiritual world may not exist. There is no test to run to find out. (And no, Jesus being resurrected is not a “test,” nor are miracles, answered prayers, or feeling God speaking to you. The obvious problem with this poor counterargument is that these things cannot be tested for validity either. They could easily be human fictions and imaginings as well, as explained in detail elsewhere. Think about it. If someone doubts photosynthesis, you can teach him how to test to see if photosynthesis is real; there is no test you can use to show him a god or goddess is actually speaking to you, that it isn’t just in your head. “Try faith yourself, you’ll see the proof, believing is seeing” is the best a believer can say, possibly just drawing the fellow into human fictions and imaginings as well — there is no way to test to know otherwise.) This is in stark contrast to science; we can have confidence that the natural world exists, and we are able to put ideas to the test to actually see what’s true or false, or most likely to be so.

Linked with the lack of “knowledge” verification or falsifiability, of course, is simply the fact that ideas about the spiritual world cannot grow more accurate over time. Ancient scriptures aren’t typically added to (in modern times, at least). Texts are of course reinterpreted, gods are reimagined, ways the faithful think they should live change. For a long time, for most American Christians, God was fine with the enslavement of blacks, and the Bible was used to justify it, without difficulty given what’s in it. Today things are quite different. Religions may change as societies do, and the faithful may feel they gain more knowledge by studying the scriptures more deeply, but no one will ever discover that we only spend 1,000 years in heaven, not eternity. Christianity won’t change when someone announces, after much research, that God has a couple of wives up in heaven. Newly discovered ancient writings won’t become holy scripture. As stated, it’s all a fairly fixed set of ideas about the supernatural realm. The immutable nature of religious “knowledge” is of course celebrated by the faithful — everything we need to know about the spiritual world was written thousands of years ago, we don’t need more than what God gave us or any improved accuracy, everything’s accurate. But the supposed knowledge and its assumed accuracy are untestable and could easily be false, and there’s no process of gaining more knowledge or improving accuracy over time to really hammer out whether these things are false or true. Imagine if science were like this — no way to know if germ theory is correct, no centuries spent gaining more information and developing better and better medicines. In its ability to test ideas and improve understanding as time goes on, science, unlike faith, is an actual method of learning what is real. (All this should not be surprising, given that “faith” is often defined, by believers and nonbelievers alike, as “belief without proof.”)

In sum, science is a useful tool for gaining objective knowledge about the natural world. Faith is simply hoped to be a tool for gaining what could easily be imagined knowledge about an imagined spiritual realm. These things are hardly the same.

Merit Pay

“Too many supporters of my party have resisted the idea of rewarding excellence in teaching with extra pay, even though we know it can make a difference in the classroom,” President Barack Obama said in March 2009. The statement foreshadowed the appearance of teacher merit pay in Obama’s “Race to the Top” education initiative, which grants federal funds to top-performing states. Performance, of course, is based on standardized testing, and in the flawed Race to the Top, so are teacher salaries. Teacher pay could rise and fall with student test scores.

Rhetoric concerning higher teacher salaries is a good thing. Proponents of merit pay say meager teacher salaries are an injustice, and that such a pay system is needed to alleviate the nation’s teacher shortage. However, is linking pay to test scores the best way to “reward excellence”? Do we know, without question, it “can make a difference in the classroom”? The answers, respectively, are no and no. Merit pay is an inefficient and potentially counterproductive way to improve education in American public schools. It fails to motivate teachers to better themselves or remain in the profession, it encourages unhealthy teacher competition and dishonest conduct, and it poorly serves certain groups, such as special education students.

Educator Alfie Kohn, author of the brilliant Punished by Rewards, wrote an article in 2003 entitled “The Folly of Merit Pay.” He writes, “No controlled scientific study has ever found a long-term enhancement of the quality of work as a result of any incentive system.” Merit pay simply does not work. It has been implemented here and there for decades, but is always abandoned. A good teacher is intrinsically motivated: he teaches because he enjoys it. She teaches because it betters society. He teaches because it is personally fulfilling. Advocates of merit pay ignore such motivation, but Kohn declares, “Researchers have demonstrated repeatedly that the use of such extrinsic inducements often reduces intrinsic motivation. The more that people are rewarded, the more they tend to lose interest in whatever they had to do to get the reward.” Extra cash sounds great, but it is destructive to the inner passions of quality teachers.

Teachers generally rank salaries below excessive standardization and unfavorable accountability on their lists of grievances (Kohn, 2003). Educators leave the profession because they are being choked by federal standards and control, and politicians believe linking pay to such problems is a viable solution? Professionals also generally oppose merit pay, disliking its competitive nature. Professor and historian Diane Ravitch writes that an incentive “gets everyone thinking about what is good for himself or herself and leads to forgetting about the goals of the organization. It incentivizes short-term thinking and discourages long-term thinking” (Strauss, 2011). Teaching students should not be a game, with big prizes for the winners.

Further, at issue is the distorted view of students that performance pay perpetuates. Bill Raabe of the National Education Association says, “We all must be wary of any system that creates a climate where students are viewed as part of the pay equation, rather than young people who deserve a high quality education” (Rosales, 2009). In the current environment of high-stakes tests (which do not really evaluate the quality of teaching at all), merit pay is just another way to encourage educators to “teach to the test,” or worse, to cheat. The nation has already seen public school teachers under so much pressure that they resort to modifying their students’ scores in order to save their salaries or their jobs.

It is clear that merit pay does not serve young learners, but this is especially true in the case of special education students. The Individuals with Disabilities Education Act (IDEA) requires states that accept federal funding to provide individual educational services to all children with disabilities. While the preeminence of “inclusion” of SPED children in regular classrooms is appropriate, the students are also included in the accountability statutes of No Child Left Behind. SPED students are required to meet “adequate yearly progress” (AYP) standards based on high-stakes tests in reading, math, and science, like other students. While some youths with “significant cognitive disabilities” (undefined by federal law) can take alternate assessments, there is a cap on how many students can do so (Yell, Katsiyannas, & Shiner, 2006, pp. 35-36). Most special education students must be included in standardized tests.

The abilities and the needs of special education students are too diverse to be put in the box that is a standardized test. SPED students are essentially being asked to perform at their chronological grade level, and for some students that is simply not possible. How does that fit with a Free Appropriate Public Education, the education program the IDEA guarantees, which focuses on “individualized” plans for the “unique needs” of the student? It does not. Progress is individual, not standardized. Further, linking teacher pay to this unreasonable accountability only makes matters worse. Performance pay will likely punish special education instructors. Each year, SPED students may make steady progress (be it academic, cognitive, social, emotional, etc.), but teachers will see their salaries stagnate or be slashed because such gains do not meet federal or state benchmarks. Such an uphill battle will discourage men and women from entering the special education field, meaning fewer quality instructors to serve students with disabilities.

When a school defines the quality of teaching by how well students perform on one test once a year, everyone loses. When pay is in the equation, it’s worse. Obama deserves credit for beginning to phase out NCLB, but merit pay is no way to make public schools more effective. If politicians want to pay good teachers better and weed out poor teachers, their efforts would be better directed at raising salaries across the board and reforming tenure.

 

References

Kohn, A. (2003). The Folly of Merit Pay. Retrieved February 19, 2012 from https://www.alfiekohn.org/article/folly-merit-pay/.

Rosales, J. (2009). Pay Based on Test Scores? Retrieved February 19, 2012 from http://www.nea.org/home/36780.html.

Strauss, V. (2011). Ravitch: Why Merit Pay for Teachers Doesn’t Work. Retrieved February 19, 2012 from http://www.washingtonpost.com/blogs/answer-sheet/post/ravitch-why-merit-pay-for-teachers-doesnt-work/2011/03/29/AFn5w9yB_blog.html.

Yell, M. L., Katsiyannas, A., & Shiner, J. G. (2006). The No Child Left Behind Act, Adequate Yearly Progress, and Students with Disabilities. Teaching Exceptional Children, 38(4), 32-39.

Should We Talk About How Trauma Affects Police Behavior?

In the discussion of police brutality, generally speaking, one camp calls for sweeping, radical, even terminal changes to policing in order to end beatings and killings of civilians, while the other camp stresses that police officers have extremely dangerous, high-stress jobs and, while mistakes do occur at times, certain changes will only make things more dangerous for cops and for the public at large. There’s some talking past each other here, but perhaps one of the more significant things that is missed or simply isn’t much discussed is how these ideas are connected: of course people who go through trauma might be more likely to snap and murder someone for no reason at all.

A couple of clarifications here. First, many on the Left will have little sympathy for the police no matter how traumatized someone might be by seeing dead bodies, blood and brains splattered about, raped children, and beaten wives, or by being shot at or otherwise attacked. After all, individuals who join police forces do so by choice, participate (whether aware of it or not) in an oppressive system that ensures the constant harassment and mistreatment of people of color, and so on. For some of my comrades, talking about how officer trauma might contribute to police brutality would be a major faux pas, offering excuses or a sympathetic ear to the other side in a rather uncomfortable way. Yet if police trauma does exist, and if it does contribute in some way to police brutality, it makes sense to think about it, discuss it, and figure out what to do about it. Sympathy isn’t required. Second, it should be clarified that acknowledging trauma as a possible cause of police violence doesn’t mean other causes, such as racism, machismo and power, poor training and use-of-force procedures, age, a dearth of education, and a complete lack of punishment, don’t exist or don’t have devastating effects on society. (Another one is the human tendency to mistakenly see things you’re watching for. If you’re speeding and watching for cops, every other car begins to look like a cop. If you’re watching for guns or threatening movements from someone you’ve pulled over…) Finally, a discussion like this one isn’t meant to distract or deflect from the terrible trauma that victims of police violence live with for the rest of their lives. If there is a way we can stop one trauma from leading to another, we should pursue it.

We know officers’ experiences contribute to PTSD and other serious psychological and physiological problems. “Research has indicated that by the time police officers put on their uniform and begin general patrol, their stress-related cardiovascular reactivity is already elevated,” and this is followed by, generally speaking, “at least 900 potentially traumatic incidents over the course of their career.” Some officers will have bigger problems if they came from the military and were traumatized in the bloodbath of war. Extreme stress and PTSD can lead to aggression, exaggerated startle responses, and recklessness; in police officers they have been shown to lead to less control in decision-making “due to heightened arousal to threats, inability to screen out interfering information, or the inability to keep attention.” Academics in The Huffington Post and Psychology Today have connected occupational trauma to brutality, as have former officers on fervent pro-cop sites (for example, could reforms addressing trauma “reduce the number of inappropriate decisions some officers make? If we are concerned about the dysfunctional actions of some cops, is it possible that some of the fault lies with the rest of us who ignore the trauma that officers go through?”). More research would be valuable, but it’s a safe bet police trauma contributes to police brutality. (A connection also exists, by the way, between officer stress and violence against their romantic partners.)

This writer doesn’t have too much more to say on the matter — it simply seems important to connect the two ideas mentioned in the first paragraph, especially for those of us who care about justice and about encouraging others of very different views to care as well. “True, the police have dangerous jobs, but do you see how the extreme stress that most officers experience might make police brutality a serious problem? Perhaps there are other factors, too. Perhaps there are societal changes we can make that would address both officer PTSD or safety and police brutality against ordinary people.” It could be a way to build a bridge or find a sliver of common ground.

Proposals for how to actually address such trauma range widely, of course, from the reactionary, though valid, sentiments from police departments about the need for more mental healthcare to the radical (“Radical simply means grasping things at the root,” Angela Davis) idea that we “Abolish the Police.” After all, no police means no police trauma. And no police brutality. Convincing people that trauma contributes to brutality seems far easier than reaching agreement on how to solve these things.

This is a bit of an aside, but I’m still determining where I personally fall when it comes to what specifically to do about the police. I firmly believe that broad changes are needed concerning: who responds to certain nonviolent calls (it need not be quasi-soldiers, at least not as first responders); the allocation of resources, with reform devoting huge sums to addressing the root causes of crime, namely poverty, instead of to policing and other initiatives that only address the symptoms; the qualifications, education, training, evaluation, use-of-force procedures, and weaponry of those who respond to violent calls; what an individual can be pulled over or confronted or arrested for, meaning serious changes to law and policy; who investigates police misconduct (not the departments) and how abusive officers are punished, beginning with termination and blacklisting and ending with prison sentences; and much more. These things, perhaps combined with better mental healthcare and therapy, reduced hours, increased leave, shorter careers, and so forth for those facing traumatic situations, can reduce both the trauma and the violence. (Although I don’t recall the specific incident, a news report a few years ago described how an officer who killed an unarmed black man in the evening had witnessed a murder or suicide that morning; taking him off duty seems like it would have been an obvious thing to do.) But I do suspect that modern societies will always have some traumatic situations and need individuals to enter them, whether it’s the police or something resembling the police. Perhaps more personal study is needed. I recently asked my acquaintances:

I haven’t studied #PoliceAbolition or #PrisonAbolition theory with any depth. Currently, it seems likely to me that future human societies — more decent ones, with prosperity for all, unarmed response teams, restorative justice, and more — would still require some persons or groups authorized to use force against others in circumstances where de-escalation fails, and require some persons to be separated against their will from the general population, for the sake of its safety, during rehabilitation. These scenarios seem likely to become far rarer when we radically transform social conditions and societal policies, but not to disappear completely. Can anyone recommend abolitionist literature that either 1) specifically makes the case that such circumstances would never occur and thus such force requirements are void, or 2) argues such circumstances would indeed occur but specifically lays out how such requirements could be handled (force could be used) by alternative people or institutions without, over time, devolving back into something close to today’s police and prisons?

My mind may change as I go through some of the recommended readings, but as it stands I wonder if the number of individuals authorized to use force, their trauma, and their brutality can only be greatly reduced, rather than eradicated completely. While a better human society is possible and will be won, a perfect one may be out of reach.

On Student Teaching

I am now two weeks from concluding my first student teaching placement (Visitation School), and my classroom management skills are still being refined. After observing for five days, slowly beginning my integration into a leadership role, I took over completely from my cooperating teacher. While excited to start, initially I had a couple days where I found one 6th grade class (my homeroom) difficult to control. There were times when other classes stepped out of line, naturally, but the consistency with which my homeroom became noisy and rowdy was discouraging.

“They’re your homeroom,” my cooperating teacher reminded me. “They feel more at home in your classroom, and will try to get away with more.”

There were a few instances where students took someone else’s property, or wrote notes to classmates, but the side chatter was the major offense. I would be attempting to teach and each table would have at least someone making conversation, which obviously distracts both those who wish to pay attention and those who don’t care. I would ask them to refocus and quiet themselves, which would work for but a few precious moments. There was one day I remember I felt very much as if the students were controlling me, rather than the other way around, and I made the mistake of hesitating when I could have doled out consequences. I spoke to my cooperating teacher about it during our feedback session, and she emphasized to me that I needed to prove to the students my willingness to enforce the policies, that I have the same authority as any other teacher in the building.

At Visitation, the classroom management system revolves around “tallies,” one of which equals three laps at recess before one can begin play. My homeroom deserved a tally the day I hesitated. I needed to come up with a concrete, consistent way of disciplining disruptive behavior. So I went home and developed a simple system I had thought about long ago: behavior management based on soccer. I cut out and laminated a yellow card and a red card. The next day, I sat each class down in the hall before they entered the room and told them the yellow card would be shown as a warning and the red card would mean tallies. These could be given individually or as a class, and, as in soccer, a red card could be given without a yellow card.

The students were surprisingly excited about this. Perhaps turning punishment into a game intrigued them; regardless, it made me wonder whether this would work. But it seemed that discussing the expectations I had of them, and the enforcement of such expectations, helped a good deal. Further, I was able to overcome my hesitation that day and dole out consequences for inappropriate behavior. I gave my homeroom a yellow card and then a red card, and they walked laps the next day.

My cooperating teacher noted the system would be effective because it was visual for the students. I also found that it allowed me to easily maintain emotional control; instead of raising my voice, I simply raised a card in my hand, and the class refocused. Its visibility allowed me to say nothing at all.

While different in purpose and practice, this system draws important elements from the Do It Again system educator Doug Lemov describes, including no administrative follow-up and logical consequences, but most significantly group accountability (Lemov, 2010, p. 192). It holds an entire class responsible for individual actions, and “builds incentives for individuals to behave positively since it makes them accountable to their peers as well as their teacher” (p. 192). Indeed, my classes almost immediately started regulating themselves, keeping themselves accountable for following my expectations (telling each other to be quiet and settle down, for instance, before I had to say anything).

Lemov would perhaps frown upon the yellow card, and point to the behavioral management technique called No Warning (p. 199). He suggests teachers:

  • Act early. Try to see the favor you are doing kids in catching off-task behavior early and using a minor intervention of consequence to prevent a major consequence later.
  • Act reliably. Be predictably consistent, sufficient to take the variable of how you will react out of the equation and focus students on the action that precipitated your response.
  • Act proportionately. Start small when the misbehavior is small; don’t go nuclear unless the situation is nuclear.

I have tried to follow these guidelines to the best of my ability, but Lemov would say the warning is not taking action, only telling students “a certain amount of disobedience will not only be tolerated but is expected” (p. 200). He would say students will get away with what they can until they are warned, and will only refocus and cease their side conversations afterwards. Lemov makes a valid point, and I have indeed seen this happen to a degree. As a whole, however, the system has been effective, and most of my classes do not at all take advantage of their warning. Knowing they can receive a consequence without a warning has helped, perhaps. After a month of using the cards, I have given my homeroom a red card three times. In my other five classes combined during the same period, there have been two yellows and only one red. I have issued a few individual yellows, but no reds.

Perhaps it is counterproductive to have a warning, but I personally feel that since the primary focus of the system is on group accountability, I need to give talkative students a chance to correct their behavior before consequences are doled out for the entire class. Sometimes a reminder is necessary, the reminder that their actions affect their classmates and that they need to refocus. I do not want to punish the students who are not being disruptive along with those who are without issuing some sort of warning that they are on thin ice.

 

 

During my two student teaching placements this semester, I greatly enjoyed getting to know my students. It was one of the more rewarding aspects of teaching. Introducing myself and my interests in detail on the first day I arrived proved to be an excellent start; I told them I liked history, soccer, drawing, reading, etc. Building relationships was easy, as students seemed fascinated by me and had an endless array of questions about who I was and where I came from.

Art is something I used to connect with students. At both my schools, the first students I got to know were the budding artists, as I was able to observe them sketching in the corners of their notebooks and later ask to see their work. There was one girl at my first placement who drew a new breed of horse on the homeroom whiteboard each morning; a boy at my second placement was drawing incredible fantasy figures every spare second he had. I was the same way when I was their age, so naturally I struck up conversations about their pictures. I tried to take advantage of such an interest by asking students to draw posters of Hindu gods or sketch images next to vocabulary words to aid recall. Not everyone likes to draw, but I like to encourage the skill and at least provide them an opportunity to try. Beyond this, I would use what novels students had with them to learn about their fascinations and engage them, and many were excited I knew The Hunger Games, The Hobbit, and The Lord of the Rings. We would discuss our favorite characters and compare such fiction to recent films.

For all my students, I strove to engage them each day through positive interactions, including greeting them by name at the door, drawing with and for them, laughing and joking with them, maintaining a high level of interest in what they were telling me (even if they rambled aimlessly, as they had the tendency to do), and even twice playing soccer with them at recess. The Catholic community of my first placement also provided the chance to worship and pray with my kids, an experience I will not forget.

One of my successes was remaining emotionally cool, projecting a sense of calm, confidence, and control. Marzano (2007) writes, “It is important to keep in mind that emotional objectivity does not imply being impersonal with or cool towards students. Rather, it involves keeping a type of emotional distance from the ups and downs of classroom life and not taking students’ outbursts or even students’ direct acts of disobedience personally” (p. 152). Even when I was feeling control slipping away from me, I did my best to be calm, keep my voice low, and correct students in a respectful manner that reminded them they had expectations they needed to meet. Lemov (2010) agrees, writing, “An emotionally constant teacher earns students’ trust in part by having them know he is always under control. Most of all, he knows success is in the long run about a student’s consistent relationship with productive behaviors” (p. 219). Building positive relationships required mutual respect and trust, and emotional constancy was key.

Another technique I emphasized was the demonstration of my passion for social studies, to prove to them the gravity of my personal investment in their success. One lesson from my first placement covered the persecution of Anne Hutchinson in Puritan America; we connected it to modern sexism, such as the wage discrimination women face. Another lesson was about racism, how it originated as a justification for African slavery and how the election of Barack Obama brought forth a surge of openly racist sentiment from part of the U.S. citizenry. I told them repeatedly that we studied history to become dissenters and activists, people who would rise up and destroy sexism and racism. I told them I had a personal stake in their understanding of such material, a personal stake in their future, because they were the ones responsible for changing our society in positive ways. Being the next generation, they would soon be the ones responsible for ending social injustices.

Marzano (2007) says, “Arguably the quality of the relationships teachers have with students is the keystone of effective management and perhaps even the entirety of teaching” (p. 149). In my observation experiences, I saw burned-out and bitter teachers who focused their efforts on authoritarian control and left positive relationship-building on the sidelines. The lack of strong relationships usually meant more chaotic classrooms and more disruptive behavior. As my career begins, I plan to make my stake in student success and my compassion for each person obvious, and to stay in the habit.

 

References:

Lemov, D. (2010). Teach like a champion: 49 techniques that put students on the path to college. San Francisco, CA: Jossey-Bass.

Marzano, R. (2007). The art and science of teaching. Alexandria, VA: Association for Supervision and Curriculum Development.

Cop Car Explodes, Police Pepper Spray Passenger in Moving Vehicle During Plaza Protests

The events of 10:00 p.m. to midnight on May 30, 2020, on Kansas City’s Plaza — protests and unrest following the police killing of George Floyd in Minneapolis — included the following.

The police, in riot gear and gas masks, blockaded the intersections along Ward Parkway, refusing to allow newcomers, additional protesters, to move deeper into the Plaza, angering a small but growing crowd. “Let us through!” Journalists likewise were not allowed to enter. From the vantage point at the blockade, it was clear a gathering of protesters was locked in a standoff with police up around 47th and Wyandotte Street. The sound of helicopters, sirens, police radios and bullhorns, and protesters’ shouts clashed in the air. Sharp pops. The protesters inside fled west as one as police deployed tear gas. Much concern was voiced from the crowd at the barrier.

After a time, an explosion rocked the Plaza. “Shit!” exclaimed members of the crowd, among other exclamations — and even the police could not help but turn their heads away from the masses and look. It appeared a parked car, near where the standoff occurred, had been firebombed. The press later indicated it was a police car. “It’s going down, boy,” someone said. Flames and smoke rose high, and shortly thereafter firefighters arrived. Meanwhile, a man, tall and skinny, yelled at the police at the barrier, saying he was a veteran who fought for the rights the police trample upon — “You’re a fucking disgrace.” Two women likewise unleashed their anger.

Walking west along Ward Parkway, in an attempt to follow the group of runners from afar, revealed a bridal shop window smashed. Some jokes from observers about black people wanting to get married tonight — though there did not appear to be anything looted. A young woman and man huddled together nearby, the woman distraught over the scene. Soon the pair entered the store through the front door, quickly followed by a shouting cop. “She owns the place, man, it’s all right,” the observers said. The pair echoed this, and the cop recommended finding someone to board up the window. Various other storefronts were boarded up, in advance, along the street.

“I’m just trying to get to my fucking car,” a passerby said to an acquaintance, realizing he could not enter the parking garage due to the blockades. In the street, gas canisters, COVID-19 masks, abandoned signs, water bottles, graffiti. Another broken storefront window, more graffiti. A fire department vehicle with a smashed windshield. A black woman thanking a cop for being out tonight doing his job.

Reaching Broadway, where one could finally turn north, revealed a few people under arrest, sitting on the pavement outside the Capital Grille at the feet of the police. They did not seem to be part of the fleeing protesters, and may have been taken out of their cars, which were along the street, doors open. Moving north, one met the protesters, now all scattered and disjointed, many moving south but some further west and some simply hanging out here and there. The faint sting of tear gas infected the eyes. Strangers made sure one was all right.

“H&M!” a man hollered triumphantly, a valuable bundle in his hands, before three cops on bikes appeared from nowhere, sirens blasting. The man and several other looters sprinted south down Broadway, pursued.

The central Plaza secured, the main confrontation point became the blockade where the crowd had witnessed the car explosion, Ward Parkway and Wyandotte. The group grew considerably, to a few hundred, swelled by the protesters who had fled the tear gas a block north. It was young, diverse. The ranks of police were reinforced as well.

Protesters gathered in Ward Parkway, signs held high: “I Can’t Breathe,” “Black Lives Matter.” A few cars zipped around wildly in circles, as if to emphasize the protesters’ control of the street. A white car with four or five people in it pulled up and distributed water, while also providing the tunes. A dance circle formed for a time, while both sides held their ground. Skateboards, scooters shot by. A more festive atmosphere. A chant began — no justice, no peace. But mostly individuals had their say — calls for an end to police killings and abuse.

Eventually the police ordered the protesters to clear the streets and return to the sidewalks or face arrest. The street was full of people, but most were already on the sidewalks. The police seemed to select one individual to make an example of, and surged toward a white man with a sign, arresting him. Their orders ignored, the police pressed forward. Someone threw a water bottle at them. The police shook their gas cans ominously. “Scary ass motherfuckers,” a young woman said. Another woman was arrested. A man hollered, “The police started as slave-catchers! Not much has changed.” “You don’t have to do what your superiors say,” someone called out. Some taunted the black officers, the so-labeled “Uncle Toms.”

The police surged forward, pepper spray raised. A protester threw a brick or rock at them as everyone scrambled in retreat, by foot, scooter, or vehicle. The white car that had delivered water was in trouble, needing to back up toward the police in order to get out of its space and flee. Several officers walked up to the vehicle menacingly. “They’re going, they’re going!” shouted protesters. “Leave them alone!” An officer sprayed into the face of someone in the back seat as the vehicle backed up and lurched forward, the driver clearly panicked.

After pushing their line forward, the police retreated to their original position. The crowd then began moving forward, back to theirs.

The police announced that gas would be used if the crowd did not disperse, which the crowd had no interest in doing. The hiss of gas pierced the night air as cans were thrown, grey smoke billowing and streaking behind them. Pandemonium. Screams and shouts as all turned and ran, except for one brave soul who threw a can back. The tear gas burned, blinded. The police, marching forward, were quickly obscured, swallowed by smoke and distance, as the protesters splintered into three masses and fled east, south, and west.

The tear gas appeared to end the Plaza protest — by midnight the crowd had not re-formed. However, a woman, leaning out the passenger window of a car moving down Ward Parkway, called out, “We’re going to Westport!”

The time is 3:40am on Sunday, May 31, 2020. Three of the four officers involved in George Floyd’s death have yet to be arrested.

Capitalism and Coronavirus

A collection of thoughts on capitalist society during the 2020 COVID-19 outbreak:

On Necessity

The coronavirus makes clear more than ever why America needs socialism.

  • Many people don’t have paid sick leave and can’t go without a paycheck, so they go to work sick with the virus, spreading it. Workers should own their workplaces so they can decide for themselves whether they get paid sick leave.
  • Businesses are closing, leaving workers to rot, with no income but plenty of bills to pay. People forced to go into work have to figure out how to pay for childcare, since schools are closed. Kids are hungry because they rely on school for meals. We need a Universal Basic Income.
  • Without health insurance, lots of people won’t get tested or treated because they can’t afford it. There will be more people infected. There will be many senseless, avoidable deaths. We need universal healthcare, medical care for all people.
  • The bold steps needed to address this crisis won’t be taken, even if the majority of Americans want it to be so, because our representatives serve the interests of wealthy and corporate funders. We need participatory democracy, where the people have decision-making power.

This virus shines a glaring, painful light on the stupidities of free market capitalism, which is at this very moment encouraging the spread of a deadly disease and spelling financial ruin for ordinary people.

The current crisis screams for the need to build a new world.

On Purity

Imagine a deadly virus (this one or far worse) in a truly free market society:

  • Many businesses (and perhaps schools, all private) choosing to stay open to make profits, spreading the contagion. No closure orders.
  • As other businesses choose to close, and workers everywhere refuse to work, paychecks and jobs vanish, with no government unemployment or stimulus checks to help. Aid from nonprofits and foundations, donations from individuals and businesses, is all a hopeless drop in the ocean relative to the need.
  • No bailouts or stimulus funds for businesses. Small and large companies alike collapsing — worsening unemployment. Monopolization accelerates.
  • Infected persons dying because they can’t afford testing, treatment, or healthcare in general (think the U.S.). Healthcare providers have to profit; there are no free lunches, and no government aid is on its way. Restricted access to healthcare for citizens, through low income or job (benefit) loss, means a faster spread of the virus.
  • Would a government devoted to a fully free market society issue stay-at-home orders? If not, more people out and about, a wider spread.

A truly free market would make any pandemic a thousand times worse. A higher body count, a worse economic disaster.

On Distribution

Grocery stores are currently reminding us how slowly the law of supply-and-demand can function.

On Redistribution

In theory, seizing all wealth from the rich and redistributing it to the masses may be the only way to prevent societal collapse during a pandemic (whether this one or a far deadlier one).

80% of Americans possess less than 15% of the wealth in this country, a drastic inequality. If a pandemic leads to mass closings of workplaces and the eradication of jobs, the State must step in to support the people and subsidize incomes. Without this, people lose access to food, water, housing, everything, and disaster ensues. However, in such a situation, government revenues will fall — less individual and corporate income to tax, sales tax revenue dwindling as people buy less, and so on. It is conceivable that the State, during a plague lasting years, would eventually lack the funds it needs. Solutions like borrowing from other nations might prove impossible if the pandemic is global and other nations are experiencing the same shortfalls. The only solution may be to tax the rich (and wealthy, non-essential corporations) out of existence, allowing the State to continue supporting people.

(This may only stave off disaster, however. There will be diminishing returns if taxes on essential companies and landlords are too low. State money would be given to people, who would give it to businesses, which would return only small portions to the State. The situation would likely then require appropriating most or even all of the revenue received from businesses still operating, and sending it back to the consumers.)

On Insanity

A pandemic causing people to lose their healthcare (via job or income loss)… Insane.

On Collapse

During the COVID-19 crisis, we’ve seen jokes about how prosperous corporations suddenly on the verge of bankruptcy really should have been more careful with their money — buying less avocado toast, for instance. Having funds set aside for emergencies, taking on less debt, etc. Then they wouldn’t have gone from prosperous to desperate after mere weeks of fewer customers.

But businesses keeping next to nothing in the bank is inherent to capitalism. This is not universally the case: some firms do see the wisdom of keeping cash reserves for hard times, and some large corporations grow rich enough, and monopolize markets enough, to stockpile cash. But it is a general trend of the system. In the frenzied competition of the market, keeping money stored away is generally a competitive disadvantage. Every extra dime must be poured back into the business to keep growing, keep gaining market share, keep displacing competitors. If you’re not injecting everything back into the business, you risk falling behind and being crushed by the firms that are.

“It can’t wait,” John Steinbeck wrote in The Grapes of Wrath. “It’ll die… When the monster stops growing, it dies. It can’t stay one size.”

The competition that pushes firms forward in ordinary times can be their downfall in times of economic crisis.

Bernie Will Win Iowa

Predicting the future isn’t something I make a habit of. It is a perilous activity, always involving a strong chance of being wrong and looking the fool. Yet sometimes, here and there, conditions unfold around us in a way that gives one enough confidence to hazard a prediction. I believe that Bernie Sanders will win Iowa today.

First, consider that Bernie is at the top of the polls. Polls aren’t always reliable predictors, and he’s neck-and-neck with an opponent in some of them, but it’s a good sign.

Second, Bernie raised the most money in Q4 of 2019 by far, a solid $10 million more than the second-place candidate, Pete Buttigieg. He has more individual donations at this stage than any candidate in American history, has raised the most overall in this campaign, and is among the top spenders in Iowa. (These analyses exclude billionaire self-funders Bloomberg and Steyer, who have little real support.) As with a rise in the polls, he has momentum like no one else.

Third, Bernie is the only candidate in this race who was campaigning in Iowa in 2016, which means more voter touches and repeat voter touches. This is Round 2 for him, an advantage — everyone else is in Round 1.

Next, don’t forget, Iowa in 2016 was nearly a tie between Bernie and Hillary Clinton. It was the closest result in the state’s caucus history; Hillary won just 0.3% more delegate equivalents. It’s probably safe to say Bernie is better known today, four years later — if he could tie then, he can win now.

Fifth, in Iowa in 2016, there were essentially two voting blocs: the Hillary Bloc and the Bernie Bloc. (There was a third but insignificant candidate.) These are the people who actually show up to caucus — what will they do now? I look at the Bernie Bloc as probably remaining mostly intact. He may lose some voters to Warren or others, as this field has more progressive options than last time, but I think his supporters’ fanatical passion and other voters’ interest in the most progressive candidate will mostly keep the Bloc together. The Hillary Bloc, of course, will be split between the many other candidates — leaving Bernie the victor. (Even if there is much higher turnout than in 2016, I expect the multitude of candidates to aid Bernie — and many of the new voters will go to him, especially if they’re young. An historic youth turnout is expected, and they mostly back Sanders.)

This last one is simply anecdotal. All candidates have devoted campaigners helping them. But I must say it. The best activists I know are on the case. They’ve put their Kansas City lives on hold and are in Iowa right now. The Kansas City Left has Bernie’s back, and I believe in them.

To victory, friends.

The Enduring Stupidity of the Electoral College

To any sensible person, the Electoral College is a severely flawed method of electing our president. Most importantly, it is a system in which the less popular candidate — the person with fewer votes — can win the White House. That absurdity alone would be enough to throw out the Electoral College and simply use a popular vote to determine the winner. Yet there is more.

It is a system where your vote becomes meaningless, giving no aid to your chosen candidate, if you’re in your state’s political minority; where small states have disproportionate power to determine the winner; where white voters have disproportionate decision-making power compared to voters of color; and where electors, who are supposed to represent the majority of voters in each state, can change their minds and vote for whomever they please. Not even its origins are pure, as slavery and the desire to keep voting power away from ordinary people were factors in its design.

Let’s consider these problems in detail. We’ll also look at the threadbare attempts to justify them.

The votes of the political minority become worthless, leading to a victor with fewer votes than the loser

When we vote in presidential elections, we’re not actually voting for the candidates. We’re voting on whether to award decision-making power to Democratic or Republican electors. In all, 538 people will cast their votes, and the candidate who receives a majority, at least 270 votes, will win. The electors are chosen by the political parties at state conventions, through committees, or by the presidential candidates, depending on the state. The electors could be anyone, but are usually involved with the parties or are close allies. In 2016, for instance, electors included Bill Clinton and Donald Trump, Jr. Since they are chosen for their loyalty, they typically (though not always, as we will see) vote for the party that chose them.

The central problem with this system is that almost all states award their electors on an all-or-nothing basis. (Only two states, Maine and Nebraska, have addressed this unfairness by dividing up electors based on their popular votes.) As a candidate, winning a state by even a single citizen’s vote grants you all of its electors.

Imagine you live in Missouri. Let’s say in 2020 you vote Republican, but the Democratic candidate wins the state; the majority of Missourians voted Blue. All of Missouri’s 10 electors are then awarded to the Democratic candidate. When that happens, your vote does absolutely nothing to help your chosen candidate win the White House. It has no value. Only the votes of the political majority in the state influence who wins, by securing electors. It’s as if you never voted at all — it might as well have been 100% of Missourians voting Blue. As a Republican, wouldn’t you rather have your vote matter as much as all the Democratic votes in Missouri? For instance, 1 million Republican votes pushing the Republican candidate toward victory alongside the, say, 1.5 million Democratic votes pushing the Democratic candidate forward? Versus zero electors for the Republican candidate and 10 electors for the Democrat?

In terms of real contribution to a candidate’s victory, the outcomes can be broken down, and compared to a popular vote, in this way:

State Electoral College victor: contribution (electors)
State Electoral College loser: no contribution (no electors)

State popular vote victor: contribution (votes)
State popular vote loser: contribution (votes)

Under a popular vote, however, your vote won’t become meaningless if you’re in the political minority in your state. It will offer an actual contribution to your favored candidate. It will be worth the same as the vote of someone in the political majority. The Electoral College simply does not award equal value to each vote (see more examples below), whereas the popular vote does, by allowing the votes of the political minority to influence the final outcome. That’s better for voters, as it gives votes equal power. It’s also better for candidates, as the loser in each state would actually get something for his or her efforts. He or she would keep the earned votes, moving forward in his or her popular vote count. Instead of getting zero electors — no progress at all.

But why, one may ask, does this really matter? When it comes to determining who wins a state and gets its electors, all votes are of equal value. The majority wins, earning the right to give all the electors to its chosen candidate. How exactly is this unfair?

It’s unfair because, when all the states operate under such a system, it can lead to the candidate with fewer votes winning the White House. It’s a winner-take-all distribution of electors; each state’s political-minority votes are ignored — but those votes can add up. 66 million Americans may choose the politician you support, but the other candidate may win with just 63 million votes. That’s what happened in 2016. It also happened in 2000, as well as in 1876 and 1888. It simply isn’t fair or just for a candidate with fewer votes to win. It is mathematically possible, in fact, to win just 21.8% of the popular vote and still take the presidency. While very unlikely, it is possible. That would mean, for example, a winner with 28 million votes and a loser with 101 million! This is absurd and unfair on its face. The candidate with the most votes should be the victor, as is the case with every other political race in the United States, and as is standard practice among the majority of the world’s democracies.

The lack of fairness and unequal value of citizen votes go deeper, however.

Small states and white power

Under the Electoral College, your vote is worth less in big states. For instance, Texas, with 28.7 million people and 38 electors, has one elector for every 755,000 people. But Wyoming, with 578,000 people and 3 electors, has one elector for every 193,000 people. In other words, each Wyoming voter has a bigger influence over who wins the presidency than each Texas voter. The smallest states, with about 4% of the U.S. population, hold about 8% of the electors. Why not 4%, to keep votes equal? (For those who think all this was the intent of the Founders, to give more power to smaller states, we’ll address that later on.)
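The arithmetic above is easy to verify. Here is a quick back-of-the-envelope sketch in Python, using the approximate population and elector figures cited in this paragraph (illustrative numbers, not official census data):

```python
# Back-of-the-envelope check of per-voter elector weight,
# using the approximate figures cited above.
states = {
    "Texas": {"population": 28_700_000, "electors": 38},
    "Wyoming": {"population": 578_000, "electors": 3},
}

for name, s in states.items():
    per_elector = s["population"] / s["electors"]
    print(f"{name}: one elector per {per_elector:,.0f} people")

# Relative weight of one vote: electors per capita, Wyoming vs. Texas
tx, wy = states["Texas"], states["Wyoming"]
ratio = (wy["electors"] / wy["population"]) / (tx["electors"] / tx["population"])
print(f"A Wyoming vote carries about {ratio:.1f}x the weight of a Texas vote")
```

Run as written, this prints roughly one elector per 755,000 people for Texas and one per 193,000 for Wyoming, a per-vote weight ratio of about 3.9 to 1 in Wyoming's favor.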

To make things even, Texas would need many more electors. As would other big states. You have to look at changing population data and frequently adjust electors, as the government is supposed to do based on the census and House representation — it just doesn’t do it very well. It would be better to do away with the Electoral College entirely, because under a popular vote the vote of someone from Wyoming would be precisely equal to the vote of a Texan. Each would be one vote out of the 130 million or so cast. No adjustments needed.

It also just so happens that less populous states tend to be very white, and more populous states more diverse, meaning disproportionate white decision-making power overall.

Relatedly, it’s important to note that the political minority in each state, which will become inconsequential to the presidential race, is sometimes dominated by racial minorities, or at least most voters of color will belong to it. As Bob Wing writes, because “in almost every election white Republicans out-vote [blacks, most all Democrats] in every Southern state and every border state except Maryland,” the “Electoral College result was the same as if African Americans in the South had not voted at all.”

Faithless electors

After state residents vote for electors, the electors can essentially vote for whomever they want, in many states at least. “There are 32 states (plus the District of Columbia) that require electors to vote for a pledged candidate. Most of those states (19 plus DC) nonetheless do not provide for any penalty or any mechanism to prevent the deviant vote from counting as cast. Four states provide a penalty of some sort for a deviant vote, and 11 states provide for the vote to be canceled and the elector replaced…”

Now, electors are chosen specifically because of their loyalty, and “faithless electors” are extremely rare, but that doesn’t mean they will always vote for the candidate you elected them to vote for. There have been 85 electors in U.S. history who abstained or changed their vote on a whim, sometimes for racist reasons, sometimes by accident. Even more changed their votes after a candidate died — perhaps the voters would have liked to select another option themselves. Even if rare, all this should not be possible or legal. It is yet another way the Electoral College has built-in unfairness — imagine the will of a state’s political majority being ignored.

* * *

Won’t a popular vote give too much power to big states and cities?

Let’s turn now to the arguments against a popular vote, usually heard from conservatives. A common one is that big states, or big cities, will “have too much power.” Rural areas and less populous states and towns will supposedly have less.

This misunderstands power. States don’t vote. Cities don’t vote. People do. If we’re speaking solely about power, about influence, where you live does not matter. The vote of someone in Eudora, Kansas, is worth the same as someone in New York, New York.

This argument is typically posited by those who think that because some big, populous states like California and New York are liberal, this will mean liberal rule. (Conservative Texas, the second-most populous state, and sometimes-conservative swing states like Florida [third-most populous] and Pennsylvania [fifth-most populous] are ignored.) Likewise, because a majority of Americans today live in cities, and cities tend to be more liberal than small towns, this will result in the same. The concern for rural America and small states is really a concern for Republican power.

But obviously, in a direct election each person’s vote is of equal weight and importance, regardless of where you live. 63% of Americans live in cities, so it is true that most voters will be living and voting in cities, but it cannot be said that the small-town voter has a weaker voice than the city dweller. Their votes have identical sway over who will be president. In the same way, a voter in a populous coastal state has no more influence than one in Arkansas.

No conservative looks with dismay at the direct election of his Democratic governor or congresswoman and says, “She only won because the small towns don’t have a voice. We have to find a way to diminish the power of the big cities!” No one complains that X area has too many people and too many liberals and argues some system should fix this. No one cries, “Tyranny of the majority! Mob rule!” They say, “She got the most votes, seems fair.” Why? Because one understands that the vote of the rural citizen is worth the same as the vote of an urban citizen, but if there happens to be more people living in cities in your state, or if there are more liberals in your state, so be it. That’s the freedom to live where you wish, believe what you wish, and have a vote worth the same as everyone else’s.

Think about the popular vote in past elections. About half of Americans vote Republican, about half vote Democrat. One candidate gets a few hundred thousand or a few million more. It would be exactly the same if the popular vote determined the winner rather than the Electoral College — where you live is irrelevant. What matters is the final vote tally.

It’s not enough to simply complain that the United States is too liberal and conclude that the Electoral College must therefore be preserved. That’s really what this argument boils down to, and it’s not an argument at all. Unfair structures can’t be justified because they serve one political party. Whoever can win the most American votes should be president, no matter what party they come from.

But won’t candidates only pander to big states and cities?

This is a different question, and it has merit. It is true that where candidates campaign will change with the implementation of a popular vote. Conservatives warn that candidates will spend most of their time in the big cities and big states, and ignore rural places. This is likely true, as candidates (of both parties) will want to reach as many voters as possible in the time they have to garner support.

Yet this carries no weight as an argument against a popular vote, because the Electoral College has a very similar problem. Candidates focus their attention on swing states.

There’s a reason Democrats don’t typically campaign very hard in conservative Texas and Republicans don’t campaign hard in liberal California. Instead, they campaign in states that are more evenly divided ideologically, states that sometimes go Blue and sometimes go Red. They focus also on swing states with a decent number of electors. The majority of campaign events are in just six states. Unless you live in one of these places, like Ohio, Florida, or Pennsylvania, your vote isn’t as vital to victory and your state won’t get as much pandering. The voters in swing states are vastly more important, their votes much more valuable than elsewhere.

How candidates focusing on a handful of swing states might be so much better than candidates focusing on more populous areas is never explained by Electoral College supporters. It seems like a fair trade, but with a popular vote we also get the candidate with the most support always winning, votes of equal worth, and no higher-ups to ignore the will of the people.

However, with a significant number of Americans still living outside big cities, attention will likely still be paid to rural voters — especially, one might assume, by the Republican candidate. Nearly 40% of the nation living in small towns and small states isn’t something wisely ignored. Wherever the parties shift most of their attention, there is every reason to think Blue candidates will want to solidify their win by courting Blue voters in small towns and states, and Red candidates will want to ensure theirs by courting Red voters in big cities and states. Even if the rural voting bloc didn’t matter and couldn’t sway the election (it would and could), one might ask how a handful of big states and cities alone determining the outcome of the election is so much worse than a few swing states doing the same in the Electoral College system.

Likewise, the fear that a president, plotting reelection, will better serve the interests of big states and cities seems about as reasonable as fear that he or she would better serve the interests of the swing states today. One is hardly riskier than the other.

But didn’t the Founders see good reason for the Electoral College?

First, it’s important to note that invoking the Founding Fathers doesn’t automatically justify flawed governmental systems. The Founders were not perfect, and many of the policies and institutions they decreed in the Constitution are now gone.

Even before the Constitution, the Founders’ Articles of Confederation were scrapped after just seven years. Later, the Twelfth Amendment got rid of a system where the losing presidential candidate automatically became vice president — a reform of the Electoral College. Our senators were elected by the state legislatures, not we the people, until 1913 (Amendment 17 overturned clauses from Article 1, Section 3 of the Constitution). Only in 1856 did the last state, North Carolina, do away with property requirements to vote for members of the House of Representatives, allowing the poor to participate. The Three-Fifths Compromise (the Enumeration Clause of the Constitution), which valued slaves less than full people for political representation purposes, is gone, and today people of color, women, and people without property can vote thanks to various amendments. There were no term limits for the president until 1951 (Amendment 22) — apparently an executive without term limits didn’t give the Founders nightmares of tyranny.

The Founders were very concerned about keeping political power away from ordinary people, who might take away their riches and privileges. They wanted the wealthy few, like themselves, to make the decisions. See How the Founding Fathers Protected Their Own Wealth and Power.

The Electoral College, at its heart, was a compromise between Congress selecting the president and the citizenry doing so. The people would choose the people to choose the president. Alexander Hamilton wrote that the “sense of the people should operate in the choice of the person to whom so important a trust was to be confided.” He thought “a small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.”

Yet the Founders did not anticipate that states would pass winner-take-all elector policies, and some wanted it abolished. The Constitution and its writers did not establish such a mechanism. States did, and only after the Constitution, which established the Electoral College, was written. In 1789, only three states had such laws, according to the Institute for Research on Presidential Elections. It wasn’t until 1836 that every state (save one, which held out until after the Civil War) adopted a winner-take-all law; they sought more attention from candidates by offering all electors to the victor, they wanted their chosen sons to win more electors, and so forth. Before (and alongside) the winner-take-all laws, states were divided into districts and the people in each district would elect an elector (meaning a state’s electors could be divided up among candidates). Alternatively, state legislatures would choose the electors, meaning citizens did not vote for the president in any way, even indirectly! James Madison wrote that “the district mode was mostly, if not exclusively in view when the Constitution was framed and adopted; & was exchanged for the general ticket [winner-take-all] & the legislative election” later on. He suggested a Constitutional amendment (“The election of Presidential Electors by districts, is an amendment very proper to be brought forward…”) and Hamilton drafted it.

Still, among Founders and states, it was an anti-democratic era. Some Americans prefer more democratic systems, and don’t cling to tradition — especially tradition as awful and unfair as the Electoral College — for its own sake. Some want positive changes to the way government functions and broadened democratic participation, improving upon what the Founders started, as we have so many times before.

Now, it’s often posited that the Founding Fathers established the Electoral College to make sure small states had more power to determine who won the White House. As we saw above, votes in smaller states are worth more than in big ones.

Even if the argument that “we need the Electoral College so small states can actually help choose the president” made sense in a bygone era where people viewed themselves not as Americans but as Virginians or New Yorkers, members of an alliance called the United States, it makes no sense today. People now see themselves as simply Americans — as American citizens together choosing an American president. Why should where you live determine the power of your vote? Why not simply have everyone’s vote be equal?

More significantly, it cannot be said that strengthening smaller states was a serious concern to the Founders at the Constitutional Convention. They seemed to accept that smaller states would simply have fewer voters and thus less influence. Legal historian Paul Finkelman writes that

in all the debates over the executive at the Constitutional Convention, this issue [of giving more power to small states] never came up. Indeed, the opposite argument received more attention. At one point the Convention considered allowing the state governors to choose the president but backed away from this in part because it would allow the small states to choose one of their own.

In other words, they weren’t looking out for the little guy. Political scientist George C. Edwards III calls this whole idea a “myth,” stressing: “Remember what the country looked like in 1787: The important division was between states that relied on slavery and those that didn’t, not between large and small states.”

Slavery’s influence

The Electoral College is also an echo of white supremacy and slavery.

As the Constitution was formed in the late 1780s, Southern politicians and slave-owners at the Convention had a problem: Northerners were going to get more seats in the House of Representatives (which were to be determined by population) if blacks weren’t counted as people. Southern states had sizable populations, but large portions were disenfranchised slaves and freemen (South Carolina, for instance, was nearly 50% black).

This prompted slave-owners, most of whom considered blacks by nature somewhere between animals and whites, to push for slaves to be counted as fully human for political purposes. They needed blacks for greater representative power for Southern states. Northern states, also seeking an advantaged position, opposed counting slaves as people. This odd reversal brought about the Three-Fifths Compromise most of us know, which determined an African American would be worth three-fifths of a person.

The Electoral College was largely a solution to the same problem. True, as we saw, it served to keep power out of the hands of ordinary people and in the hands of the elites, but race and slavery unquestionably influenced its inception. As the Electoral College Primer put it, Southerners feared “the loss in relative influence of the South because of its large nonvoting slave population.” They were afraid the direct election of the president would put them at a numerical disadvantage. To put it bluntly, Southerners were upset their states didn’t have more white people. A popular vote had to be avoided.

For example, Hugh Williamson of North Carolina remarked at the Convention, during debate on a popular election of the president: “The people will be sure to vote for some man in their own State, and the largest State will be sure to succede [sic]. This will not be Virga. however. Her slaves will have no suffrage.” Williamson saw that states with high populations had an advantage in choosing the president. But a great number of people in Virginia were slaves. Would this mean that Virginia and other slave states didn’t have the numbers of whites to affect the presidential election as much as the large Northern states?

The writer of the Constitution, slave-owner and future American president James Madison, thought so. He said that

There was one difficulty however of a serious nature attending an immediate choice by the people. The right of suffrage was much more diffusive in the Northern than the Southern States; and the latter could have no influence in the election on the score of the Negroes. The substitution of electors obviated this difficulty…

The question for Southerners was: How could one make the total population count for something, even though much of the population couldn’t vote? How could black bodies be used to increase Southern political power? Counting slaves helped put more Southerners in the House of Representatives, and now counting them in the apportionment of electors would help put more Southerners in the White House.

Thus, Southerners pushed for the Electoral College. The number of electors would be based on how many members of Congress each state possessed — which, recall, was affected by counting a black American as three-fifths of a person. Each state would have one elector per representative in the House, plus two for the state’s two senators (today we have 435 + 100 + 3 for D.C. = 538). In this way, the number of electors was still based on population (not the whole population, though, as blacks were not counted as full persons), even though a massive part of the American population in 1787 could not vote. The greater a state’s population, the more House reps it had, and thus the more electors it had. Southern electoral power was secure.

This worked out pretty well for the racists. “For 32 of the Constitution’s first 36 years, a white slaveholding Virginian occupied the presidency,” notes Akhil Reed Amar. The advantage didn’t go unnoticed. Massachusetts congressman Samuel Thatcher complained in 1803, “The representation of slaves adds thirteen members to this House in the present Congress, and eighteen Electors of President and Vice President at the next election.”

Tyrants and imbeciles

At times, it’s suggested that the electors serve an important function: if the people select a dangerous or unqualified candidate — like an authoritarian or a fool — to be the party nominee, the electors can pick someone else and save the nation. Hamilton said, “The process of election affords a moral certainty, that the office of President will never fall to the lot of any man who is not in an eminent degree endowed with the requisite qualifications.”

Obviously, looking at Donald Trump, the Electoral College is just as likely to put an immoral doofus in the White House as to keep one out. Trump may not fit that description to you, but some day a candidate may come along who does. And since the electors are chosen for their loyalty, they are unlikely to stop such a candidate, even if they have the power to be faithless. We might as well simply let the people decide.

It is a strange thing indeed that some people insist a popular vote will lead to dictatorship, ignoring the majority of the world’s democracies that directly elect their executive officer. They have not plunged into totalitarianism. Popular vote simply doesn’t get rid of checks and balances, co-equal branches, a constitution, the rule of law, and other aspects of free societies. These things are not incompatible.

France has had direct elections since 1965 (de Gaulle). Finland since 1994 (Ahtisaari). Portugal since 1918 (Pais). Poland since 1990 (Wałęsa). Why aren’t these nations run by despots by now? Why do even conservative institutes rank nations like Ireland, Finland, and Austria higher up on a “Human Freedom Index” than the United States? How is this possible, if direct elections of the executive lead to tyranny?

There are many factors that cause dictatorship and ruin, but simply giving the White House to whoever gets the most votes is not necessarily one of them.

Modern motives

We close by stating the obvious. There remains strong support for the Electoral College among conservatives because it has recently aided Republican candidates like Bush (2000) and Trump (2016). If the GOP lost presidential elections due to the Electoral College after winning the popular vote, as the other party has, it would perhaps see the system’s unfair nature.

The popular vote, in an increasingly diverse, liberal country, doesn’t serve conservative interests. Republicans have won the popular vote just once in the elections from 1992 onward. Conservatives are worried that if the Electoral College vanishes and each citizen has a vote of equal power, their days are numbered. Better to preserve an outdated, anti-democratic system that benefits you than reform your platform and policies to change people’s minds about you and strengthen your support. True, the popular vote may serve Democratic interests. Fairness serves Democratic interests. But, unlike unfairness, which Republicans seek to preserve, fairness is what’s right. Giving the candidate with the most votes the presidency is what’s right.

An Absurd, Fragile President Has Revealed an Absurd, Fragile American System

The FBI investigation into Donald Trump, one of the most ludicrous and deplorable men to ever sit in the Oval Office, was a valuable lesson in just how precariously justice balances on the edge of a knife in the United States. The ease with which any president could obstruct or eliminate accountability for his or her misdeeds should frighten all persons regardless of political ideology.

Let’s consider the methods of the madness, keeping in mind that whether or not a specific president like Trump is innocent of crimes or misconduct, it’s smart to have effective mechanisms in place to bring guilty executives to justice later. The stupidity of the system could be exploited by a president of any political party. This must be rectified.

A president can fire those investigating him — and replace them with allies who could shut everything down

The fact the above statement can be written truthfully about an advanced democracy, as opposed to some totalitarian regime, is insane. Trump of course did fire those looking into his actions, and replaced them with supporters.

The FBI (not the Democrats) launched an investigation into Trump and his associates concerning possible collusion with Russia during the 2016 election and obstruction of justice, obviously justified given his and their suspicious behavior, some of which was connected to actual criminal activity, at least among Trump’s associates who are now felons. Trump fired James Comey, the FBI director, who was overseeing the investigation. Both Trump and his attorney Rudy Giuliani publicly indicated the firing was motivated by the Russia investigation; Comey testified Trump asked him to end the FBI’s look into Trump ally Michael Flynn, though not the overall Russia inquiry.

The power to remove the FBI director could be used to slow down an investigation (or shut it down, if the acting FBI director is loyal to the president, which Andrew McCabe was not), but more importantly a president can then nominate a new FBI director, perhaps someone more loyal, meaning corrupt. (Christopher Wray, Trump’s pick, worked for a law firm that did business with Trump’s business trust, but does not seem a selected devotee like the individuals you will see below, perhaps because by the time of his installation the investigation was in the hands of Special Counsel Robert Mueller.) The Senate must confirm the nomination, but that isn’t entirely reassuring. The majority party could push through a loyalist, to the dismay of the minority party, and that’s it. Though this would be a rarity, as FBI directors are typically overwhelmingly confirmed, it’s possible. A new director could then end the inquiry.

Further, the president can fire the attorney general, the FBI director’s boss. The head of the Justice Department, this person has ultimate power over investigations into the president — at least until he or she is removed by said president. Trump made clear he was upset with Attorney General Jeff Sessions for recusing himself from overseeing the Russia inquiry because Sessions could have discontinued it. It was reported Trump even asked Sessions to reverse this decision! Sessions recused himself less than a month after taking office, a couple months before Comey was fired. For less than a month, Sessions could have ended it all.

Deputy Attorney General Rod Rosenstein, luckily no Trump lackey, was in charge after Sessions stepped away from the matter. It was Rosenstein who appointed Robert Mueller special counsel and had him take over the FBI investigation, after the nation was so alarmed by Comey’s dismissal. Rosenstein had authority over Mueller and the case (dodging a bullet when Trump tried to order Mueller’s firing but was rebuked by his White House lawyer; Trump could have rescinded the Justice Department regulations that said only the A.G. could fire the special counsel, with an explosive court battle over constitutionality surely following) until Trump fired Sessions and installed loyalist Matt Whitaker as Acting Attorney General. Whitaker is a man who

defended Donald Trump Jr.’s decision to meet with a Russian operative promising dirt on Hillary Clinton. He opposed the appointment of a special counsel to investigate Russian election interference (“Hollow calls for independent prosecutors are just craven attempts to score cheap political points and serve the public in no measurable way.”) Whitaker has called on Rod Rosenstein to curb Mueller’s investigation, and specifically declared Trump’s finances (which include dealings with Russia) off-limits. He has urged Trump’s lawyers not to cooperate with Mueller’s “lynch mob.”

And he has publicly mused that a way to curb Mueller’s power might be to deprive him of resources. “I could see a scenario,” he said on CNN last year, “where Jeff Sessions is replaced, it would [be a] recess appointment and that attorney general doesn’t fire Bob Mueller but he just reduces his budget to so low that his investigation grinds to almost a halt.”

Whitaker required no confirmation from the Senate. Like an official attorney general, he could have ended the inquiry and fired Robert Mueller if he saw “good cause” to do so, or effectively crippled the investigation by limiting its resources or scope. That did not occur, but it’s not hard to imagine Whitaker parroting Trump’s wild accusations of Mueller’s conflicts of interest, or whipping up some bullshit of his own to justify axing the special counsel. The same can be said of Bill Barr, who replaced Whitaker. Barr, who did need Senate confirmation, was also a Trump ally, severely endangering the rule of law:

In the Spring of 2017, Barr penned an op-ed supporting the President’s firing Comey. “Comey’s removal simply has no relevance to the integrity of the Russian investigation as it moves ahead,” he wrote. In June 2017, Barr told The Hill that the obstruction investigation was “asinine” and warned that Mueller risked “taking on the look of an entirely political operation to overthrow the president.” That same month, Barr met with Trump about becoming the president’s personal defense lawyer for the Mueller investigation, before turning down the overture for that job.

In late 2017, Barr wrote to the New York Times supporting the President’s call for further investigations of his past political opponent, Hillary Clinton. “I have long believed that the predicate for investigating the uranium deal, as well as the foundation, is far stronger than any basis for investigating so-called ‘collusion,’” he wrote to the New York Times’ Peter Baker, suggesting that the Uranium One conspiracy theory (which had by that time been repeatedly debunked) had more grounding than the Mueller investigation (which had not). Before Trump nominated him to be attorney general, Barr also notoriously wrote an unsolicited 19-page advisory memo to Rod Rosenstein criticizing the obstruction component of Mueller’s investigation as “fatally misconceived.” The memo’s criticisms proceeded from Barr’s long-held and extreme, absolutist view of executive power, and the memo’s reasoning has been skewered by an ideologically diverse group of legal observers.

What happy circumstances, Trump being able to shuffle the investigation into his own actions to his first hand-picked attorney general (confirmation to recusal: February 8 to March 2, 2017), an acting FBI director (even if not an ally, the act itself is disruptive), a hand-picked acting attorney general, and a second hand-picked attorney general. Imagine police detectives are investigating a suspect but he’s their boss’ boss. That’s a rare advantage.

The nation held its breath with each change, and upon reflection it seems almost miraculous Mueller’s investigation concluded at all. Some may see this as a testament to the strength of the system, but it all could have easily gone the other way. There were no guarantees. What if Sessions hadn’t recused himself? What if he’d shut down the investigation? What if Comey, McCabe, or Rosenstein had been friendlier to Trump? What if Whitaker or Barr had blown the whole thing up? Yes, political battles, court battles, to continue the inquiry would have raged — but there are no guarantees they would have succeeded.

Tradition, political and public pressure…these mechanisms aren’t worthless, but they hardly seem as valuable as structural, legal changes to save us from having to simply hope the pursuit of justice doesn’t collapse at the command of the accused or his or her political allies. We can strip the president of any and all power over the Justice Department workers investigating him or her, temporarily placing A.G.s under congressional authority, and eradicate similar conflicts of interest.

The Department of Justice can keep its findings secret

Current affairs highlighted this problem as well. When Mueller submitted his finished report to Bill Barr, the attorney general was only legally required to submit a summary of Mueller’s findings to Congress. He did not need to provide the full report or full details to the House and Senate, much less to the public. He didn’t even need to release the summary to the public!

This is absurd, obviously setting up the possibility that a puppet attorney general might not tell the whole story in the summary to protect the president. Members of Mueller’s team are currently saying to the press that Barr’s four-page summary is too rosy, leaving out damaging information about Trump. The summary says Mueller found no collusion (at least, no illegal conspiring or coordinating), and that Barr, Rosenstein, and other department officials agreed there wasn’t enough evidence of obstruction of justice. But one shouldn’t be forced to give a Trump ally like Barr the benefit of the doubt; one should be able to see the evidence to determine if he faithfully expressed Mueller’s findings and hear detailed arguments as to how he and others reached a verdict on obstruction. Barr is promising a redacted version of the report will be available this month. He did not have to do this — we again simply had to hope Barr would give us more. Just as we must hope he can be pressured into giving Congress the full, unedited report. This must instead be required by law, and the public is at least owed a redacted version. Hope is unacceptable. It would also be wise to find a more independent, bipartisan or nonpartisan way to rule on obstruction if the special counsel declines to do so — perhaps done in a court of law, rather than a Trump lackey’s office.

The way of doing things now is simply a mess. What if an A.G. is untruthful in his summary? Or wants only Congress to see it, not the public? What if she declines to release a redacted version? What if the full report is never seen beyond the investigators and their Justice Department superiors, appointed supporters of the president being investigated? What if a ruling on obstruction is politically motivated?

We don’t know if the president can be subpoenaed to testify

While the Supreme Court has established that the president can be subpoenaed, or forced, to turn over materials (such as Nixon and his secret White House recordings), it hasn’t specifically ruled on whether the president must testify before Congress, a special counsel, or a grand jury if called to do so. While the president, like any other citizen, has Fifth Amendment rights (he can’t be “compelled in any criminal case to be a witness against himself,” risking self-incrimination), we do need to know if the executive can be called as a witness, and under what circumstances. Mueller chose not to subpoena Trump’s testimony because it would lead to a long legal battle. That’s what unanswered questions and constitutional crises produce.

We have yet to figure out if a sitting president can be indicted

If the executive commits a crime, can he or she be charged for it while in office? Can the president go to trial, be prosecuted, sentenced, imprisoned? We simply do not know. The Office of Legal Counsel at the Justice Department says no, but there is fierce debate over whether it’s constitutional or not, and the Supreme Court has never ruled on the matter.

There’s been much worry lately, due to Trump’s many legal perils, over this possible “constitutional crisis” arising, a crisis of our own design, having delayed creating laws for this sort of thing for centuries. For now, the trend is to follow Justice Department policy, rather helpful for a president who’s actually committed a felony. The president can avoid prosecution and punishment until leaving office, or even avoid it entirely if the statute of limitations runs out before the president’s term is over!

“Don’t fret, Congress can impeach a president who seems to have committed a crime. Out of office, a trial can commence.” That is of little comfort, given the high bar for impeachment. Bitter partisanship could easily prevent the impeachment of a president, no matter how obvious or vile the misdeeds. It’s not a sure thing.

The country needs to rule on this issue, at the least eliminating statutes of limitations for presidents, at most allowing criminal proceedings to occur while the president is in office.

We don’t know if a president can self-pardon

Trump, like the blustering authoritarian he is, declared he had the “absolute right” to pardon himself. But the U.S. has not figured this out either. It’s also a matter of intense debate, without constitutional clarity or judicial precedent. A sensible society might make it clear that the executive is not above the law — he or she cannot simply commit crimes with impunity, cannot self-pardon. Instead, we must wait for a crisis to force us to decide on this issue. And, it should be emphasized, the impeachment of a president who pardoned him- or herself would not be satisfactory. Crimes warrant consequences beyond “You don’t get to be president anymore.”

Subpoenas can be optional

If you declined to show up in court after being issued a subpoena, you would be held in contempt. You’d be fined or jailed because you broke the law. It’s supposed to work a similar way when congressional committees issue subpoenas, instructing people to come testify or produce evidence. It is illegal to ignore a subpoena from Congress. Yet Trump has ordered allies like Carl Kline and Don McGahn to do just that, vowing to “fight all the subpoenas.” Leading Republican legislators like Lindsey Graham and Jim Jordan encouraged Donald Trump Jr. to ignore his subpoena. Barr waved away his subpoena to give Congress the full Mueller report. Various other officials have ignored their summonses as well.

When an individual does this, the congressional committee and then the whole house of Congress (either the Senate or the House of Representatives, not both) must vote on holding the individual in contempt.

Which means that the seriousness of a subpoena depends upon the majority party in a house of Congress. If it’s not in the interest of, say, a Republican Senate to hold a Republican official in contempt after he refused to answer a subpoena in an investigation (maybe of a Republican president), then that’s that. There is no consequence for breaking the law and ignoring the order to appear or provide evidence. As long as you’re on the side of the chamber majority, you can throw the summons from the committee in the trash. (This isn’t the case with Trump, as the Democrats control the House and are thus able to hold someone in contempt, but the utter disregard for subpoenas Trump and others showed raised the question of what happens next, revealing this absurd system to this writer and others. If a chamber does hold someone in contempt, there are a few options going forward to jail or fine said person, one of which has a similar debilitating partisan wrench.) Perhaps we should construct a system where breaking the law has consequences no matter who holds majority power, by giving committees more control over contempt findings and enforcement or by handing things over to the judicial system earlier, to prevent such behavior and allow investigations to actually operate.

Headlines From the United States of Atheism

Alternatively titled: If You Get Why Government Favoring and Promoting Atheism or Another Faith is Unacceptable, You Get Why the Same is True For Christianity. Lest the following satire be misunderstood, let’s state this plainly. All people have the right to believe what they like, and promote it — unless you’re on the clock as a public employee or trying to use public institutions. Government needs to be neutral on matters of faith, not favoring or promoting one or any religion, nor atheism. To be neutral isn’t to back or advocate for disbelief, unlike events described in these fictional, hopefully thought-provoking, headlines. It’s simply to acknowledge that this is a country and government for all people, not just those of the Judeo-Christian tradition. Not all students, customers, or constituents are Christians or even people of faith. Freedom from religion is just as important as freedom of religion, which is why church and State are kept separate. If you wouldn’t want government used to push atheism, Islam, and so forth in any way, whether through its employees, institutions, laws, or creations, then you get it.

 

Florida Bill Requires Public Schools to Offer Elective on Atheist Classics. Why No Required Electives For Holy Books?

 

“God is Dead” to Appear on U.S. Currency Next Year

 

Christian Student Refuses to Stand For Pledge, Objecting to “One Nation, Godless” Line

 

Supreme Court Yet to Rule on Whether Refusing Service to Christians Based on Religious Belief is Discrimination

 

Why Does the Law Still Say You Can’t Be Fired For Being Gay, But Can For Being a Person of Faith?

 

Coach Lectures Players About How God is Fictional and Can’t Help Them Before Every Game

 

Little-known Last Verse of National Anthem Reads: “And Faith is a Bust” 

 

Believers Bewildered as to Why Students Are Studying Science and Evolution in Religion Class  

 

Sam Harris One of Six Selected to Lead Traditional Refutations of God’s Existence at Presidential Inauguration 

 

Lawmakers Want “The God Delusion” as This State’s Official Book 

 

Christians Fight to be Allowed to Give Invocations at the Legislature Too; Many Americans Wonder Why Invocations Are Necessary

 

Bonus: Headlines From the United States of Allah

Oklahoma Legislature Votes to Install Qur’an Monument on Capitol Grounds

 

States Are Requiring or Allowing “Allahu Akbar” to be Displayed in Public Schools

 

Muslims Object to Removal of Big Crescent and Star From Public Park

 

With U.S. Supreme Court Oblivious, Alabama Ditches Rule That Death Row Inmates Can Only Have Imam With Them at the End

The Odd Language of the Left

Language fascinates me. This applies to the study of foreign languages and the pursuit of a proper, ideal form of one’s native language (such as the preservation of the Oxford comma to stave off chaos and confusion), but most importantly to how language is used for political and social issues — what words are chosen, what words are ethical (and in what contexts), how the definitions of words and concepts change over time, and so on.

These questions are important, because words matter. They can harm others, meaning they can be, at times, immoral to use. Individuals and groups using different definitions can impede meaningful conversation and knowledge or perspective sharing, to such a degree that, in cases where no definition is really any more moral than another, long arguments over them probably aren’t worth it.

Despite incessant right-wing whining about political correctness, the Left is doing an important service in changing our cultural language. It’s driven by thinking about and caring for other people, seeking equality and inclusion in all things, which could theoretically be embraced by anyone, even those on the other side of the political spectrum who don’t agree with this or that liberal policy, or even understand or know people who are different. “Immigrants” is more humanizing than “aliens” or “illegals,” “Latinx” does away with the patriarchal, unnecessary male demarcation of groups containing both men and women (and invites in non-binary persons), and “the trans men” or simply “the men” is far more respectful than “the transgenders,” in the same way that there are much better ways of saying “the blacks.” There are of course more awful ways of talking about others, virulent hate speech and slurs; most people agree these things are unacceptable. As far as these less insidious word choices go, replacement is, in my view, right and understandable. Why not? Kind people seek ways to show more kindness, despite tradition.

What I find curious is when the Left begins questioning the “existence” of certain concepts. Finding better phrasing or definitions is often important and noble, but for years I’ve found the claims that such-and-such “does not exist” to be somewhat strange.

Take, for instance, “The friendzone does not exist.” This is the title of articles on Buzzfeed, Thought Catalog, and so forth, which the reader should check out to fully appreciate the perspective (of those rather unlike this writer, an admittedly privileged and largely unaffected person). It’s easy to see why one would want to wipe friendzone off the face of the Earth, as it’s often uttered by petulant straight men whining and enraged over being turned down. The rage, as noted in the articles, is the mark of entitled men feeling they are owed something (attention, a date, sex), wanting to make women feel guilty, believing they are victims, and other aspects of toxic masculinity. Such attitudes and anger lead to everything from the most sickening diatribes to the rape and murder of women. It’s a big part of why the feminist movement is important today.

Yet friendzone is a term used by others as well — it’s surely mostly used by men, but it’s impossible to know for certain if it’s disproportionately used by men of the toxic sort. If you’ll pardon anecdotal evidence, we’ve probably all heard it used by harmless people with some frequency. We’d need some serious research to find out. In any case, many human beings will at some point have someone say to them: “I don’t feel that way about you, let’s just be friends.” A silly term at some point arose (perhaps in Friends, “The One With the Blackout,” 1994) to describe the experience of rejection. What does it mean, then, to say “The friendzone does not exist”? It’s basically to say an experience doesn’t exist. That experience can be handled very differently, from “OK, I understand” to homicide, but it’s something most people go through, so some kind of word for it was probably inevitable. If it wasn’t friendzone it likely would have been something else, and one suspects that if we eradicate this particular term a new one might eventually pop up in its place (justfriended?). It’s all a bit like saying “Cloud Nine does not exist” or “Cuffing season does not exist.” Well, those are expressions that describe real-world experiences. As long as a human experience persists, so will the concept and some kind of label or idiom, often more than one.

The relevant question is whether the use of the term friendzone encourages and perpetuates toxic masculinity. Is it contributing to male rage? Does one more term for rejection, alongside many others (shot down, for instance), have that power? Or is it a harmless expression, at times wielded by awful men like anyone else? That’s a difficult question to answer. (The only earnest way would be through scientific study, the basis of many left-wing views.) While I could be wrong, I lean towards the latter. I don’t suppose it’s any more harmful or unkind than shot down and so forth, and see such terms as inevitable, meaning what’s really important is changing the reactions to certain life events. My guess is the word is experiencing a bit of guilt by association — terrible men use it while expressing their childish sentiments about how they deserve this or that, about how women somehow hate nice guys, and so on, and thus the term takes on an ugly connotation to some people. Other terms are used by them less and don’t have that connotation. Readers will disagree on how strong the connotation is, and how harmful the term is, but the main point was simply to ponder how a word for a common experience should be said to “not exist” — it’s hard to discern whether such phrasing intrudes more on one’s knowledge of reality or English. Perhaps both equally. It’s rather different from saying, “This word is problematic, here’s a better one.” I could be misinterpreting all this, and every instance of denying existence is supposed to mean the word simply shouldn’t be used, leaving space for other, better ways to describe the concept, but that just goes back to interest in how language is used in social issues — why say one but not the other, clearer, option? Anyway, read the articles and you’ll likely agree the very existence of concepts is being questioned.
Finally, it’s interesting to consider why the Left ended up saying X doesn’t exist rather than, say, X is real and your toxic ass had better get used to it. What if, like words of the past, it had been adopted by those it was used against to strip it of its power and turn the tables? What causes that to happen to some words but not others? Is it because this one describes an event, not a person? Another intriguing question about language.

Similarly, does virginity exist? Not according to some (The Odyssey, Her Campus). Again, the sentiment is understandable. Women’s worth has long been closely tied to virginity (read your Bible), and with that came widespread oppressive efforts to keep women’s bodies under tight control, still manifested today in incessant shaming for engaging in sex as freely as men do, murder, and more. Men have experienced something related, though far less oppressive and in an opposite sense: women are more valuable as virgins (or with fewer overall partners) and are judged for being sexually active, while men are shamed or ridiculed for being virgins or not engaging in sex. Further, the definition of virginity is open to debate (the definition of friendzone is as well, though the most common one was used above). Is a straight person a virgin if he or she has only had anal sex? Is a gay person, who has regular sex with a partner, technically a virgin until death? Because the word’s meaning is subjective, and because it was a basis of patriarchal oppression, so the argument goes, “virginity doesn’t exist.”

Virginity is a way of saying one hasn’t had some form of sexual experience. For some it’s vaginal penetration, for others it’s different — the particular act doesn’t really matter. It’s simply “I haven’t had sex yet,” whatever form sex may take in the individual mind. Everyone has their own view of it, but that doesn’t make it unreal — in the same way everyone has their own idea of what love is, and yet love exists. Having sex for the first time is quite an event in any human being’s life, and most or many will experience it. Even if our history had been free of misogyny and patriarchy, there likely would have eventually arisen some term for having never experienced sex (or having been turned down). Does the statement “Virginity doesn’t exist” make sense? As with friendzone, it’s a labeled common experience, or lack thereof. While it was and is wielded by misogynistic oppressors, it’s an occurrence, and a concept, that certainly “exists.”

Does having a term for all this harm society and hurt others, helping preserve the hysteria over who’s had intercourse, and the associated maltreatment? Again, it’s possible. But my point is that a term is unavoidable. The state of being is real, thus the concept is real, thus a word or phrase will inevitably be employed. Being “single” happens — does “singleness” not exist? Won’t there always be some way to describe that state? We could get rid of the words virgin and virginity, but there’s no getting rid of “I’ve had sex” versus “I haven’t.” Another phrase or term will suffice just as well to describe the concept. We can abolish friendzone, but “The person I like turned me down” isn’t going away. There may be better words and definitions for concepts, but there’s often no case against a concept’s reality, which is how all this is framed. What’s important is to try to change the perceptions and attitudes toward these concepts, not deny they exist. “Yes, you were put in the friendzone, but you’ve done that to a lot of women you weren’t interested in. That’s life, you’ll live, grow up.” “So what if she’s not a virgin? Should your dateability or worth go down if you weren’t one? Why hers and not yours?” And so on. Indeed, it seems more difficult to change attitudes towards life events when you start off by saying, in essence, and confusingly, that an expression isn’t real.

There are other examples of assertions I find awkward, but as this article is lengthy already I will just briefly mention a couple of them and hope the reader assumes I’ve given them more thought than a few sentences would suggest. “There’s no such thing as race, it’s a social construct,” while doing a service by reminding us we are all part of the same human family, has always seemed mostly pointless in a reality where individuals biologically have different shades of skin and hair texture, and many are brutally victimized because of it. “No human being is illegal” puts forward an ideal, which I support: amnesty, a faster legal entrance policy, and so on (I also support the dissolution of all borders worldwide and the establishment of one human nation, but that may not be implied here). It’s also a call to describe people in a more respectful way, i.e. “undocumented” rather than “illegal.” Still, it always seemed a little off. Some human beings are here illegally, and our task is to change the law to make that history. That the State designates some human beings as illegal is the whole problem, the entire point. True, it’s an ideal, an inspirational call. But I always thought replacing “is” with “should be” or something would be more to the point. But enough splitting hairs.

Someone Worse Than Trump Is Coming. Much of the Right Will Vote for Him Too.

Donald Trump is a nightmare — an immoral, vile, ignorant human being.

It is impossible to fully document his awfulness with brevity. Even when summarizing the worst things Trump has said and done it is difficult to know where to stop.

He calls women “dogs” — they are “animals,” “big, fat pigs,” “ugly,” and “disgusting” if they cross him or don’t please his gaze. You have to “treat ’em like shit,” they’re “horsefaces.” He makes inappropriate sexual jokes and remarks about his own daughter, about “grabbing” women “by the pussy” and kissing them without “waiting,” and admits to barging into pageant dressing rooms full of teenage girls with “no clothes” on. He mocks people with disabilities, Asians with imperfect English (including, probably, “the Japs”), and prisoners of war. Trump was sued for not renting to blacks, took it upon himself to buy full-page ads in New York papers calling for the restoration of the death penalty so we could kill black teens who allegedly raped a white woman (they were later declared innocent), and was a leader of the ludicrous “birther” movement that sought to prove Obama was an African national. He is reluctant to criticize Klansmen and neo-Nazis, and retweets racist misinformation without apology. He’s fine with protesters being “roughed up,” nostalgic about the good old days when they’d be “carried out on a stretcher,” even saying about one: “I’d like to punch him in the face.” He likewise makes light of physical attacks on journalists. He praises dictators. He threatens to violate the Constitution as a political strategy. He cheats on his wife with porn stars and pays them to keep quiet. The constant bragging about a high I.Q. (his “very, very large brain”) and his big fortune, among other things, are emblematic of his ugly narcissism. His daily rate of lies and inaccuracies is surely historic, with journeys into fantasyland over crowd sizes and wiretaps by former presidents.

And those are merely the uncontroversial facts. Trump faces nearly two dozen accusations of sexual assault. He is alleged to at times say extremely racist things, remarks about “lazy,” thieving “niggers.” His ex-wife claimed in 1990 that he sometimes read Hitler’s speeches, and Vanity Fair reported Trump confirmed this. The payment to Stormy Daniels was likely a violation of campaign finance laws — Trump’s former attorney implicated him in court. Trump is being sued for using the presidency to earn income, his nonprofit foundation being sued for illegal use of funds. Trump has almost certainly engaged in tax fraud, joined in his staff and own son’s collusion with Russia during the 2016 election, and obstructed justice.

All this of course speaks more to his abysmal personality and character than to his political beliefs or actions as executive. That’s its own conversation, and it’s an important one because some conservatives accept Trump is not a good person but think his policies are just wonderful.

On the one hand, many of Trump’s policies are as awful as he is, and will not be judged kindly by history. Launching idiotic trade wars where he slaps a nation with tariffs and is immediately slapped with tariffs in return, hurting U.S. workers. Stoking nativist fear and stereotypes about Hispanic immigrants and Muslims, driving the enactment of (1) a ban on immigrants from several predominantly Muslim nations (doing away with vetting entirely, keeping good people, many fleeing oppression, war, and starvation, out with the bad) and limits to refugees and immigrants in general, and (2) the attempted destruction of DACA (breaking a promise the nation made to youths brought here illegally) and a “zero tolerance” policy on illegal entry that sharply increased family separations. Saying foreigners at the border who throw rocks at the military should be shot. Pushing to ensure employers are allowed to fire people for being gay or trans (and refuse them service as customers), eliminating anti-discrimination protections for trans students in public schools, and attempting to bar trans persons from military service. Voting against a U.N. resolution condemning the execution of gays.

On the other hand, we can be grateful that, to quote American intellectual Noam Chomsky, “Trump’s only ideology is ‘me.’” Trump is thoroughly defined by self-absorption. He flip-flops frequently — reportedly most influenced by the last person he speaks to — and even used to call himself more of a Democrat, advocating for a few liberal social policies while remaining conservative on business matters. He either changed his mind over time or, as I wrote elsewhere, believed running as a Republican offered the best chance at victory and thus adopted an extreme right-wing persona — an idea that doesn’t at all mean he isn’t also an awful person (rather, it’s evidence of the fact). Outside of policies that serve him personally it is difficult to know what Trump believes in — or if he even cares. He may genuinely lack empathy and have no interest in policies that don’t affect him. True, perhaps he isn’t merely playing to his base and actually has a vision for the country, but the “ideology of me” does appear preeminent. While it’s “deeply authoritarian and very dangerous,” as Chomsky says, it “isn’t Hitler or Mussolini.” And for this we can count ourselves somewhat fortunate. (Likewise, that Trump isn’t the brightest bulb in the box, speaking at a fourth-grade level, reportedly not reading that well and possessing a short attention span, lacking political knowledge, and being labeled a childish idiot by his allies.)

Next time we may not be so lucky. As hard or painful as it is to imagine, someone worse will likely come along soon enough.

One day Trump will leave the White House, and with a profound sense of relief we will hear someone declare: “Our long national nightmare is over.” That’s what Gerald Ford said to the country the day he took over from Nixon — a man corrupt, deceitful, paranoid, wrathful, and in many ways wicked (he is on audiotape saying “Great. Oh, that’s so wonderful. That’s good” when told his aides hired goons to break protesters’ legs). One wonders how many people in 1974 imagined that someone like Trump would come along just eight presidencies later. If there was a lack of imagination, we shouldn’t repeat it.

In significant ways, there are already foreshadowings of the next nightmare. Trump opened a door. His success was an inspiration for America’s worst monsters. They have seen what’s possible — and will only be more encouraged if Trump is reelected or goes unpunished for wrongdoing and nastiness. I wrote before the election:

When neo-Nazi leaders start calling your chosen candidate “glorious leader,” an “ultimate savior” who will “Make American White Again” and represents “a real opportunity for people like white nationalists,” it may be time to rethink the Trump phenomenon. When former KKK leader David Duke says he supports Trump “100 percent” and that people who voted for Trump will “of course” also vote for Duke to help in “preserving this country and the heritage of this country,” it is probably time to be honest about the characteristics and fears of many of the people willing to vote for Trump. As Mother Jones documents, white nationalist author Kevin McDonald called Trump’s movement a “revolution to restore White America,” the anti-Semitic Occidental Observer said Trump is “saying what White Americans have been actually thinking for a very long time,” and white nationalist writer Jared Taylor said Trump is “talking about policies that would slow the dispossession of whites. That is something that is very important to me and to all racially conscious white people.” Rachel Pendergraft, a KKK organizer, said, “The success of the Trump campaign just proves that our views resonate with millions. They may not be ready for the Ku Klux Klan yet, but as anti-white hatred escalates, they will.” She said Trump’s campaign has increased party membership. Other endorsements from the most influential white supremacists are not difficult to find.

It wasn’t all talk. Extreme racists got to work.

  • In 2016, David Duke of KKK fame, who was once elected to the Louisiana state house, came in seventh out of 24 candidates in a run-off election for U.S. Senate. He earned 3% of the vote; about 59,000 ballots were cast for him.
  • In August 2018, Paul Nehlen, an openly “pro-White” candidate too racist for most social media platforms, garnered 11% of the vote in the GOP primary for Wisconsin’s 1st District (U.S. House of Representatives). He lost, but beat three other candidates.
  • John Fitzgerald, a vicious anti-Semite who ran for U.S. House of Representatives, beat a Democratic and independent candidate in California District 11’s open primary, coming in second with 23% of the vote. 36,000 people chose him. On November 6 he lost with 28% of the vote (43,000 votes).
  • A Nazi named Arthur Jones was the Republican nominee for U.S. House of Representatives from Illinois’ 3rd District (though he was the only person who ran as a Republican candidate, becoming the nominee by default). He just got 26% of the vote — 56,000 supporters.
  • Seth Grossman, who believes black people to be inferior, was the GOP nominee for U.S. House of Representatives from New Jersey’s 2nd District. He beat three other rivals, with 39% of the vote. He just garnered 46% of the vote in the general election. That’s 110,000 voters, just 15,000 short of the victor.
  • Russell Walker, who espouses the superiority of the white race, ran for District 48 in the North Carolina state house. He won the GOP primary in May, beating his rival with 65% of the vote. On November 6 he earned 37% of the vote in his race.
  • Steve West spreads conspiracy theories about the Jews, even saying “Hitler was right” about their influence in Germany. He won nearly 50% of the vote in the GOP primary for Missouri state house District 15, beating three others. On November 6 he also received 37% of the vote against his Democratic opponent.
  • Steve King has served in the U.S. House of Representatives since 2003. Hailing from Iowa’s 4th District, he said whites contributed more to civilization than people of color and constantly bemoans the threat that changing demographics represents to our culture. He also endorses white nationalists because they are “Pro Western Civilization” and spends time with groups founded and led by Nazis. He won 75% of the vote in the GOP primary — 28,000 votes. Then he got 50% in the general election (157,000 votes), keeping his seat.

There were others, of course, more subtle in their bigotry — more like Trump. Overall, there was a “record breaking” number of white supremacist candidates running for office this year. In most of the cases above, America couldn’t even keep such candidates in the single digits. Many beat more normal, tolerant candidates.

Those numbers may not seem all that impressive, not high enough to warrant any fears over a more horrific candidate winning the GOP presidential nomination. But it does not always take much. Turnout for the primaries is so low that only 9% of Americans chose Trump and Hillary as party nominees. More voted for others, but that’s all it took. Trump won the nomination with 13 million votes, with 16 million Republican voters choosing someone else (both record numbers). He thus won 45% of the primary votes, which is about what Mitt Romney (52%) and John McCain (47%) accomplished. In other words, it would take less than half of Republican voters in the primaries to usher a more extreme racist (or sexist or criminal or what have you) to the Republican nomination. After seeing what many conservative voters could ignore or zealously embrace about Trump, this does not seem so impossible these days. A tidal wave of surveys and studies showed many Trump supporters to hold extremely bigoted and absurd views. From there, it isn’t that hard to envision a situation like the one many conservatives faced in 2016, where they voted for an awful person they disliked to continue advancing conservative policies and principles. You have to stop abortion and the gays, you have to pack the Supreme Court, and so on. Some, to their immense credit, refused to do this — not voting, voting third party, or even voting for Clinton. But of course they were a minority. (And no, if you also believe absurd things, Democrats and liberals did not swing the election for Trump.)

The day of the election I felt more confident of Clinton’s victory than I had a couple weeks before. Previously, I had predicted that Trump was “probably” going to win. Perhaps it was a foolish optimism that washed over me on election day, when I expressed that Clinton would somehow eke out a narrow victory. I — and everyone else — should have known better. The tendency of the two parties to trade the White House every eight years, Clinton’s unpopularity on the Left, Trump as a reaction to the country’s first black president, the threat of the Electoral College handing the White House to another Republican with fewer votes…all sorts of factors should have made this an easy election to predict. Perhaps many of us simply did not want to face reality, did not want to believe we lived in a country where someone so awful could win, where so many voters are just like him or simply don’t care enough about his awfulness to refuse to vote for him. But after the shock and horror at Trump’s triumph abated, I could not shake the dread that this was merely the opening salvo in a battle against increasingly dangerous, extremist candidates.

Let’s hope, whether he — and it will certainly be a straight white male, given the extremist base — comes along in mere years or many decades, that we will not make the same mistake. Whether he will win is of course impossible to say. It will depend on how passionately we protest, how obsessively we organize, how voluminously we vote.

But Abortion!

There exists a particularly obnoxious set of visuals and memes produced by both conservative and less sophisticated liberal social media pages (looking at you, Occupy Democrats). They have to do with hypocrisy, and often revolve around abortion.

An example from the Left reads: “Only in America can you be pro-death penalty, pro-war, pro-nuclear weapons, pro-guns, pro-torture, anti-health care, and anti-food stamps and still call yourself ‘pro-life.'”

One from the Right goes: “Oh I get it now… The death penalty is bad, abortion is good.”

The implication or accusation of hypocrisy appears in conversation as well. Often when I post or write something critical of some horrible thing, it’s only a matter of time before a conservative friend or acquaintance drops by with the tired “Yet you support abortion rights, what a hypocrite.” There is a good chance that if you’re reading this right now, it’s because you just said something along those lines; my quest continues to one day be able to reply in article form to any political comment or question, saving vast amounts of time.

The problem with such accusations of hypocrisy is that they are so easily reversed. Well, well, well, you’re pro-life yet not a pacifist — what we’ve got here is a hypocrite! Why, you’re a pacifist yet somehow pro-choice — at least be morally consistent! 

Typically, when someone comes along guns blazing in this fashion, they’re employing the whataboutism fallacy. The tactic distracts from, or even discredits, whatever was originally posited by accusing the speaker of hypocrisy. So perhaps I post about how I think we shouldn’t conduct drone bombings in the Middle East and Africa because they kill far more innocent civilians than actual targets. When the inevitable “but abortion!” comes, there is usually no agreement concerning the immorality of the original issue addressed. Sometimes there is, but usually the individual only provides it later (when pressed), after the implied or explicit accusation of hypocrisy. The individual isn’t much interested in discussing whether the original issue is or isn’t moral. He or she wants to discuss abortion and make sure you know you’re two-faced. In turn, I try to keep things on-topic (and celebrate agreements where we find them), a debate preference that seems to annoy some people to no end. I often say that each issue, each moral question, needs to be weighed on its own merits. People don’t often grasp right away that this belief is connected to whether or not someone is actually a hypocrite, and I don’t explain it because that would further derail the conversation away from whatever the original topic was. As a remedy, I’ll briefly explain my thoughts here.

Say you’re a conservative and you’ve posted about how killing babies in the womb is wrong. Here I come with “But you support our War on Terror, which kills countless pregnant women and other innocent human beings. Hypocritical much?” If you’re like me, you’d be somewhat annoyed at this distraction from the cause you were trying to advocate for, or perhaps you’re unlike me and don’t mind taking whatever detour someone wants to go on. Regardless, you likely think and believe something along the lines of: These things are not the same. They’re a bit different, they have slightly different contexts — even if they both result in similar tragedies. You’re probably counting the ways in which they’re distinct or shouldn’t be compared right now.

In thinking so, you are essentially acknowledging that each moral question should be weighed on its own merits. Unless you actually think you’re a hypocrite, you believe these are slightly different situations and therefore different stances concerning them may be morally justified.

And you would of course be correct. These situations — torture, war, the death penalty, abortion, homicide, unregulated gun ownership, free market healthcare, and on and on — are unique, and have very different questions you have to answer before you can make a decision on whether they’re ethical. You have to work through unique factors.

Many of the most deeply conservative and fervently religious people believe abortion is never morally permissible under any circumstance, while others (conservatives and liberals, religious persons and nonreligious persons, etc.) believe there are some or many instances where it is. The purpose of this article isn’t to argue one way or the other, which I have done elsewhere. No matter what you think about abortion, I hope to simply demonstrate that people across the political spectrum are a tad too quick to use the h-word. So what are some standard questions about abortion that make folks think differently?

  • Was the pregnancy the result of rape?
  • Does birth endanger the life of the mother?
  • Should the government force you to give birth against your will?
  • Is it less moral to commit abortion as the pregnancy goes on? Does the age of the fetus matter?
  • Does the fact that women seek unsafe black market abortions, resulting in health complications or death, in societies where abortion is illegal change the moral equation at all?

Those are important questions to think about and answer when deciding whether or when abortion is morally permissible, and each person will answer differently.

But the relevant question here is: Do we also need to ask those questions when we ponder the morality of war?

Not really. Those questions aren’t going to be very helpful when deciding whether massacring civilians while dropping bombs to kill terrorist suspects overseas is the right thing to do. The questions concerning war won’t sound like the questions concerning abortion, and vice versa. Each issue, each situation, has its own array of unique questions to consider. They’re truly dissimilar contexts. This is why accusations of hypocrisy like we saw above don’t make a lot of sense.

In fact, such accusations of hypocrisy are so easily reversed because they don’t really have much to do with hypocrisy at all. It’s a bit like saying it’s hypocritical to think killing someone in cold blood is wrong but killing someone in self-defense is not. It’s the same result, right? In either case someone is killed. You hypocrite! Well, no, these are different circumstances with different moral questions and answers. Real hypocrisy has more to do with situations that are essentially the same. If I curse like a sailor but lambaste others for cursing, that’s hypocrisy. If you think women should be forced to give birth regardless of circumstance but wouldn’t think the same for men if they could get pregnant, that’s hypocrisy. If you’re Mitch McConnell, that’s hypocrisy. And so on. It has to do with holding yourself to different standards than you hold others in the same situation, which is pretty disingenuous (the word actually derives from the Greek word ὑπόκρισις [hypókrisis], meaning play-acting or deceit). But in different situations you have unique things to figure out and may therefore end up with different moral answers. Even a close analog to abortion, infanticide (more universally opposed, yet not without exception, as with the infant in constant agony from an incurable illness), has a difference people have to mull over, namely that the baby has not yet been born. One can think both are wrong, that the difference is insignificant, but the fact remains it is a literal difference — the situations aren’t identical. They’re much closer than other comparisons, true, but there is a difference that is more significant to some than others. That’s my point. So you have to ask different questions and decide for yourself if different scenarios have the same moral conclusions; they may, but when they do not it isn’t necessarily hypocrisy, simply because the scenarios were not indistinguishable.

(This isn’t the only context in which “hypocrisy” isn’t really used correctly. I once thought of writing an article entitled No One Knows What Hypocrisy Means after I was called a hypocrite for frequently criticizing white attacks against innocent people of color but rarely — though not never — doing the same for the reverse. But one is an exponentially bigger societal problem than the other. I didn’t posit that one is the wrong thing to do and the other the right thing to do; it simply makes sense to focus most of our attention and energies on more prevalent problems.)

The conservative can say to the liberal, “You’re a hypocrite for being a pacifist yet pro-choice,” but why bother? The liberal can simply respond, “And you’re a hypocrite for being pro-life yet pro-war.” Stalemate. Are we all hypocrites then? I would posit, instead, that none of us are. I personally don’t believe a conservative who is pro-life yet pro-war is a hypocrite (if I did, we know what that would be an example of). This is because I know these issues are not the same, that the conservative has different reasoning for and answers to unique moral questions that could result in divergent conclusions between scenarios. I may not agree with that reasoning or those answers one iota, but I understand them and how they may not lead to the same place.

Some Things Are Worse Than Other Things: The Philosophy of False Equivalence

Imagine, if you will, six scenarios, presented as three pairs:

  • A Nazi punches a man walking down the street because he is a Jew; a Jew punches a man walking down the street because he is a Nazi.
  • A woman says to another “You’re the problem with America. Get out of this country, fucking bitch” because she is Hispanic; a woman says to another “You’re the problem with America. Get out of this country, fucking bitch” because she is unabashedly racist.
  • A restaurant owner refuses to serve a man because he is gay; a restaurant owner refuses to serve a man because he despises gay people.

The mind’s first temptation may be to construct creative contexts, but there are no ambiguities here. The Nazi is not just an ultraconservative; he believes in Nazism and wears the swastika. The Hispanic woman is a citizen born in Idaho and the racist woman knows it; the racist woman is not merely concerned with how unfair illegal entry is to those waiting their turn or that illegal immigrants are “stealing jobs,” but rather she does not like Hispanics — living in the same neighborhood as them, working with them, hearing Spanish, and so forth. The first restaurant owner and the man denied service in the second case both go way beyond trust in biblical teachings about how homosexuality is an abominable sin — it disgusts them beyond words, they believe it should be a crime as it once was, they don’t value the life of a gay person equal to that of a “normal” straight person. These being hypothetical scenarios of my own creation, there are no excuses or saving graces available.

The question explored here isn’t which of these things are wrong and which are right. People have different ideas concerning when violence, extreme disrespect, or denial of service is acceptable, if ever. Sorting through all that, making a case one way or another, is not the point. Let’s proceed from the standpoint that all of these things are morally wrong. That is, after all, the typical premise of someone presenting a moral equivalence relevant to this discussion. The premise is: a racist attack is morally wrong and an attack against a racist is morally wrong. The moral equivalence is: an attack against a racist is as morally wrong as a racist attack.

Is it?

Are the scenarios above and their inverses truly equal in their “wrongness”? Or can two things be wrong, but one slightly less wrong?

Today, this debate arises constantly. We have open Nazis walking around the mall, white supremacists attacking or murdering people of color, unhinged riders unleashing racist rants on buses, medical institutions refusing to treat LGBT Americans, and pastors wishing more gay people had died in the Orlando massacre. We also have Antifa and others sucker-punching Nazis and advocating we “Kill Nazis,” a gunman killing Republicans, and business owners kicking out Trump supporters — and people attacking them physically or verbally. Opposing protesters brawl in the streets.

To reiterate, all of these things could be called morally wrong. After all, they do harm to others. But here we need to add an important point: to say a scenario is more morally wrong than its inverse is not to advocate for either. To conclude, for instance, that denying service to a bigot is less morally egregious than denying service to a gay person isn’t to automatically or necessarily advocate for denying service to bigots. One can still oppose both because he or she has determined they are both on the spectrum of immorality, even if at different points. Likewise, to say that some things are worse than other things, to believe a scenario worse than its inverse, is not to say this is always true for any other scenario and its inverse. As we will see, where motives are more equal, the immorality of the actions is more equal.

Turning back to our hypothetical situations and whether they involve false equivalences, we first have to agree upon the principle that some actions can indeed be morally worse than others — that a spectrum of morality makes sense. This shouldn’t even have to be argued, but there may be some religious fundamentalists or others who posit all “sin” is equally wrong. So lying about your age is just as wrong as rape. This sort of black-and-white thinking isn’t something most people, including people of faith, take seriously, so we won’t spend much time on it. (And we’ve already seen how morality is opinion-based even if God exists; see Where Does Morality Come From?; The Philosophy of Morality; Yes, Liberals and Atheists Believe in Absolute Truth; and Is Relative Morality More Dangerous Than Objective Morality?) Most people would conclude stealing money from a man’s wallet is not as wrong as killing him, and so forth. So some wrongs are more wrong than other wrongs.

Then we need to recognize that the same action, doing the same harm, can be less wrong — even morally right — if done for certain reasons. Ethics are situational. Motives matter. Again, most everyone accepts this. Take an action like killing. Killing a man because you want his wife or because he looked at you the wrong way is a bit different from killing in self-defense or in war. Those last two situations are often regarded as morally right, though there’s plenty of debate about it. That doesn’t matter — what matters is that the underlying principle is agreed upon: the same act will have a different moral status depending on why someone does it. A spectrum is easy enough to envision. Perhaps killing someone in self-defense is less wrong than killing someone in war, which is perhaps less wrong than killing someone because he or she used the “white” restroom, etc. Use your imagination.

If motives matter regarding the morality of some actions, might they for others?

The actions of our scenarios are the same, but the motives are not — which may alter the morality of the action.

Think of the possible motives, the driving forces, of the Nazi, the racist woman, the bigoted owner. What comes to mind? Conspiracy theories about the inferior Jews ruling and ruining the nation, discomfort with a country growing less white, preferring gays scared back into the closet — out of sight, out of mind. Whatever you envision, it likely isn’t good. It isn’t something you find morally right. And what of the possible motives of the Jew, the Hispanic woman, the gay man? Opposition to Nazi ideology, racism, and discrimination come to mind. These are likely stances you agree with and find morally right, even if you don’t approve of the action that followed.

How is it, then, that anyone can say these scenarios and their inverses are equally immoral? How are two identical actions equally wrong despite one having more moral motives and the other more immoral motives? This is like saying that killing in self-defense is just as bad as killing someone for looking at you the wrong way. It is saying that motives do not matter.

But most people believe they do. Why the double standard? Does it involve the severity of the action? Why do motives affect the morality of a more serious action like killing but not a less serious one like a punch, name-call, or refusal to serve? There is no logical reason that I can see. Lying is a less serious action, but we all understand that lying about someone raping you would be worse than lying about how late you were past curfew.

Again, there may be situations where X is as equally wrong as Y, but it seems like that would require motives that are more equally wrong. Lying to your spouse about losing the dog is roughly as wrong as lying to your spouse about spending vacation money on a new television. Killing over jealousy is about as wrong as killing over insults. But the motives of our situational pairs are much farther apart, polar opposites in fact. (One may insist they are the same because each attacker wants to exert power over the other, put him in his place, seize control, do what’s best for herself, express hate, intimidate, hurt, and so on, but that only pushes the question back a step. Why are they doing those things? What are the motives behind those motives? Can all hatred be equally wrong — say, racist hatred versus hatred of a racist — if the motives are ethical polar opposites? Aren’t the motives morally different, even if you frown upon where they lead? Of course they are, as we saw above.)

(Now, folks will disagree over what motives are moral, but for each person there will always be an array of motives that include some more moral and some less. If you’re a Nazi sympathizer, you’ll think racist motives more right and opposition motives more wrong, and apply the same to the actions — but no one in his or her right mind can hold both racism and anti-racism as equally moral or immoral! Therefore the logical argument in this piece, finalized below, applies to everyone who accepts the premises with which we began, that not all sins are equally wrong and that the same action can have a different moral flavor dependent upon motives.)

Is the double standard topic-based? If our near-universal way of thinking about ethics involves an action having a changed moral character following a changed motive, there has to be some kind of justification for not applying this to matters of bigotry. I cannot think of any such justification. What possible reason could there be to exclude this topic, to create a new, special standard that doesn’t apply to anything else? None exists. (Imagine excluding matters of war — what could possibly justify doing that?) A racist attack therefore must be morally worse than an attack against a racist. (Or, if you’re a racist or one of their sympathizers with different views on the motives, as discussed above, it must be morally better! They cannot be equal.) Some may say it’s radically worse, others just slightly, but based on our premise of ethics it must be worse (or better, for you Nazis) to some degree — it’s a logical necessity. If they were equally wrong, we’d have to throw motives out the window, and there would be no reason to stop at matters of bigotry (just as there’s no reason to exclude it). Self-defense would be just as wrong as cold-blooded murder based on that new premise. Lying to save an innocent life would be just as wrong as lying to end one. And so on. With no justification existing to exclude actions related to a certain topic, one must hold all actions to the same standard — either motives matter or they do not. (Same for hatred and so forth.) Again, that’s what’s logically sound for each person regardless of his or her unique views on what’s ethical: you can’t logically think two identical actions equally wrong if you also think one motive is more moral than the other (which you will if in your right mind). If you think motives matter for other moral questions, that’s simply what makes logical sense.

If it’s still difficult to see our scenarios as false equivalences, it may help to consider others, perhaps from other time periods, where gaps between “wrongness” seem bigger, more obvious. The way humans observe history is always less morally confused than the way we observe the present. Hindsight and all. Note these also could unwisely be labeled identical attempts to exert power over someone, hurt someone, lash out in hate, and so on:

  • Would a slave killing his master be as wrong as a master killing his slave? Isn’t one about liberation, the other subjugation?
  • Would a rich woman stealing from a poor woman be just as wrong as the reverse? Might one motive be greed, the other need?
  • Were the Allies just as wrong to invade France in 1944 as Germany was a few years earlier? Is there any side in any war less wrong than another?

Motives matter, always. That is why some things are worse than other things.

As a last word, while I don’t believe this fact affects the logic, it’s important to note that in our scenarios, and real-world ones that spark the equivalence debate (one truly wonders why it’s difficult to see that the alt-right, full of people who advocate a “White Ethno-State,” is generally evil, whereas Antifa, full of people who advocate standing against “racist and fascist bigots,” is generally not), attacks against bigotry are a reaction to bigotry. Bigotry comes first; the only “reaction” it entails is one against who people are: their ethnicity, sexuality, gender, etc. Reduce bigotry and there will be fewer reactions; but reduce reactions and bigotry will crush people per usual. Again, this isn’t to necessarily advocate for violent or hurtful reactions. It’s simply to recognize the worse problem, the root problem — and focus our energies on obliterating it in ways ethically acceptable to each of us personally.

Is There Any Actual Science in the Bible?

Someone once told me that the bible was the greatest work of science ever written. This is mildly insane; anyone who’s read the bible knows that any grade school or even children’s science book presents more scientific knowledge. (And, given thousands of extra years of research, the science book is probably more accurate.) The purpose of the bible, secularists and believers can surely agree, was not to acknowledge or pass down scientific principles. Finding incredible scientific truths in the text typically requires very loose interpretations. But as religious folk sometimes point to science in the bible as proof of its divine nature, it seems necessary to critically examine these claims.

In making the case that “every claim [the bible] makes about science is not only true but crucial for filling in the blanks of our understanding about the origin of the universe, the earth, fossils, life, and human beings,” Answers in Genesis points to verses that vary in ambiguity, meaning some are more plausible than others as presentations of valid scientific information.

Take Job 26:7, in which it is said God “spreads out the northern skies over empty space; he suspends the earth over nothing.” One may wonder what it means to spread skies over empty space. Perhaps it’s referencing the expanding universe, as others think verses like Job 9:8 reference (“He alone spreads out the heavens”). But the second part matches well what we know today, that the globe isn’t sitting on the back of a turtle or something. Why this and other verses may not be as incredible as supposed is discussed below.

(It’s often asserted also that the Big Bang proves the bible right in its writing of a “beginning,” but we simply do not know for certain that no existence “existed” before the Big Bang.)

Answers in Genesis also believes the bible describes the water cycle. “All streams flow into the sea, yet the sea is never full. To the place the streams come from, there they return again,” reads Ecclesiastes 1:7. It also provides Isaiah 55:10: “The rain and the snow come down from heaven, and do not return to it without watering the earth and making it bud and flourish…” Some translations (such as NLT, ESV, and King James) are missing “without,” instead saying the rains “come down from heaven and do not return there but water the earth, making it bring forth and sprout,” which sounds more like a repudiation of the water cycle. But no matter; other verses, such as Psalm 135:7 in some translations or Job 36:27, speak of vapors ascending from the earth or God drawing up water.

From there things begin to fall apart (the Answers in Genesis list is not long).

The group presents Isaiah 40:22 and Psalm 103:12 as the bible claiming the world is spherical rather than flat (“He who sits above the circle of the earth”; “as far as the east is from the west”). But neither of these verses explicitly makes that case. A flat earth has east and west edges, and a circle is not three-dimensional. “Circle,” in the original Hebrew, was חוּג (chug), a word variously used for circle, horizon, vault, circuit, and compass. A “circle of the earth,” the Christian Resource Center insists, refers simply to the horizon, which from high up on a mountain is curved. If biblical writers had wanted to explicitly call the earth spherical, they could have described it with a דּוּר (ball), as in Isaiah 22:18: “He will roll you up tightly like a ball and throw you.” This is not to say for certain that the ancient Hebrews did not think the world was a sphere; it is only to say the bible does not make that claim in a clear and unambiguous manner.

The remaining “evidences” are really nothing to write home about. “For the life of the flesh is in the blood” (Leviticus 17:11) is supposed to show an understanding of blood circulation; “the paths of the seas” (Psalm 8:8) is supposed to represent knowledge of sea currents; “the fixed order of the moon and the stars” (Jeremiah 31:35) is allegedly a commentary on the predictable paths of celestial bodies in space (rather than, say, their “fixed,” unchanging positions in space, another interpretation). But none of these actually suggest any deeper understanding than what can be easily observed: if you are cut open and lose enough blood you die, bodies of water flow in specific ways, and the moon and stars aren’t blasting off into space in random directions but rather maintain consistent movement through the skies from our earthly perspective. Again, maybe there were actually deeper understandings of how these things worked, but they were not presented in the bible.

The Jehovah’s Witness website has a go at this topic as well, using most of the same verses (bizarrely, it adds two to the discussion on the water cycle, two that merely say rain comes from the heavens).

The site uses Jeremiah 33:25-26 (“If I have not made my covenant with day and night and established the laws of heaven and earth…”) and Job 38:33 (“Do you know the laws of the heavens? Can you set up God’s dominion over the earth?”) to argue that the bible makes the case for the natural laws of science. Perhaps, but again, this doesn’t demonstrate any knowledge beyond what can be observed and, due to consistency, called a law by ancient peoples. So maybe it’s one of God’s laws that the sun rises each day. It’s a law that water will evaporate when the temperature gets too high. And so forth. These verses are acknowledgements that observable things function a certain way and that God made it so. There’s no verse that explains an actual scientific principle, such as force being equal to a constant mass times acceleration, or light being a product of magnetism and electricity.

True, it’s sometimes said the bible imparts the knowledge of pi (3.1415926…) and the equation for the circumference of a circle, but this is a bit misleading. There are a couple places where a circle “measuring ten cubits” is mentioned, requiring “a line of thirty cubits to measure around it” (1 Kings 7:23, 2 Chronicles 4:2). Pi is implicitly three here. The equation (rough or exact) and pi (rough or exact) were possibly known, as they were elsewhere in the ancient world, as they’re not too difficult to figure out after understanding basic division/multiplication and taking some measurements (once you measure a few circles, build a few round structures, you’d likely notice a circumference is always a little more than 3 times longer than the diameter), but that is not an absolute certainty based on this text. Regardless, neither the equation nor the value of pi is explicitly offered. (Why not? Because this is not a science book.) If these verses were meant, by God or man, to acknowledge or pass on scientific knowledge, then the authors either didn’t have much figured out or were not feeling particularly helpful. “Figure out the equation and a more precise value of pi yourself.”
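To make that arithmetic concrete — a small illustrative sketch of my own, not anything found in the text — the ratio implied by the ten- and thirty-cubit measurements can be compared against pi:

```python
import math

# 1 Kings 7:23: a round "sea" measuring ten cubits across
# and thirty cubits around.
diameter = 10
circumference = 30

implied_pi = circumference / diameter   # the ratio the verses imply: 3.0
error_pct = 100 * abs(math.pi - implied_pi) / math.pi

print(implied_pi)            # 3.0
print(round(error_pct, 1))   # about 4.5 (percent off from pi)
```

An error of roughly 4.5 percent is what one would expect from rounded, practical measurements, not from a revealed mathematical constant.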

The Jehovah’s Witness site further believes it’s significant the ancient Hebrews had sanitary practices, like covering up feces (Deuteronomy 23:13), keeping people with leprosy isolated (Leviticus 13:1-5), and washing your clothes after handling a carcass (Leviticus 11:28). However, if you read Deuteronomy 23:14, you see that feces must be covered up so God “will not see among you anything indecent” when he visits. It wasn’t to protect community health — or at least that went unmentioned. Noticing that leprosy can spread and deciding to quarantine people who have it is not advanced science. The guidelines for cleanliness after touching dead animals start off reasonable, then go off the rails. Even after washing your clothes you were for some reason still “unclean till evening,” just like any person or object that touched a woman on her period! (If this was just a spiritual uncleanliness, why were objects unclean? They don’t have souls.) The woman, of course, was unclean for seven days after her “discharge of blood.” How scientific.

Finally, this list mentions Psalm 104:6 (“You covered [earth] with the watery depths as with a garment; the waters stood above the mountains”) to posit that the biblical writers knew there was an era, before earth’s plate tectonics began to collide and form mountains, when the earth was mostly water — there is actual scientific evidence for this idea. The verse may be referencing the Great Flood story; verse 9 says of the waters, “never again will they cover the earth,” which sounds a lot like what God promised after wiping out humanity: “never again will there be a flood to destroy the earth” (Genesis 9:11). But if it does in fact reference the beginning of the world, it could be a verse a believer might use to make his or her case that the bible contains scientific truths, alongside Genesis 1:1-10, which also posits the earth was covered in water in the beginning.

There are of course many more alleged scientific truths, most more vague or requiring truly desperate interpretation. For instance, the “Behemoth” in Job 40 is sometimes said to describe a dinosaur, but it in no way has to be one. Hebrews 11:3 says: “By faith we understand that the worlds were framed by the word of God, so that the things which are seen were not made of things which are visible.” That, apparently, can refer to nothing other than atoms — not to any nonphysical possibility like, say, love or the breath of God. Others think a sentence like “all the peoples of the earth will mourn when they see the Son of Man coming on the clouds of heaven” (Matthew 24:30) hints at the future invention of the television! TV is apparently the only way everyone could see an event at the same time — miracles be damned. Still others suggest that when Genesis 2:1 says the heavens and earth “were finished” that this describes the First Law of Thermodynamics (constant energy, none created nor destroyed, in closed systems)! When Christ returns like a thief in the night, “the elements will melt with fervent heat; both the earth and the works that are in it will be burned up” (2 Peter 3:10) — that’s apparently a verse about nuclear fission. One begins to suspect people are reading too much into things.

We should conclude with four thoughts.

This can be done with any text. One can take any ancient document, read between the lines, and discover scientific truths. Take a line from the Epic of Gilgamesh, written in Babylonia: “The heroes, the wise men, like the new moon have their waxing and waning.” Clearly, the Babylonians knew the phases of the moon: how the moon waxes (enlarges) until it becomes full as it positions itself on the opposite side of the earth from the sun, allowing sunlight to envelop the side we can see. They knew how the moon then wanes (shrinks) as it positions itself between the earth and sun, falling into darkness (a new moon) because the sun only illuminates its backside, which we humans cannot see. This line must be in the text to acknowledge and impart scientific knowledge and prove the truth of the Babylonian faith, likely arranged by the moon god mentioned, Sin, or by his wife, Ningal.

This argument is no different than what we’ve seen above, and could be replicated countless times using other ancient books. Perhaps the Babylonians in fact did have a keen understanding of the moon and how it functions. But that does not mean a sentence like that in a story is meant to pass on or even indicate possession of such knowledge. Nor does it mean the gods placed it there, that the gods exist, or that the Epic is divinely inspired. Its presence in a text written between 2150 B.C. and 1400 B.C., even if surprising, simply does not make the book divine. It could be the first text in history that mentions the waxing and waning of the moon; that would not make its gods true.

(By contrast, archaeological and ethnographic research points to the Israelites as offshoots of Canaanites and other peoples around 1200-1000 B.C., with their first writings [not the Old Testament] appearing around the latter date. Though believers want to believe the Hebrews are the oldest people in human history, the evidence does not support this. I write this to stress that, like Old Testament stories taken from older cultures, the Hebrews may have learned of the water cycle and such from others.)

A society’s scientific knowledge may mix with its religion, but that does not make its religion true. Even if the Hebrews were the first group of modern humans, with the first writings, the first people to acquire and pass along scientific knowledge, that would not automatically make the supernatural elements of their writings true. As elaborated elsewhere, ancient religious texts surely have real people, places, and events mixed in with total fiction. If some science is included that’s nice, but it doesn’t prove all the gods are real. The Hebrews knowing about the water cycle or pi simply does not prove Yahweh or the rest of the bible true, any more than what’s scientifically accurate in the Epic of Gilgamesh, the Koran, the Vedas, or any other ancient text proves any of its gods or stories true. That goes for the more shocking truths as well, simply because…

Coincidence is not outside the realm of the possible. As difficult as it may be to hear, it is possible that verses that reference a watery early earth or an earth suspended in space are successful guesses, nothing miraculous required. If one can look up and see the moon resting on nothing, is it so hard to imagine a human being wondering if the earth experiences the same? Could the idea that the earth was first covered in water not be a lucky postulation? Look at things through the lens of a faith that isn’t your own. Some Muslims believe the Koran speaks of XX and XY chromosome pairs (“He creates pairs, male and female, from semen emitted”), the universe ending in a Big Crunch (“We will fold the heaven, like the folder compacts the books”), wormholes (“Allah [who owns] wormholes”), pain receptors of the skin (“We will replace their skins with other new skins so that they may taste the torture”), and more. (Like nearly all faiths, it posits a beginning of the universe too.) How could they possibly know such things? Must Allah be real, the Koran divinely inspired, Islam the religion to follow? Or could these just be total coincidences, lucky guesses mixed with liberal interpretations of vague verses? Supposed references to atoms or mentions of planetary details in the bible could easily be the same. If you throw out enough ideas about the world, you’ll probably be right at times. Could the Hebrews, like Muslims, have simply made a host of guesses, some right and others wrong? After all…

There are many entirely unscientific statements in the bible. Does the ant truly have “no commander, no overseer or ruler, yet it stores its provisions in summer and gathers its food at harvest” (Proverbs 6:6-8), or were the Hebrews just not advanced enough in entomology to know about the ant queen? Are women really unclean in some way for a full week after menstruating, with every person or thing they touch unclean as well? Or was this just male hysteria over menstruation, so common throughout history? If the sun “hurries back to where it rises” (Ecclesiastes 1:5), does this suggest the Hebrews thought the sun was moving around the earth? Or was it just a figure of speech? One could likewise interpret Psalm 96:10 (“The world is firmly established, it cannot be moved”) to mean the earth does not rotate on its axis or orbit the sun. If one can interpret verses to make people seem smart, one can do the same to make them look ignorant. Do hares actually chew their cud (Leviticus 11:6), or did the Hebrews just not know about caecotrophy? Did Jesus not know a mustard seed is not “the smallest of all seeds” (Matthew 13:32)? Likewise, seeds that “die” don’t “produce many seeds” (John 12:24); seeds that are dormant will later germinate, but not dead ones. Some translations of Job 37:18 describe the sky “as hard as a mirror that’s made out of bronze” (NIRV, KJV, etc.). One could also go through the scientific evidence of today that contradicts biblical stories like the order of creation.
As I wrote elsewhere, the evidence “does not support the Judeo-Christian creation story (in which birds appear on the same ‘day,’ Day 5, as creatures that live in water, before land animals, which appear on Day 6; the fossil record shows amphibians, reptiles, and mammals appearing long before birds — and modern whales, being descendants of land mammals, don’t appear until later still, until after birds, just 33 million years ago).” Or one could note that the overwhelming evidence for evolution blows up the stories of a first man from dirt and a first woman from rib (if one believes in evolution, it seems one must accept Genesis contains falsehoods, written by ordinary people or spoken by God). Or look at the biblical translations that mention unicorns, dragons, and satyrs, or just argue that supernatural claims of miracles, angels, devils, and gods are unscientific in general because they can’t be proven. But the point is made: the bible takes stabs at the natural world that aren’t accurate or imply erroneous things.

In conclusion, the science in the bible is about what one would expect from Middle Eastern tribes thousands of years ago. There are some basic observations about the world that are accurate, others inaccurate. There are some statements about the universe that turned out to be true, just like in the Koran, but that doesn’t necessarily require supernatural explanations.

Your White Ancestors May Have Immigrated Illegally, Too

It is undeniable that the United States has a long history of extreme racism regarding citizenship. The Naturalization Act of 1790, passed just three years after a Constitution that spoke of “Justice” and “Liberty,” bluntly declared that only a “free white person” could become an American citizen. This remained unchanged for nearly a century, until the 14th Amendment in 1868, passed after the Civil War, determined anyone born in the U.S. was a citizen. This was immediately contradicted by the Naturalization Act of 1870, which declared the only non-whites this change applied to were blacks; the 1898 Supreme Court case of United States v. Wong Kim Ark finally brought citizenship to all people born here.

As for those already born who desired citizenship, the struggle continued. Women became truer citizens when they won the right to vote in 1920, unless they married an Asian non-citizen; then their citizenship could be revoked! Native Americans — whose ancestors had been here before anyone — had to wait until 1924 to be eligible for citizenship, Filipinos and people from India until 1946. Throughout the 1950s and 1960s, social movements then battled to make what had been promised by law a reality for men and women of color, whether native-born or immigrants.

Given white supremacy’s zealous protection of citizenship, it may seem surprising that there were no laws against immigration itself until 1875, when prostitutes and convicts were barred from entry. (But then, perhaps not so surprising, as most immigrants were from Europe — this despite hostilities towards the Irish, Catholics, Jews, and southern and eastern Europeans. All immigrants represented cheap labor, too.) Before that, immigration was reported but not regulated. Anyone could simply show up and try to scratch out a life for him- or herself. You can come, but don’t expect citizenship, don’t expect any power or participation in this democracy.

Millions came by the time the first racist immigration restriction was created: the Chinese Exclusion Act of 1882, banning almost all immigration from China. Many American whites were openly bigoted, but also spoke of economics — Chinese workers hurting their wages and taking their jobs. Other Asians were banned as well, as were people deemed idiots and lunatics. So it was the late 19th century before illegal immigration was possible, because beforehand there really were no laws against immigration.

Racist laws continued, of course. In 1921, temporary caps were placed on the number of immigrants allowed into the U.S. from other countries; these were made permanent in the Immigration Act of 1924. This was particularly an effort to stem the post-Great War flood of southern and eastern European immigrants, especially Italians, who were coming by the hundreds of thousands. Complaints against them, says historian Mae Ngai of Columbia University, “sounded much like the ones that you hear today: ‘They don’t speak English. They don’t assimilate. They’re darker. They’re criminals. They have diseases.’”

Immigrants from northern and western European nations were favored, including the recent enemy, Germany, which was allowed the most immigrants. (Later, Nazi Germany would justify some of its own racist legislation using American law, which was widely considered the harshest immigration policy in the world; see Hitler’s American Model, Whitman.) In 1929, only 11.2% of yearly immigrants could come from Italy, Greece, Poland, Spain, Russia, and surrounding nations. Only 2.3% could come from outside of Europe, and outside the Americas (the Americas were exempt and had no quotas).

[Chart: national-origins immigration quotas, via George Mason University]

This quota system persisted until the civilizing effects of the civil rights era reformed immigration law in 1965 and opened up the U.S. to more non-European immigrants (though quotas were then put on American countries).

Today, U.S. permanent immigration from other nations is capped at 675,000 people per year, except for people with close family in the U.S. — the number of permanent visas for that category is unlimited. In 2016, 618,000 permanent resident visas were issued. 5 million more applicants wait. No country can receive more than 7% of our visas. Add to this the temporary visas that are successfully converted into permanent ones and around one million people, most from Mexico, China, and other American and Asian nations, achieve permanent residency status here each year. Europeans make up a small minority of immigrants to the U.S.

In today’s debate over illegal immigration and citizenship, the white conservative trope that Central and South Americans should “do it right, do it legally like my ancestors did” is played on repeat. One has to question, however, whether such confidence is justified. During this period of tight restrictions on European immigrants, there were indeed many illegal immigrants from Europe. How certain are you, exactly, that you are not a descendant?

To dodge the quota system, European immigrants would journey to Canada, Mexico, or Cuba and cross the border into the United States. Or they would simply pull ashore. The American Immigration Council documents:

In 1925, the Immigration Service reported 1.4 million immigrants living in the country illegally. A June 17, 1923, New York Times article reported that W. H. Husband, Commissioner General of Immigration, had been trying for two years “to stem the flow of immigrants from central and southern Europe, Africa and Asia that has been leaking across the borders of Mexico and Canada and through the ports of the east and west coasts.” A September 16, 1927, New York Times article describes government plans for stepped-up Coast Guard patrols because thousands of Chinese, Japanese, Greeks, Russians, and Italians were landing in Cuba and then hiring smugglers to take them to the United States.

The 1925 report regretted that the undocumented person’s “first act upon reaching our shores was to break our laws by entering in a clandestine manner.” The problem was so bad that Congress was forced to act:

The 1929 Registry Act allowed “honest law-abiding alien[s] who may be in the country under some merely technical irregularity” to register as permanent residents for a fee of $20 if they could prove they had lived in the country since 1921 and were of “good moral character.”

Roughly 115,000 immigrants registered between 1930 and 1940—80% were European or Canadian. Between 1925 and 1965, 200,000 unauthorized Europeans legalized their status through the Registry Act, through “pre-examination”—a process that allowed them to leave the United States voluntarily and re-enter legally with a visa (a “touch-back” program), or through discretionary rules that allowed immigration officials to suspend deportations in “meritorious” cases. In the 1940s and 1950s, several thousand deportations a year were suspended; approximately 73% of those who benefited were Europeans (mostly Germans and Italians).

The 1929 Registry Act, Steve Boisson writes for American History Magazine, was “a version of amnesty…utilized mostly by European or Canadian immigrants.” Much kinder treatment than mass deportations and separating children from parents, to be sure.

One woman who took advantage of the program, according to The Los Angeles Times, was Rosaria Baldizzi, who snuck in after leaving Italy.

Baldizzi would not become “legal” until a special immigration provision was enacted to offer amnesty to mainly European immigrants who arrived without proper documentation after 1921, who had established families, and who had already lived in the U.S. for seven years. She applied for legal status under the new policy and earned her citizenship three years later, in 1948. Only then, for the first time in more than two decades, could she stop worrying about her immigration status.

If you trace your family history you may be surprised by what you find. According to the Philadelphia Inquirer, Stanford professor Richard White, after researching his family tree,

discovered that his maternal grandfather, an Irishman, had entered the U.S. illegally from Canada in 1924 because he could not get a visa that year under the new quota laws. His grandfather failed in his first attempt, when he walked across a bridge into Detroit, got caught by U.S. customs officers, and was deported.

From Canada, the grandfather called his brother-in-law, a Chicago policeman, who came to Canada and met him there… The pair then walked to Detroit, but this time the brother-in-law, who was dressed in his police uniform, flashed his badge at the customs officers, who waved the duo through.

Even today there are white undocumented immigrants in the United States: 440,000 to 500,000 illegal immigrants from Europe, including an estimated 50,000 Irish.

The next time someone declares his or her ancestors came here legally, demand proof at once.

Let Them Flirt

Whether we have a Republican or Democratic president, diplomacy and open dialogue are key to peace with other countries. Given that, Trump is doing the right thing by talking and meeting with North Korea. It’s not a groundbreaking idea, as Obama also expressed willingness to meet with Kim and engaged in diplomacy with Iran that culminated in an important anti-nuclear accord (two things that conservatives who are now just in awe of Trump absolutely lost their shit over at the time; for some reason totalitarian enemies can now be trusted to keep their word, inspections now work, and so forth).

I wish with every atom of my being that it wasn’t Trump in negotiations with Kim, of course. Like, driving someone who’s dying to the hospital is the right thing to do, but do you really want the cat behind the wheel? I guess if Petals is all you’ve got… I’d prefer it be a president with actual political/international diplomatic experience, deep knowledge of North Korea and its regime, better attention capabilities and comprehension skills, fewer authoritarian mannerisms and ideas, and better moral character. I’d also like a president who talked more about negotiating to make North Korea’s horrific, Holocaust-like labor camps, where even family members of people who complain about the regime are starved and worked until death, a thing of the past. Kim doesn’t exactly “love his people,” as Trump says. This issue is just as urgent as ending a nuclear program. Reports suggest Trump didn’t bring up human rights abuses.

I will say, however, that I am pleasantly surprised with what Vox described as a “shockingly weak” concession from the supposed tough guy: Trump said U.S.-South Korean military exercises would cease. Such exercises have always been stupid, near-suicidal acts of aggression on our part. People just don’t realize how close the U.S. has come to nuclear catastrophe, accidental or intentional, over shit like that since the beginning of the nuclear Cold War; it really (and obviously) escalates things when you want to de-escalate them. So that, if it actually occurs, would be good. We could use less “toughness” in that and other regards. It’s also a good thing North Korea has publicly recommitted itself to doing away with its nukes (the U.S. should of course do the same), as unlikely as that is (nukes being the only deterrent to U.S. invasion), and that Trump spoke of U.S. troops one day leaving South Korea. We just have to hope for the best with these talks; we want these awful, volatile men friendly. The main point is I’d rather have Trump and Kim frolicking arm-in-arm down the streets of Pyongyang than threatening each other with nuclear destruction. The world is a safer place under those circumstances.

My Disillusionment With Social Justice Organizing in Kansas City

Though it originated in a rather different context, Elvis’ line “A little less conversation, a little more action, please” dances through my head when I reflect on the state of social justice organizing in Kansas City. The following thoughts come from observing, co-founding, and being employed by social justice groups here over the past few years. They represent my biggest concerns. As I will emphasize at the end, these problems don’t apply to all organizations, nor are they always present to the same degree.

First, many social justice groups focus heavily on events and gatherings where people simply sit around and talk. For some groups, this is literally all they do — either someone talking at the attendees, participants speaking with each other, or some combination of both. The primary purpose is education, raising awareness, whether of an ideology, a social issue, or an organization’s affairs.

Now, this has value. Education, discussion, and perspective-taking are important. But I question how much value, especially speaking comparatively (see the next section). The people who come to monthly meetings, community forums, panels, and so on are mostly going to be people who already care about whatever issue or ideology is being discussed, and thus already know something about it. It’s true, no one is ever done learning or listening; and it is further true that there will always be a few newcomers who don’t know anything about racism or socialism or what it means to have no healthcare. But most people who attend probably know a great deal about these things, through personal experience or study or earlier thought and discussion. One gets that impression by observation, at any rate. That’s why I suspect there are real limits to the value of these kinds of events, given the prior interest, knowledge, and worldview of most of the audience. That is not to say they should never be held! It’s simply to question why they should be the majority or totality of a group’s efforts.

Things worsen when these events grow repetitive. There are some organizations’ events I pop into every once in a while, only to confirm they’re basically the same thing every time. And having been on the planning side of things, I understand why, or at least one of the reasons why: you’re always thinking of the few newcomers. If you dive too deep into the subject, newcomers will get utterly lost, or at least you fear they will. So you end up sticking with the basics, and boring anyone who knows a bit about the issue.

Therefore, it’s easy to simply stop going to the gatherings of groups whose ideals you earnestly support. You may enjoy conversing with your friends and fellows, and hearing the perspectives of others, but in the end you may not feel you’re learning all that much, things may get repetitive and boring, and it dawns on you that while all this isn’t without value it’s not bringing about social change as speedily as other possibilities. Is sitting and talking really the best use of our time, energy, and money? All this is my experience, anyway. (I recently quit my job over this very issue; it gnawed at me for months, and finally one day I stood up at a conference of social justice groups in D.C., told everyone this was a waste of money and time that could have been better used, and walked out.)

There has to be something beyond sitting and talking. You have to give people who care about these issues something to do. But too often that isn’t coming; organizers and attendees pat themselves on the back as if they’ve accomplished something (I sense that white people at conversations on race especially feel like they’ve done something useful, alleviating their white guilt but not really bettering society much), then everyone starts preparing for the next monthly meeting.

Most importantly, the majority of what many organizations do does not confront power. Resources, time, and human energy poured into sitting and talking aren’t being poured into activities and tactics that put pressure on decision-makers, which does more good for society. Educating yourself and others is just Step One; it is just the first tool in the toolbox of social change. Then you actually get to work. Get out the vote for policies and candidates (if your organization legally can). Put your own initiatives on ballots. Harass the powerful in business and politics with petitions, messages, and calls. Boycott businesses. Protest and march outside workplaces and representatives’ offices. Go on strike, refusing to return to work until your demands are met. Engage in acts of civil disobedience: sit in and occupy your workplace or a political chamber, block streets as the powerful try to head to work, chain yourself to trees, and other illegal acts, facing down the risk of arrest or violence by police or bystanders. And you keep doing these things until you win. That’s how social movements succeed.

We need to shift from education to agitation. Imagine if instead of regular meetings, groups organized regular phonebanking, signature gathering, protesting, civil disobedience, and so forth. Imagine constant disruption on a host of issues. Imagine the impact. We should set specific, measurable goals (local control of the police for instance) and do those things until we win. As long as it takes.

We could combine agitation with service. We could raise money to help pay off people’s medical debts, help create strike funds for workers, organize volunteer efforts to clean up long-neglected neighborhoods, and other tangible ways of helping others. Such things don’t put pressure on power (though they can grow organizations, and solidarity among the people), and they address symptoms rather than the diseases agitation seeks to eradicate, but they’re better than sitting around.

I simply feel that some social justice organizations need to ask themselves: How much of what we do puts the pressure on? Is our money, energy, and time confronting corporate power, political power, police power? Why settle for just 5% or 10% of your activities actually pressuring someone? Why not make it 75% or 80%, and drive social change forward faster, doing more to better people’s lives?

True, some groups face obstacles. You may have very limited resources, making cheap meetings tempting. If you’re a 501(c)(3), you can’t support candidates. If you’re a grant-funded nonprofit, your energy may have to go into what is dictated by (oftentimes corporate) funders. What one wishes to do may have no grant to fund it; one must instead work within the grants that exist, whose requirements may not do much good for anyone. It’s a systemic problem. But I nevertheless imagine most slow-moving groups could find some room to shift from education to agitation, despite the challenges. If the limit is 55, why go 25?

Finally, the Left is fractured, which helps no one. Often Kansas City’s communists, socialists, and anarchists are all at each other’s throats. Differences between anti-capitalist ideologies have led some groups to simply declare they’re never working with these other groups ever again. And of course the radical Left as a whole often refuses to work with liberal or center-left groups that aren’t anti-capitalist, even when they’re fighting for a number of identical or near-identical policies. The liberal and center-left groups naturally don’t want to be associated with radicals who carry red flags, wear black masks, and talk about revolution. Yes, there are limits to cooperation here (you’re not going to get some revolutionaries to get out the vote for anything or anyone), and that’s fine, but there are many areas where cooperation is possible but is not being pursued for fairly stupid reasons. It is vital to the future of social justice work, and the future of countless people, for groups to find common ground and stand there in solidarity with each other, despite stark or maddening differences that lie outside such ground.

These divisions are so great that some groups won’t attend any protest or other event unless it’s their own. Unless they’re brought on board as a sponsor, some organizations wouldn’t dream of promoting important actions and activities being conducted by others. It’s not ours, why would we? That’s the attitude, one I’ve wrestled with professionally. Perhaps we feel it makes our own organization seem less legitimate: less of a leader or less independent or less active. Perhaps it’s the fear of lack of reciprocity. We’re spreading the word about their stuff, why aren’t they doing the same for us? There should really be some sort of formal agreement of mutual support for actions and activities that relate to shared values. You don’t have to help organize and plan everything everyone else is doing; just advertise it to your networks to help drive turnout and involvement in confronting power. You don’t have to promote things or participate in things you disagree with, just those you do. That’s solidarity, right?

This article certainly isn’t meant to indict all organizations in Kansas City. There are some that focus their efforts on pressuring the powerful and work with anyone who agrees on the solutions to specific problems. It’s urgent others move in that direction. That’s how we can be most effective at changing society in positive ways and do work we can take pride in.

On Monday, June 11, 2018, I will again be arrested for an act of civil disobedience with Stand Up KC and the Poor People’s Campaign. The time for sitting and talking is over.

If you feel as I do, join us.

Good Morning, Revolution

We have explored in-depth what socialism is and how it works, but it is equally important to consider how to bring it about.

Well, there is a word that has stirred in the U.S., and roared to life throughout history. The great poet and socialist Langston Hughes penned in 1932:

Good morning Revolution:
You are the best friend
I ever had.
We gonna pal around together from now on.
Say, listen, Revolution:
You know the boss where I used to work,
The guy that gimme the air to cut expenses,
He wrote a long letter to the papers about you:
Said you was a trouble maker, a alien-enemy,
In other words a son-of-a-bitch.
He called up the police
And told ’em to watch out for a guy
Named Revolution

You see,
The boss knows you are my friend.
He sees us hanging out together
He knows we’re hungry and ragged,
And ain’t got a damn thing in this world –
And are gonna do something about it.

The boss got all his needs, certainly,
Eats swell,
Owns a lotta houses,
Goes vacationin’,
Breaks strikes,
Runs politics, bribes police
Pays off congress
And struts all over earth –

But me, I ain’t never had enough to eat.
Me, I ain’t never been warm in winter.
Me, I ain’t never known security –
All my life, been livin’ hand to mouth
Hand to mouth.

Listen, Revolution,
We’re buddies, see –
Together,
We can take everything:
Factories, arsenals, houses, ships,
Railroads, forests, fields, orchards,
Bus lines, telegraphs, radios,
(Jesus! Raise hell with radios!)
Steel mills, coal mines, oil wells, gas,
All the tools of production.
(Great day in the morning!)
Everything –
And turn ’em over to the people who work.
Rule and run ’em for us people who work.

Boy! Them radios!
Broadcasting that very first morning to USSR:
Another member of the International Soviet’s done come
Greetings to the Socialist Soviet Republics
Hey you rising workers everywhere greetings –
And we’ll sign it: Germany
Sign it: China
Sign it: Africa
Sign it: Italy
Sign it: America
Sign it with my one name: Worker
On that day when no one will be hungry, cold, oppressed,
Anywhere in the world again.

That’s our job!

I been starvin’ too long
Ain’t you?

Let’s go, Revolution![1]

People don’t realize their power. They feel helpless in the face of injustice and miseries, not understanding the simple truth, that they have the power to take whatever they want. By joining with others, the people—the workers—can radically transform society whenever they please.

There are many tools in the toolbox of social change, all valuable at creating a better society (despite what anti-reformist puritans may say) but varying in effectiveness. Educate others. Harass the powerful in business and politics through petitions, messages, and calls. Vote for and aid socialistic policies and candidates. Run yourself. Put your own initiatives on ballots. Boycott businesses. Protest and march outside workplaces and representatives’ offices. Go on strike, refusing to return to work until your demands are met. Engage in acts of civil disobedience: sit in and occupy your workplace or a political chamber, block streets as the powerful try to head to work, and other illegal acts, facing down the risk of arrest or violence by police or bystanders. Orwell said, “One has got to be actively a Socialist, not merely sympathetic to Socialism, or one plays into the hands of our always-active enemies.”[2] Malala Yousafzai declared, “I am convinced Socialism is the only answer and I urge all comrades to take this struggle to a victorious conclusion. Only this will free us from the chains of bigotry and exploitation.”[3] The more allies that join the more effective these tactics become, and they have done incalculable good in our own country and around the globe, weakening or defeating occupation, white supremacy, patriarchy, starvation wages, and countless other evils.[4] Progress comes on the backs of the troublemakers.

Though violent revolutions (also in the toolbox) have seen freer, more democratic societies and significant system changes grow out of bloodshed—in our own country and elsewhere—a revolution doesn’t require violence. It may in fact be an insult to the power of the people. Nonviolent mass action (often termed a “revolution” if it grows large enough, though some want the word reserved for violent upheavals) is growing increasingly successful. When political scientists Erica Chenoweth and Maria Stephan examined violent and nonviolent revolutions between 1900 and 2006 they found that nonviolent campaigns were twice as likely to be successful. Since the 1940s the success rate of nonviolent efforts has jumped about 30%, while the success rate for violent efforts has fallen about 30%. The latter are more likely to result in unstable, anti-democratic regimes or bloody civil wars. The researchers found that zero campaigns failed once 3.5% of the population was involved (many won with far less). But only nonviolent revolutions reached this threshold—more people are willing to join a nonviolent revolt and more are physically able to join (children, the sick, the elderly, persons with disabilities).[5] Perhaps no one embodied all this better than Gandhi, who wrote:

My socialism was natural to me and not adopted from any books. It came out of my unshakable belief in non-violence. No man could be actively non-violent and not rise against social injustice, no matter where it occurred…

This socialism is as pure as crystal. It, therefore, requires crystal-like means to achieve it. Impure means result in an impure end. Hence the prince and the peasant will not be equalized by cutting off the prince’s head, nor can the process of cutting off equalize the employer and the employed… Therefore, only truthful, non-violent and pure-hearted socialists will be able to establish a socialistic society in India and the world…[6]

What would a nonviolent revolution that could achieve socialism look like? In short, skip class and work. Spend the day marching through the streets instead—and do not leave until your demands are met. Helen Keller said, “All you need to do to bring about this stupendous revolution is to straighten up and fold your arms.”[7] 3.5% of the U.S. population is a mass strike of 11 million people—and victory could probably be accomplished with fewer. Imagine a million people bringing D.C. to a standstill, with others paralyzing cities across the U.S. When workers come together they can shut down a street, a city, a state, or an entire nation. That’s how you win. Oscar Wilde wrote in The Soul of Man Under Socialism, “Disobedience, in the eyes of anyone who has read history, is man’s original virtue. It is through disobedience that progress has been made, through disobedience and through rebellion.”[8] No violence is necessary; you simply stop producing and bring society to a halt until power yields. True, there is always the risk of being expelled, fired, or arrested, beaten, or killed by the police or army (though they cannot easily get rid of millions of protesters, especially in freer societies). There is no revolution without danger. But prior generations (especially those of color) faced even greater dangers, and with fewer numbers secured lasting victories against our darkest and most oppressive systems. 

There is truly nothing the people cannot do, if only they unite and refuse to cooperate with power, from the Montgomery, Alabama, boycott that ended local segregated busing in 1956 to the protests that drove out Tunisia’s dictator in 2011.[9] At the time of this writing, in 2018, tens of thousands of West Virginia teachers went on strike, forcing every public school in the state to close, winning higher pay in nine days.[10] Then Arizona teachers, after nine days, won a 20% raise; Oklahoma teachers won the largest pay raise in state history in the same amount of time.[11] The strikes continued to spread. It’s these same proven tactics that can eradicate capitalism, and it is right to use them. Mark Twain said, “I am always on the side of the revolutionists, because there never was a revolution unless there were some oppressive and intolerable conditions against which to revolute.”[12] Langston Hughes wrote:

You could stop the
factory whistle blowing,
Stop the mine machinery
from going,
Stop the atom bombs
exploding,
Stop the battleships
from loading,
Stop the merchant
ships from sailing,
Stop the jail house keys
from turning
…You could
If you would[13]

Ordinary people are going to have to strike for direct democracy, universal healthcare, universal education, and guaranteed work or income. They are going to have to strike for worker ownership, occupying their workplaces and seats of political power. We will have to win a new legal right to equal ownership and power, to go alongside countless other workplace rights that have been won: minimum wage, workplace safety, anti-child labor, anti-discrimination in hiring, and more. This is the only freedom that disappears under socialism: the freedom to be a capitalist, exploiting and holding power over workers. More ethical rights often crush older ones. Kurt Vonnegut said capitalism was simply a set of “crimes against which no laws had been passed.”[14] The right of the worker to a minimum wage abolishes the right of the employer to pay him or her $1 per hour; the right of a person of color to be served at a restaurant ends the right of a white supremacist to deny him or her service; the right to be free crushes the right to own human beings. So will it be with the capitalist organization of the workplace. Victor Hugo warned the rich:

Tremble!…They who are hungry show their idle teeth… The shadow asks to become light. The damned discuss the elect. It is the people who are oncoming. I tell you it is Man who ascends. It is the end that is beginning. It is the red dawning on Catastrophe. Ah! This society is false. One day, a true society must come. Then there will be no more lords; there will be free, living men. There will be no more wealth, there will be an abundance for the poor. There will be no more masters, but there will be brothers. They that toil shall have. This is the future. No more prostration, no more abasement, no more ignorance, no more wealth, no more beasts of burden, no more courtiers—but LIGHT.[15]

Winning these demands is far from impossible. The seeds of American socialism have long been planted. Worker co-ops and direct democracy exist throughout the country. There are growing universal healthcare and tuition-abolition movements, rekindled by Bernie Sanders. One may be quite surprised to learn just how close the U.S. came to universal healthcare, universal early childhood education, UBI, and guaranteed work under Nixon and Carter, among others, after they felt some pressure from the people.[16] Elsewhere national direct democracy, free healthcare, and free college are taken for granted. UBI and the State as the employer of last resort have been tried and implemented. Co-ops are more common, and workers in capitalist firms are gnawing at capitalist power from the inside — for example, German unions fought for and won the right to have representatives on the boards of directors of large corporations.[17] Part of the reason other countries are ahead of us in these respects is that they have much stronger protest movements. In late 2016, India saw the largest strike in world history, with 150-180 million people participating.[18]

The thought of millions of Americans striking should not be inconceivable. Throughout its history the U.S. experienced strikes involving hundreds of thousands—even half a million—workers, many of which were victorious in the end.[19] Dr. King’s 1963 March on Washington for Jobs and Freedom and the anti-Vietnam War protest of November 1969 each had 250,000 in attendance. And protests have only grown. March-May 2006 saw the largest series of demonstrations in U.S. history, as 3-5 million Latinos, immigrants, and allies protested in 160 cities against anti-immigrant legislation.[20] That May Day, the “Day Without Immigrants” saw 1.5 million people refuse to go to work or school.[21] In January 2017, in perhaps America’s largest protest, 4 million people participated in the Women’s March in 600 cities.[22] Cities on every continent joined in. Indeed, international solidarity and coordination are growing. Six to 11 million people around the world protested the planned U.S. invasion of Iraq on February 15, 2003, the world’s largest single-day protest.[23] In October 2011, millions of people in nearly 1,000 cities in over 80 countries rose up to protest economic inequality and the corporate corruption of democracy. 10,000 people marched in New York (Occupy Wall Street), but some half-million protested in Madrid and 400,000 in Barcelona. In September 2014, 400,000 people rose up in New York City, and tens of thousands more in 150 nations worldwide, to push for global environmental protections. There are many more examples.

Human beings are uniting for sanity and justice across the globe. We may yet achieve what Helen Keller envisioned: “Let the workers form one great world-wide union, and let there be a globe-encircling revolt to gain for the workers true liberty and happiness.”[24]

 

Notes

[1] Hughes, “Good Morning Revolution,” 1932

[2] Orwell, “Why I Joined the Independent Labour Party”

[3] http://www.marxist.com/historic-32nd-congress-of-pakistani-imt-1.htm

[4] http://time.com/3741458/influential-protests/; https://www.bustle.com/articles/195826-7-peaceful-protests-from-history-that-made-a-real-tangible-difference; http://www.upworthy.com/7-times-in-us-history-when-people-protested-and-things-changed; http://darlingmagazine.org/5-times-peaceful-protests-made-difference-history/; https://www.vox.com/2016/4/15/11439140/verizon-cwa-strike-2016

[5] Shermer, The Moral Arc, 87-89; https://www.washingtonpost.com/news/worldviews/wp/2013/11/05/peaceful-protest-is-much-more-effective-than-violence-in-toppling-dictators/?utm_term=.28f6dfb17fe4

[6] Gandhi, India of My Dreams

[7] http://gos.sbc.edu/k/keller.html

[8] Wilde, The Soul of Man Under Socialism (1895)

[9] http://www.history.com/topics/black-history/montgomery-bus-boycott; http://www.independent.co.uk/news/world/middle-east/tunisia-tunis-arab-spring-north-africa-revolution-uprising-president-ben-ali-a8158256.html

[10] https://www.cnn.com/2018/02/26/health/west-virginia-map-school-closings-trnd/index.html; https://www.nytimes.com/2018/03/06/us/west-virginia-teachers-strike-deal.html

[11] https://www.cnn.com/2018/04/13/us/arizona-teachers-pay-raise-governor/index.html; http://abcnews.go.com/US/oklahoma-teachers-declare-victory-colorado-educators-walk-class/story?id=54499157

[12] Mark Twain, New York Tribune (April 15, 1906)

[13] Hughes, “If You Would”

[14] Vonnegut, God Bless You, Mr. Rosewater

[15] Hugo, “The Rich”

[16] https://www.vox.com/2014/8/13/5990657/basic-income-jobs-guarantee-child-care-flag-burning-btu-tax-balanced-budget; https://www.bostonglobe.com/opinion/2012/06/22/stockman/bvg57mguQxOVpZMmB1Mg2N/story.html

[17] Wright, Envisioning Real Utopias, 223

[18] https://www.jacobinmag.com/2016/10/indian-workers-general-strike

[19] https://www.vox.com/2016/4/15/11439140/verizon-cwa-strike-2016

[20] https://socialistworker.org/2013/05/14/confronting-anti-immigrant-bigotry

[21] https://www.democracynow.org/2006/5/2/over_1_5_million_march_for

[22] http://www.vox.com/2017/1/22/14350808/womens-marches-largest-demonstration-us-history-map

[23] https://www.huffingtonpost.com/entry/what-happened-to-the-antiwar-movement_us_5a860940e4b00bc49f424ecb

[24] Keller, “Menace of the Militarist Program”

TLJ

Thoughts on The Last Jedi:

 

  1. SAME OL’, SAME OL’

I confess I’m quite baffled some people think The Last Jedi somehow “subverted expectations” and took Star Wars in some bold new direction. Most of it was a lazy copy-paste from the original trilogy, much like The Force Awakens. I get that’s intentional; it’s still bad.

Much of TLJ is a retreading of scenes from The Empire Strikes Back (and Return of the Jedi). The Luke character seeks training from the hermit-like Yoda character; the Luke character goes to a dark creepy cave and hallucinates; the Yoda character tells the Luke character not to go try to help save people; the Luke character and Vader character ride up the elevator to the Emperor character, where the Vader character kills the Emperor character to save the Luke character, of course after the Emperor character shows the Luke character the Rebel fleet being destroyed outside the window; literally Yoda teaches Luke stuff; the main characters escape from their base planet in a ship at the beginning and are pursued by the Empire’s fleet for much of the film; the Rebels hole up in trenches on the Hoth planet and are attacked by Imperial walkers. Worst of all, even much of the dialogue is ripped straight from the originals (“I feel the conflict within you”).

Don’t get me wrong, there were new, fresh elements. The depressed, disillusioned Jedi; Leia showing a new Force power: survival and movement in space; mutiny among the Rebels; Luke’s Force projection; a casino planet; hyperspace kamikaze. These were great ideas, for the most part executed really well (minus the first one, see below, and the fact the Rebels opened a door to space to let Leia in without all dying). But new stuff is something we should expect in movie series, and indeed each Star Wars film has new stuff. Unique elements being present shouldn’t be groundbreaking.

So why else do people think it subverted expectations? Because Rey’s parents weren’t famous Jedi? Wowwww. Because the Darth Vader character killed the Emperor character in movie two instead of three? Woahhhh. Because we didn’t get a Snoke backstory and Luke doesn’t care about his old lightsaber and rich people fund both sides of the war? Slow clap. Maybe if you expected a higher-quality movie your expectations were subverted.

Think instead about all the ways the film could have betrayed expectations but did not. If Luke hadn’t been redeemed nor helped the good guys in the end; if Rey had taken Kylo’s hand, to either join him in building a new world without the war, try to turn him, or try to kill him later; if Finn hadn’t been saved by Rose, sacrificing a main character. I’m not necessarily advocating these things (except the one about Rey, absolutely), but just making a point about what really would have flipped the script, surprised us, shocked us. But of course Luke will be redeemed, Rey will fulfill her good gal role, and Finn won’t die. How dull.

 

  2. LUKE’S INANE THEORY

The idea of a depressed, hopeless, bitter Luke going searching for the first Jedi Temple at the edge of the universe was great. He’d failed as a Jedi master, lost all his students, and hadn’t stopped Kylo, his own nephew, from going evil. Luke is crushed and ashamed, plus is seeking answers to how things could have gone so badly for him, so he disappears. But those answers in the film make little sense, and TLJ misses a huge opportunity that will haunt me forever.

Luke explains to Rey that the Jedi need to end because they always end up training pupils who turn to the dark side. It happened to Darth Vader and Kylo. That’s the argument, that’s it. This sounds like an 8th grader’s idea. Sure, what Luke is saying is true, but it ignores important realities. A) Don’t the Jedi also do a lot of good that won’t get done without them? Do these positives truly get outweighed? B) More importantly, plenty of other big Sith baddies arise who were not trained by the Jedi. So if you shut down the Jedi, that won’t end the Sith. It’ll just let them take over everything. Which was basically happening. Luke can be depressed, but he shouldn’t be an imbecile.

What irks me is that, despite this being middle school-level thinking, it is actually so close to genius. Imagine if Luke actually found true Enlightenment. What if he’d begun suspecting, feeling in his heart, that something was wrong with the Force? What if he’d read the ancient texts and found a long-lost secret? Namely, that the more the Force is used, the easier it is for more people to access (it grows stronger), and because the Force always balances itself, the only way to finally defeat the darkness is to let go of the light. Thus, end the Jedi, shut yourself off from the Force, and so on, which would inevitably lead to Kylo’s death, Snoke’s death, a weakening of the Force, and the start of a new era without it. (The era doesn’t have to last, Disney has more movies to make, but it’s an interesting story for this trilogy.)

(This would explain why Rey, and the random kid with the broom on the casino planet, are so powerful and use the Force easily, without any training — the dark side’s growing, so more people can more easily access the Force, and the “light rises to meet” the darkness.)

Rey could have come to see this wisdom. She would have resisted at first, but her arc throughout the movie could have been to end up thinking as Luke did, and thus would have taken Kylo’s hand in hopes of convincing him too. Episode 9 would have been that struggle, and eventually Kylo would either come to agree or have to be killed; either way the trilogy ends with Rey being selfless, giving up any idea of becoming a Jedi, letting go of the Force, and as a result helping end the dark side and the Sith. That would have been a bold new direction, unique. (But no, Episode 9 will probably be good v. evil, where good wins, per usual.)

Luke could have either gone against what he’d learned to save Leia and the others as TLJ envisioned, leaving Rey to clean up the mess and get things back on track, or stuck to his guns, his Enlightenment, perhaps by physically going to the salt planet to stall for time, save the Rebels, and sacrifice himself, but not using the Force.

 

  3. THROWAWAY CHARACTERS & PLOTS

Like a lot of action films too timid to kill main characters, TLJ creates a throwaway character to fulfill the needs of a plot with a cool hyperspace kamikaze attack in it. This is Holdo, whom we meet in TLJ and never really have a reason to care about. Thus her sacrifice has no emotional impact, and neither does the scene. Imagine if it had been Leia, or Poe, or R2-D2, or literally anyone we had a relationship with. Even Ackbar would have been better (instead he’s simply blown out the window and forgotten about early on).

Snoke is likewise a throwaway character, even in The Force Awakens. He serves no real purpose in either movie, and really should never have existed. The plot needed Kylo to go evil, and no one could think of a way to bring this about other than whipping up an Emperor 2.0 (the fact that Kylo is blood-related to Darth Vader, and curious about him, apparently wasn’t enough). We don’t know anything about Snoke, other than the one-dimensional trait of being a bad guy who wants to, yawn, rule the galaxy, and thus we don’t care about him. He’s promptly murdered to take care of this issue. He’s pointless, and I think the creators realized it.

Another one is Phasma (literally just had to look up her name), who we saw for about 5 minutes in The Force Awakens. The creators seem to think that’s enough build-up to a big Finn-Phasma rivalry, animosity, and duel. Phasma dies and it’s hard to care.

Here is an appropriate place to include the film’s stupid cameos. This may seem like splitting hairs, but so be it. Maz Kanata’s shoehorned appearance I didn’t mind too much, even though it felt like fan service or just a reminder that she exists. But her taking a holo-phone call during a battle struck me as unrealistic for this world and a lame attempt at humor, and I’d have preferred her as just an old bartender rather than a hero Rebel warrior. But no matter. Yoda’s cameo was the painful one. He looked awful, for some reason had reverted to the crazy act he played for an hour with Luke in The Empire Strikes Back, and his presence, for me, was just another reminder that Luke should have reached Enlightenment in this film, should have gained, painfully, the wisdom that would change everything. He shouldn’t have needed Yoda. Instead, Luke needs to learn another lesson from him. The Empire Strikes Back Again.

Rose isn’t necessarily a throwaway character, but she only exists to join a throwaway plot. The journey to the casino planet, a location I find cool, simply made a long movie longer. It’s pretty clear the creators just wanted to give the Rebels, stuck on a ship being pursued, more to do. Thus, some Rebels sneak away to the casino world to find a hacker, and other Rebels stage a mutiny on the ship. Having both was really unnecessary. Imagine if Finn and Rose had simply joined Poe in his mutiny, and the film had focused longer and more deeply on the causes, planning, execution, and consequences of the mutiny. That would have cut out a pointless third plot. Then more time could have been spent on Rey and Luke, too, the main event.

This being said, Rose is sort of a shapeshifting character. She’s basically whatever the plot needs her to be in the moment. When we first meet her, she goes from heartbroken sister to fangirl Finn worshipper to badass Rebel guard in seconds, enough to give you whiplash. The plot wants some low-quality CGI horse creatures to trample a casino, but wants it to have some emotional weight and justification, so Rose exists. Her home planet, we learn, was robbed to feed fat cats like those at the casinos, and she broadcasts what’s about to happen (cringe) when she says “I’d put my fist through this place if I could.” Then, like magic, it happens! What a coincidence. (Between the cheesiness, spoilers, and bad CGI, this felt more like prequels-level stuff, as did BB-8’s operating a walker toward the end.) And of course, when Finn tries to sacrifice himself to save all the Rebels in the mountain, Rose becomes his lover, crashing into him to save him — presumably dooming all the Rebels. She says they had to “save who we love”…uh, that’s what Finn was doing (and what she was certainly not doing, if she had any love for the other good guys). Before this moment, Rose seemed like a decent person who cared about the Rebels. But Finn needed saving. Thus now she’s the embodiment of selfishness, willing to let them all die for a guy she met yesterday. I get that her sister died in battle and she doesn’t want to lose someone else, but we got zero indication she was capable of this monstrously unethical act (which the creators pass over like it’s nothing and will probably not address in the next film).

 

  4. FAILURE OF THE THEME OF FAILURE

While I don’t really think TLJ was sophisticated enough for themes, it’s supposedly all about failure. That’s the theme. Yoda says it. Failure’s the best teacher. That is always an interesting motif, but it’s not wholly accurate here. There’s less teaching and more just…lucking out.

True, lots of things go wrong for our characters. But, as my brother Sam pointed out, there’s no consequence to any failure. Seriously. Finn and Rose fail to find the hacker; it’s OK, another one happens to be in the cell they’re locked in. What a happy coincidence. Rey fails to be properly trained by Luke; no problem, she is still able to lift a mountain of rocks and save everyone in the end. Poe fails to follow orders, and his mutiny fails; he learns a lesson, but he’s never really punished. The Rebels fail to disable the bad guys’ tracker; it’s fine, a throwaway character saves them all. Finn fails to sacrifice himself; it’s good, all the Rebels make it out of the mountain anyway.

“Inconsequential failure.” Great theme.

 

  5. REY AND CHARACTER DEVELOPMENT

I wish the creators had written better Rey-Luke dialogue and not left their relationship seeming so…underdeveloped. More broadly, Rey, our main character, doesn’t really have much of a character arc. That’s what makes stories interesting: when characters face struggles and change, for good or ill, because of them. Sure, Rey becomes more friendly with Kylo, which I liked (Reylo is absolutely how this trilogy should end; it would have been cooler in my version, where Kylo is convinced after much struggle to let go of the Force, but whatever). Sure, she gives up on getting Luke to come with her and goes to fight on her own. But are these quality arcs? Not really. Overall, she ends the movie where she began: a hero, fighting for righteousness, who is super strong with the Force despite no training. Her perceptions and beliefs and attitudes haven’t really changed. At least in The Force Awakens she lets go of staying behind on Jakku to await her family, accepting they are never coming back, freeing her to a life of adventure. That’s a big difference in her between the beginning and end of the film. Rey is our main character, our beloved hero. She needs an arc with substance.

Other characters get more. Luke is redeemed. Poe perhaps learns not to be such a hothead, to follow orders, because you may not have the full picture. Finn wants to run away and save himself at the start, then is willing to die for the Rebels in the end. Kylo has a slight arc, changing from someone who seeks Snoke’s approval and spars with Hux into a strongman who neither needs nor tolerates either. Not every character in a film needs a substantial arc, but the main one does. Rey is left out, and thus her story in TLJ isn’t as interesting as it might have been.

 

  6. DIALOGUE

Don’t let the brevity or position of this last point fool you: dialogue is a massive problem in this movie. Most lines are very poorly written, making them difficult to deliver even for decent actors, like when Luke explains that the Jedi have to end because they train future Sith. There are moments when characters literally sound as if they are reading off cue cards, offering a bland, stale, I-am-acting delivery, notably during one scene when Rey is asking for Luke’s help (for the third or fourth time) by Luke’s meditation rock. Many lines are cheesy, such as when Finn and Rose express their delight that they just destroyed the casino, and everything sounds like a cartoon. In action-adventure films like this, a little bit of cheesiness can make for some funny moments, but The Last Jedi, sadly, shows the peril of overdoing it.

The Declining Value of Art

What gives art value? That is, inherent value, not mere monetary value. Perhaps it is actually quite similar for artist and spectator. The artist may impart value on her work based upon how much joy and fulfillment the process of its creation gave her, how satisfied she is with the final product if it matched or came close to her vision, how much pleasure others experience when viewing (or listening to) it, or how much attention, respect, and fame (and wealth) is directed her way because of it. Likewise, the spectator may see value in the work because he knows, perceives, or assumes the joy and satisfaction it might give the artist, he’s interested in and enjoys experiencing it, or because he respects a successful, famous individual.

There are various forces that impart value, but a significant one must be effort required. This is, after all, what is meant by the ever-present “My kid could do that” muttered before canvases splattered with paint or adorned with a single monochrome square in art museums across the world — pieces sometimes worth huge sums. People see less value in a work of art that takes (on average between human beings) less effort, less skill. Likewise, most artists would likely be less crushed were a fire to consume a piece they’d spent a day to complete versus one they’d spent a year to complete. To most people, effort imparts value.

I’d be remiss, and haunted, if I didn’t mention here that this demonstrates how most people think in Marxian ways about value. (If you thought, dear reader, that in an article on art you’d find respite from socialist theory, you were wrong.) Marx wrote that “the value of a commodity is determined by the quantity of labour” needed to create it (Value, Price, and Profit). Again, not mere monetary value. This doesn’t mean “the lazier a man, or the clumsier a man, the more valuable his commodity, because the greater the time of labour required for finishing the commodity.” Rather, Marx was speaking about the average labor needed to create something: “social average conditions of production, with a given social average intensity, and average skill of the labour employed.” Labor, effort, imparts value on all human creations, whether it’s art, whether it’s for sale, and so forth. Doesn’t it follow, then, that what takes less effort has less inherent value?

This train of thought — how the effort put into paintings, drawings, writings, photographs, sculptures, music, etc. affects their value — arose during an interesting conversation on how much respect should be awarded to each of these forms. Respect was based on effort-value. In other words, does a “good” photograph deserve the same respect as a “good” painting? Does a “great” piece of writing, like a book, deserve the same admiration — does it have the same value — as a “great” sculpture? One may feel at first that they shouldn’t be compared. But all forms have value because they require effort, and thus if we can determine how much effort, on average between human beings, is required for two compared art forms, and then decide one takes more effort, we will have also found a difference in value. (One need not worry about “great” being subjective, because we are only talking about how each individual personally views the value of different art forms; perceived effort will also be subjective, which is the whole point, as it determines one’s view on value.)

If it helps make this clearer, we might start with a comparison within a single form. Which takes more effort on average: to record a single or an album? Cartooning or hyperrealist drawing? Most people would say the latter finished products have more value because of the greater effort typically required (work may be a breeze for some hyperrealist artists, as easy as cartooning for cartoonists, but remember we are speaking of averages).

Now what about the average effort to create a “good” photograph versus, say, a “good” (let’s say realist) painting? It seems it would certainly take more effort to make a good painting! The technology of photography always advances, making tasks easier and more accessible, and thus the form grows more widespread. After film yielded to digitalization and computerization, it became much easier to take a nice photograph — it’s easier to do and easier to do well. Exposure, shutter speed, aperture, ISO sensitivity, focus, white balance, metering, flash, and so on can now be manipulated faster and with greater ease, or automatically, requiring no effort at all. Recently it’s become possible to edit photographs after the fact, fixing and improving them; you just need a program and the ability to use it. Because the form has never existed without technology, the average effort to create a great photograph has probably never rivaled the average effort to create a great painting, but the gap was smaller in the past. Today anyone with the right technology can produce a great photo; true, it requires know-how, but surely the journey from knowing nothing to mastery is shorter and easier than the same journey for realist painting. (Film — now digital video — production is a similar story.) Because the effort needed for the same result — a good photo — has declined over time, the value of the form overall has also decreased. (This does not mean some photographers aren’t more creative, skilled, or knowledgeable than others, nor that there doesn’t remain more value in the work of hardline traditionalists who refuse to use this or that new technology.) But painting — the technology of painting — hasn’t really changed much through the ages; it still requires about the same effort to produce the same quality work, therefore its value holds steady. If “painters” start having robots paint incredible works for them, or aid them, there would obviously be a reduction of value; no one is as impressed by robot paintings or machine-assisted paintings.

Music is facing a smaller-scale attack on the value of the form with digitally created instrumentals, autotune, and so forth. Perhaps the value of writing declined slightly as we shifted from penmanship to typewriting to computer-based writing (with backspace and spell-check!). It will decline again as voice transcription programs are perfected and grow in popularity.

Sculpting, painting, and drawing — the forms least infected by technology — still essentially require the same effort to do, and same effort to do well, as they have throughout human history. The tools and equipment have changed some, yes, but not nearly as much as those of other forms. Their value will remain the same as long as this state of affairs persists. If music, writing, film, and photography continually grow easier to do well, their value, by this metric, will decrease, slowed only by those who valiantly resist the technological changes. This does not mean a splatter painting automatically has more value than a beautiful photo — remember we’re each personally comparing the value of what we subjectively see as “good” paintings versus “good” photographs; you may not see a splatter painting as good. Rather, it may simply mean that what you see as a good painting takes more effort on average to create, and thus has more value, than what you see as a good photo. Perhaps also more than a good book, song, or video, depending on the size and scope of the projects being compared (it may surpass a good video but not a good film, or a good short book but not a good tome; up to you).

It could be that effort required is somewhat rule-based, too, rather than just technology-based. Music, writing, film, and photography rely on more rules. That’s probably why technology is encroaching more quickly on such forms. In music, keys, pitches, quarter-notes, half-notes, and so forth are rules. Build a program that knows and follows them and you don’t need human players or singers anymore at all. Writing has spelling, grammar, and punctuation rules. So spell-check and A.I. can help you or do it all for you. Film has frames per second, photography f-stops, and together a thousand other rules. Devices can handle them. Artists break the rules all the time, but that doesn’t mean their form doesn’t rely more heavily on them than other forms.

Sculpting marble or clay into something recognizable, adorning a canvas with life, or sketching a convincing face are perhaps not activities that rely as much on rules. This does not mean there are none; for instance, there are drawing guidelines to make a face proportional and grids to help you transfer reality to the paper. Again, the rules may or may not be followed. And this does not mean an A.I. couldn’t do such activities, because it could. It’s just hard to define what rule you’d use to draw something so perfectly it looks like a photograph; but you know you have to hit certain notes to sing something perfectly. You have to be talented to do either — but maybe one has more foundational rules to get you there.

I’ve sometimes wondered if closing the “effort gap” or “talent gap” between novices and incredible artists is easier in some art forms than others. Meaning, is the gulf between an inexperienced writer and an incredible writer smaller than the gulf between an inexperienced painter and an incredible painter? What about the gap between a new photographer and masterful one compared to the gap between a new sculptor and a highly advanced one? On average, that is. I would suppose the art forms that in any given era take more effort would have the largest chasm to cross. So it would be harder to become a master painter than a master photographer. Perhaps harder, also, than becoming a master cinematographer, writer, singer, or even musician. (I think this view explains why I personally respect and admire the best works of sculpting, painting, and drawing more than the best works of other forms, though music is high up there too. And that’s coming from a writer.)

If so, perhaps rules have something to do with it. We know that practice makes perfect. Some are born with unique gifts, no question, but others go from zero to hero through practice. Might more rules make it easier? Do human beings learn better, faster, with those defined rules? If you stripped away the aforementioned technology of singing, music, and writing (it’s impossible to do this with photography and film), would the rules of the forms alone make these things easier to master than art forms with fewer rules? It’s interesting to consider.

Conservatives Are More Likely to Be Racist

One early morning at Salem State University in Massachusetts, students stumbled upon vandalism of benches and a fence at the baseball fields. Spray paint had been used to write “DIE NIGGERS,” “Whites Only USA,” and “Whites #1.”

What are your first thoughts concerning who did this? You’re a reasonable person, so you know this might be a hoax. That happens from time to time. But if this was done in earnest — by someone who sincerely wanted to degrade and threaten black people and extol the white race — who seems most likely? It seems likely the culprit was white. Gun to your head, it was probably a man, or more than one, just a couple buddies out having some “fun.” Perhaps someone younger, a student; this is a school, after all. Now, was this person more likely liberal or conservative? Who would be more likely to write “Whites Only” or “DIE NIGGERS”? Left or Right, quick.

If this was no hoax, and if we were all to be honest with ourselves, the probabilities might increase as we move along the political spectrum. In other words, the far Left seems least likely (recall we’re focused on content here, not the act of vandalism itself, which some on the far Left do happily partake in), the mainstream Left still unlikely, the center perhaps somewhat likely, the mainstream Right more likely, and the far Right most likely. At no spot on the spectrum is the act impossible, but such a probability scale shouldn’t be all that controversial for anyone with a handle on reality.

In this particular case, we needn’t wonder long, as the vandals included “Trump #1” in their graffiti. This was part of the hate crimes that swept the U.S. after Trump’s election, as Trump supporters gleefully attacked, verbally and physically, Hispanics, Muslims, blacks, Jews, gays, and women — weeks of terror.

But, one protests, the answer to the hypothetical was biased, and an anecdote is a weak argument. True enough. Conservatives and liberals always dig up examples, point at each other, and insist the other ideology is more prone to racism. (Here we mean racism against people of color; conservative whites who think anti-white hate from liberals is a bigger problem will have to educate themselves elsewhere.) How can we know who is right?

One way is to simply ask people their views.

In 2014, Nate Silver and Allison McCann looked at Americans’ answers regarding race in the General Social Survey, which has been issued for decades. From 1990 to 2012, self-described Republicans were 5-10% more likely to object to a close relative marrying a black person, 5-20% more likely to believe blacks “lacked the motivation” to get out of poverty, and 2-10% more likely to say blacks are more lazy than hardworking. Republicans were also 2-5% more likely to think blacks more unintelligent than intelligent, until things evened out between liberals and conservatives in 2009.

Things have been about even regarding comfort with living in a diverse neighborhood, with only occasional spikes in conservative opposition, and even concerning voting for a black president, except between 1994 and 2007, when in fact white Democrats expressed stronger opposition.

The good news is that for both groups racist views are in general declining. Majorities today do not have (or admit to) explicitly racist views; this article is not intended to posit that all conservatives are racist. The bad news is that for both groups today over 20% dislike the idea of living in a neighborhood that isn’t majority-white, over 20% oppose interracial marriage in their family, over 30% think blacks are lazy, over 40% that they lack motivation, and 15% that they are unintelligent. And that’s just the Americans who will admit to extreme (conscious) racism, as this is a survey. So while this article is indeed intended to settle a recurring debate, it is also a condemnation of (and call for reflection from) us whites on the Left. Our scores, while better, are hardly anything to celebrate.

The aggregate of all responses looked like this:

[Chart: aggregate General Social Survey responses on racial attitudes, by party]

The 2012 American National Election Studies survey revealed similar answers. White Republicans were 18% more likely than white Democrats to see black people as lazy, with an 8% lead concerning belief in lack of intelligence and an 18% lead in thinking blacks had too much influence in politics (at the time, there was a black president, one black Supreme Court justice, and no black senators; the country had seen a single black president, six black senators, and two black Supreme Court justices since 1776). Nearly 35% more white Republicans thought blacks would be just as well off as whites if they’d try harder — a belief requiring a racist premise about black laziness.

[Chart: 2012 American National Election Studies survey results]

But the data from these two surveys, and others, can be a bit misleading — and not in a way that will comfort the Right. By lumping together Democrats of all sorts (centrist, Left, far Left), and doing the same with Republicans, the data reflects more timid differences in ideological views of race. As we move further to the right, views grow increasingly racist; as we move further to the left, views become decidedly less racist:

Among strong Democrats and strong Republicans, the numbers [concerning who thinks blacks are lazy] become even more stark, 20 percent compared with 46 percent. Furthermore, 41 percent of whites who say they are extremely conservative believe black people are lazy, compared with 14 percent of whites who say they are extremely liberal. On the question of whether black people are unintelligent, it’s 30 percent for extremely conservative whites versus 11 percent for extremely liberal whites. This clearly suggests that racial animus is more prevalent among conservatives and Republicans.

That is significant. It also mimics the probability scale envisioned above.

A 2016 YouGov survey asked white people if they thought black people typically “give more to society” or “take more.” For a large majority of conservative respondents, no amount of good black people do for society — teaching students, creating art, running a business, waving hello, nothing — could outweigh the racist laziness myth.

[Image: chart of 2016 YouGov survey results]

In an article called Trump Did So Well Because Many Conservatives Are Just Like Him, I collected surveys and studies to show how a significant portion of Trump supporters (though not all) hold extremely bigoted views. But the article didn’t dive into how much worse these views were compared to Clinton supporters. A 2016 Reuters/Ipsos poll of 16,000 Americans found that

In nearly every case, Trump supporters were more likely to rate whites higher than blacks [concerning positive traits] when their responses were compared with responses from Clinton supporters.

For example, 32 percent of Trump supporters placed whites closer to the top level of “intelligence” than they did blacks, compared with 22 percent of Clinton supporters who did the same.

About 40 percent of Trump supporters placed whites higher on the “hardworking” scale than blacks, while 25 percent of Clinton supporters did the same. And 44 percent of Trump supporters placed whites as more “well mannered” than blacks, compared with 30 percent of Clinton supporters.

Trump fans were also more likely to dislike minorities compared to other, more sane, Republican voters.

There is a wealth of other surveys showing results comparable to the four included here; they are not difficult to find.

Moving on from surveys, there are also scientific studies that indicate conservatism is deeper in the racist mud than liberalism. Research shows that dislike of government services and spending, especially welfare, increases as racial animosity does. A 2014 study from Northwestern University showed that whites with no political affiliation more strongly favored conservative policies when distressed over increasing racial diversity in the U.S. In fact, even those with a political affiliation — any — who became distressed moved to the right. A 2012 study of the U.K. showed social conservatism is linked with greater prejudice. Conservatives were less likely to agree with statements such as “I wouldn’t mind working with people from other races.” Other studies link antiracism and social liberalism. A 2013 study found that American conservatives had less favorable views of black people than liberals, unless black people had conservative values and attitudes (liberals also favored persons of color who thought like them). As with Trump, greater anti-black attitudes among citizens more strongly predict votes for the Republican candidate, even when he’s not running against a black man, for example with Bush. Areas of the South with histories of strong Klan activity correlate with stronger Republican loyalty. And so on.

No, not every survey nor study will fit into this pattern, but most do. That consistency across sources deserves serious consideration.

All this makes sense in light of what “conservative” and “liberal” actually mean at the conscious and subconscious levels, and how their adherents opposed or supported the civil rights movement and other social movements based on those meanings (see Which Broadened Freedom For the Oppressed? Liberalism or Conservatism? and Why Liberals and Conservatives Think Differently, From Someone Who’s Been Both). This holds regardless of ideological changes within America’s parties, a topic conservatives who insist “Liberals are more racist because the Democratic Party supported slavery and the KKK” desperately need to study (see Republicans Used to be Liberal, Democrats Conservative). While not all conservatives are racist by any means, the evidence suggests that, while both sides have work to do to master true racial tolerance, conservatives as a group lag further behind.

The Case For Direct Democracy

Ultimately, “socialism” is the idea that power, not merely wealth, should be made “social”—spread out among the people. That is to say, socialism simply means more democracy. We have seen how worker cooperatives are more democratic structures than capitalist businesses, relying on representative democracy (elected, removable managers and executives) or direct democracy (all decisions made by all workers on a one-person one-vote basis), sometimes called pure democracy. On a similar note, the solution to our troubled political system is a more democratic structure. Under such a system, the people control their own destiny.

Jack London wrote that socialism’s

…logical foundation is economic; its moral foundation, “All men are born free and equal,” and its ultimate aim is pure democracy. By “all men are born free and equal” it means born free and with equal opportunities to earn by honest labor—mental or physical—a livelihood. By a pure democracy is meant a form of government in which the supreme power rests with and is exercised directly by the people instead of the present form, which is a republican form of democracy, in which the supreme power rests with the people, but is indirectly exercised by them, through representatives. Representatives may be corrupted, but how could the whole people be bribed?[1]

Imagine having a direct say in public policy: the ability, like Congress has now, to vote yes or no on proposed laws. Imagine heading to your voting place not every two or four years, but many times each year. Your vote would decide national policy. There is more than one reason for America’s abysmal voter turnout, but a large part of it is that people do not believe their vote will affect anything or bring about meaningful change.[2] With politicians mostly representing the interests of the rich individuals and corporations that fund them, this attitude is understandable. Imagine how this could change if the people had real power, living in a society where the citizens controlled the State rather than the reverse. As London pointed out, it would be very difficult for special interests to influence policy. Citizens are not running for office. They cannot be bribed with campaign contributions, and probably won’t be involved in secret meetings or backroom deals. Corruption on a scale that would be effective and remain secret would be impossible. This does not mean there wouldn’t be challenges—when a popular vote takes place, the key for special interests is to attack information itself, misleading the public into voting a certain way. But there is no question that giving all voters lawmaking power would decimate corruption.

How would this work? Citizens would need direct initiative rights. Such rights allow people to place a proposed law on an upcoming ballot for people to vote on. Passionate individuals work together to draft legislation, file it with local officials, and gather the required number of signatures to put it on the ballot (no, this is not something a couple of jokers can do in an afternoon; it has to have a reasonable, serious level of support). After the vote takes place, and if the measure passes, government departments enact and enforce the measure as they do today after a legislature passes a law. “Imagine everybody governing!” exclaimed Victor Hugo, who had socialist leanings even if he never adopted the label. “Can you imagine a city governed by the men who built it? They are the team, not the coachman.”[3] And not just one’s city, of course, but one’s state and nation—people’s legislation and the people’s say at every level.

This is a radical change. Socialism would take decision-making power away from city councils, state legislatures, and the U.S. Congress and give it to constituents, ending these institutions as we know them. Rather than electing people to vote on issues for us, we could elect or approve people to enact and enforce the decisions we make: the heads of government departments. Today the president selects a secretary of education, homeland security, transportation, and so on, as well as the heads of the CIA, FBI, and other agencies, and Congress approves them. Then they take congressional legislation and make it a reality. Tomorrow the people will either elect candidates to these positions or take over the traditional role of Congress and approve or disapprove the president’s selections. Those directly responsible for carrying out the people’s will should be answerable to the people, just as presidents and representatives are today. (In contrast to today, candidates, from multiple parties with equal ballot and debate access, will either enjoy publicly financed elections or rely on small donations from individuals—co-ops and organizations should not be able to give, to avoid quid pro quo politics. A $100 cap for each adult leaves $25 billion for candidates to compete for.)
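The $25 billion figure above follows from simple multiplication. A minimal sketch, mine rather than the author's, checking that arithmetic; the ~250 million adult-population figure is an assumption not stated in the text:

```python
# Back-of-the-envelope check of the claim that a $100 donation cap per adult
# leaves roughly $25 billion for candidates to compete for.
ASSUMED_US_ADULTS = 250_000_000  # assumed approximate U.S. adult population
DONATION_CAP = 100               # dollars per adult, per the proposal

def max_campaign_pool(adults: int, cap_dollars: int) -> int:
    """Upper bound on total campaign funds if every adult donates the cap."""
    return adults * cap_dollars

print(max_campaign_pool(ASSUMED_US_ADULTS, DONATION_CAP))  # 25000000000, i.e. $25 billion
```

In practice not every adult would donate, so the real pool would be smaller; the cap only bounds it from above.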

Such a proposal may cause consternation. Arguments from tradition will be raised: the U.S. was founded as a representative democracy, so we mustn’t change it. Well, systems, laws, and practices can always be improved, and typically are. The U.S. scrapped its first constitution, the Articles of Confederation, after seven years because its designed structure was flawed and ineffective. The 12th Amendment got rid of a system where the losing opponent in presidential races became vice president. In 1913, we finally let the American people directly elect senators. The 22nd Amendment created presidential term limits. Socialists are interested in positive change, not tradition, which helps explain why American socialists were at the forefront of every major justice campaign—abolition and civil rights, women’s rights, labor rights, the anti-war movements, etc.[4] The U.S. has a rich socialist history, from socialists writing the “Pledge of Allegiance” to founding the Republican Party![5]

One major objection is that it’s a bad idea to give the people so much power, as they could vote for awful things, with a mere 51% majority ruling over and oppressing the minority (“mob rule,” “tyranny of the majority”). That’s what the founding fathers knew, so best to trust them. It’s true that most of the founders detested democracy, precisely because they saw it as a threat to their riches and power.[6] (The same sentiments were expressed by the powerful later on, such as in the Trilateral Commission’s 1975 Crisis of Democracy report.[7]) So they made sure ordinary voters could not elect justices (we still do not), nor directly elect the president (we still do not, as the Electoral College persists), nor directly elect senators. The people only directly elected members of the House, yet only (white, male) property owners were allowed to vote, further disenfranchising the poor and keeping power in the hands of the better off. Only in 1856 did the last state, North Carolina, do away with property requirements to vote.[8] Yet somehow people who gripe about majority rule don’t realize that’s how it works right now. While sometimes the bar is higher, a simple majority decides the fate of most bills in Congress. As little as 51% of congresspersons rule from issue to issue. A majority carries the day in city councils, state legislatures, Congress, and every election except the presidential election from time to time. Direct democracy simply alters which majority makes decisions, giving ordinary people a direct say in the decisions that affect them. “Fear of the mob is a superstitious fear,” George Orwell wrote. “It is based on the idea that there is some mysterious, fundamental difference between rich and poor… The average millionaire is only the average dishwasher dressed in a new suit.”[9] Yes, the majority has the power to make awful decisions—in precisely the same way Congress and other bodies do now. But you would nevertheless have a say in the matter, whether trying to stop a bad idea or joining others in making a mistake. As with worker cooperatives, it is better that the many fail together by their own hand than be destroyed by the few from above.

Additionally, there are limits to the awful things that a popular will could enact. Yes, mistakes will be made. That’s democracy, whether direct or representative; it’s messy. But remember, checks and balances still exist under this system. It’s true, there is one fewer; today a bill must pass both House and Senate to see the light of day, while direct democracy replaces them with one chamber, the people. (There are countries, such as Denmark, Luxembourg, Sweden, Finland, Israel, and New Zealand, which have only one house, a unicameral congress.[10]) But there would still be a president to veto legislation. There would remain a Supreme Court to declare laws unconstitutional. Only a supermajority of the people could change the Constitution, as it is with Congress today (state legislatures holding a constitutional convention would not be possible, as state legislatures would be replaced by a state’s populace). Fears about the prejudiced majority oppressing smaller groups of people can be put aside. It’s possible, but no more likely than it is now, because checks and balances will be preserved. And it goes without saying that direct democracy gives the people power to end injustices too. As Arthur Miller, best known for Death of a Salesman and The Crucible, said, “Socialism was reason.”[11]

The most sensible concern is how direct democracy can be structured to run well. Much legislation today is very long and highly complex. Bills are introduced by politicians and go through committees, where representatives of different political views research, discuss, and modify them. They go to the House or Senate floor for debate and more changes and amendments before the vote. With direct democracy, aren’t we sacrificing a crucially important vetting and compromise process? Are ordinary people who use initiative rights really smart enough and experienced enough to create laws? Won’t some laws have to be so complex, and so full of unintelligible legislative jargon, that a typical American voter would be unable to make an educated decision on them? With many bills running hundreds or even thousands of pages, won’t the length alone dissuade people from voting, or encourage voting without reading the details?

While a “vetting and compromise process” is valuable in theory, in practice all it means is total gridlock and the death of the bill. Only 1-5% of the many thousands of bills introduced during each Congress become law.[12] Almost all of them die in committee, never making it to the debate floor.[13] This is not because they are all bad bills, but because the parties don’t agree on anything. Americans are tired of such inaction, and direct democracy is the cure. Some may ask: why not keep Congress, let it craft laws, and require a popular vote for passage (a referendum democracy)? While this, whether or not combined with initiative rights, would be far better than a representative system, it would nevertheless 1) still allow special interests to infect legislation, which the populace would likely remain unaware of when voting, and 2) require committees and compromise to be at all meaningful (otherwise it’s just groups of similar thinkers putting what laws they like before the people, i.e. the initiative process), resulting in the usual gridlock. But direct democracy in fact has its own vetting mechanisms. If an initiative petition cannot garner enough support, it dies. If the question makes it to the ballot and is not quite what most people want, it will fail. Vetting lies in the discussion and debate surrounding proposed legislation before the vote, as citizens of different opinions study it, weigh it, and try to convince others to vote this way or that.

The rest of the questions, concerning the competencies of the people getting questions on the ballot and the complexities of legislation, are not major concerns when we look more closely at how the initiative process actually functions. Because filing the legal paperwork, gathering enough petition signatures, and getting out the vote is not an easy task, it is usually undertaken by serious organizations: political advocacy groups, grassroots organizations, non-profits, and so on, which are typically made up of or well-connected to lawyers and the politically experienced—people who are just as capable of designing legislation as politicians in Washington. Next, the question that goes before voters is not usually the full text of proposed legislation, but rather a summary in plain language created by public officials.[14] The full text is of course publicly available, online and elsewhere (caps on legislation length are in the realm of the possible too). While it is true that many voters will not read the full bill, the summary must accurately describe it. This functions just fine in the real world.

The United States already uses initiative rights and direct democracy to pass or reject legislation, at the city and state levels. It is legal in twenty-four states and Washington, D.C.[15] (Some, however, use indirect initiatives, which force a legislature to vote on citizen-crafted bills.) In the November 2016 election, 150 measures were on ballots throughout these states. California, Nevada, and Massachusetts voters legalized recreational marijuana use; Arizona, Colorado, Maine, and Washington raised their minimum wages; Nebraska restored the death penalty and Oklahoma made it harder to get rid of; Colorado legalized medically assisted suicide; California, Washington, and Nevada tightened gun laws. Voters in Arizona rejected recreational marijuana legalization; Maine shot down stricter gun control; California declined to abolish its death penalty; Oregon, Washington, Colorado, Missouri, and North Dakota rejected tax increases.[16] You won’t always get what you want. That’s democracy. But you will, no matter your beliefs, have a voice. Things will get done. No politicians gridlocked in committee. No representatives on the voting floor following the whims of their biggest donors. Just ordinary people creating real change for themselves, no representatives needed. “I’m a socialist,” one of H.G. Wells’ characters from In the Days of the Comet said. “I don’t think this world was made for a small minority to dance on the faces of every one else.”[17] The Canadian province of British Columbia and all German states also enjoy initiative rights.[18]

All this demonstrates, you’ll notice, that direct democracy works on a large scale. California is the most populous state in the nation, with nearly 40 million people in 2017. Florida, with nearly 21 million people, is up toward the top too. State direct democracy works well, and has since 1898, when South Dakota became the first state to adopt the initiative process.[19] A wide range of U.S. cities use it as well, and have since the town halls of colonial times. Direct democracy has existed in local government throughout human history, from the city-state of Athens, Greece, in the 5th century B.C. to Porto Alegre, Brazil, today.[20] Interestingly, since 1989, Porto Alegre, a city of over 1.5 million people, has allowed participatory budgeting. Citizens participate in the design of the annual city budget, and everyone has the right to vote to approve or strike down the finished product. Since this democratic idea, pushed forward by socialists, was enacted, funds have shifted dramatically to poorer, high-need areas of the city. The process is marked by transparency and lack of corruption.[21]

There are in fact countries that use pure democracy. Switzerland, a nation of eight million people, has had an initiative process at the federal level since 1891. Since then twenty-two initiatives have won out of over 200 proposals. The country also has a parliament that passes laws; it’s therefore called a semi-direct democracy (the people, however, can veto legislation parliament passes through the referendum process). Popular votes take place up to four times annually. In 2016, the populace rejected a law to give each citizen a guaranteed income. Changes to their constitution require majority support from the people and majority support from the cantons (states).[22] While the Swiss majority has at times passed prejudiced, oppressive laws, the Human Freedom Index, published by conservative and libertarian institutes, nevertheless ranks it as the freest nation in the world.[23] The Philippines and the European Union likewise have initiative rights.[24] There is no reason direct democracy cannot work at the national level. (If we were to consider the referendum process, in which legislatures craft laws and once every blue moon the people vote on them, we would have a very long list of participating nations, including some of the most populous in the world, such as Brazil, with 209 million people, and Bangladesh, with 165 million.[25])

Pure democracy is not a perfect system. Yet it gives the many the ability to address the problems we’ve explored elsewhere: to give workers ownership, to protect the planet, to reject war, to guarantee the rights and services people need, and so on. As Mark Twain once asked, “Why is it right that there is not a fairer division of the spoil all around? Because laws and constitutions have ordered otherwise. Then it follows that laws and constitutions should change around and say there shall be a more nearly equal division.”[26] This does not mean they will (the majority may vote for capitalism!), but the mechanisms make it possible. Changing hearts and minds so the system can be used to create a fully socialist society will be just as important.

The idea of broadening democracy raises an important question: how far should we go? If “power to the people” is the goal, what about electing Supreme Court justices and federal judges? Should we abolish the Electoral College and elect a president by popular vote? Give the people recall rights, which allow a supermajority to remove officials, from sheriffs to the president, from office? The answers will depend on how much we can empower the common person while maintaining effective checks and balances. The country’s hundreds of top judges and the nine justices today serve for life. Perhaps the people rather than representatives could approve them; perhaps they could be elected—but certainly not more than once, as we do not want them thinking about their next election when making rulings, and probably not for a short term, as there is value in having one branch, one check, that doesn’t change with the winds. The Electoral College is a vestige of slavery, and there is no explanation as to why the president should not be elected by popular vote (like every other elected official in the nation) that doesn’t collapse under the slightest weight of critical thinking.[27] Recall rights would be a fine way to keep public officials in line, but should perhaps only apply to some (department and agency heads, sheriffs) but not others (the president, justices). There are many ideas to explore and solutions to craft as we build socialism.

 

Notes

[1] London, “What Socialism Is”

[2] https://www.vox.com/policy-and-politics/2016/11/7/13536198/election-day-americans-vote; http://www.pewresearch.org/fact-tank/2016/03/04/half-of-those-who-arent-learning-about-the-election-feel-their-vote-doesnt-matter/

[3] Hugo, “Letter to the Poor”

[4] https://gsgriffin.com/2017/09/25/a-brief-history-of-american-socialism/

[5] https://gsgriffin.com/2017/09/25/a-brief-history-of-american-socialism/

[6] https://gsgriffin.com/2017/06/30/how-the-founding-fathers-protecting-their-riches-and-power/

[7] https://archive.org/stream/TheCrisisOfDemocracy-TrilateralCommission-1975/crisis_of_democracy_djvu.txt. Indeed, the Trilateral Commission’s 1975 Crisis of Democracy report warned that “some of the problems of governance in the United States today stem from an excess of democracy… Needed, instead, is a greater degree of moderation in democracy.” “Expertise, seniority, experience, and special talents,” the authors feel, should “override the claims of democracy” in many situations, claims that were growing louder during “the surge of the 1960s”; the “arenas where democratic procedures are appropriate are…limited,” so it would be unwise to, for example, have “a university where teaching appointments are subject to approval by students,” and presumably the same for citizen approval of national policy. Further, “apathy and noninvolvement” among some groups has “enabled democracy to function effectively,” as when “marginal social groups, as in the case of the blacks…[become] full participants” there is a “danger of overloading the political system with demands which extend its functions and undermine its authority…” Indeed, “Democracy is more of a threat to itself in the United States than it is in either Europe or Japan where there still exist residual inheritances of traditional and aristocratic values.” In sum, full and actual participation by the people leads to claims and demands, whether civil rights or universal healthcare, that can override the authority of the Establishment, the privileged and powerful. Democracy should therefore be checked.

[8] https://books.google.com/books?id=JHawgM-WnlUC&pg=PA218&lpg=PA218&dq=1856+north+carolina+last+state+to+remove+property+ownership&source=bl&ots=sgfKjGzhet&sig=y8ALKjDhkAr2LNvcO6cACsvzRaQ&hl=en&sa=X&ved=0ahUKEwi9_-rC3KXXAhUBYCYKHTxxBiEQ6AEIUzAI#v=onepage&q=1856%20north%20carolina%20last%20state%20to%20remove%20property%20ownership&f=false; https://gsgriffin.com/2017/06/30/how-the-founding-fathers-protecting-their-riches-and-power/

[9] Orwell, Down and Out in Paris and London

[10] https://www.britannica.com/topic/constitutional-law/Unicameral-and-bicameral-legislatures#ref384652

[11] Arthur Miller, Timebends: A Life, 1987

[12] https://www.govtrack.us/congress/bills/statistics

[13] https://sunlightfoundation.com/2014/01/16/congress-in-2013/#gplus

[14] The process varies by state. See Missouri’s process as an example: https://www.sos.mo.gov/CMSImages/Elections/Petitions/MakeYourVoiceHeard2018Cycle.pdf

[15] https://ballotpedia.org/States_with_initiative_or_referendum

[16] https://www.pbs.org/newshour/politics/ballot-initiatives-passed-marijuana-minimum-wage

[17] H.G. Wells, In the Days of the Comet (1906)

[18] https://en.wikipedia.org/wiki/Initiative

[19] http://www.ncsl.org/research/elections-and-campaigns/initiative-referendum-and-recall-overview.aspx

[20] https://www.ancient.eu/Athenian_Democracy/

[21] Wright, Envisioning Real Utopias, 155-160

[22] https://www.weforum.org/agenda/2017/07/switzerland-direct-democracy-explained/

[23] http://nationalinterest.org/feature/switzerland-the-ultimate-democracy-11219?page=2; https://object.cato.org/sites/cato.org/files/human-freedom-index-files/2017-human-freedom-index-2.pdf

[24] https://en.wikipedia.org/wiki/Initiative

[25] https://en.wikipedia.org/wiki/Referendums_by_country#United_States

[26] https://fair.org/media-beat-column/the-twain-that-most-americans-never-meet/

[27] https://gsgriffin.com/2016/12/09/the-electoral-college-how-racist-white-slave-owners-made-your-vote-worthless/; https://gsgriffin.com/2016/12/09/ending-the-electoral-college-wont-lead-to-city-rule-or-dictatorship/

Guaranteed Income vs. Guaranteed Work

Living in a socialist society would mean awakening each workday and heading to your worker cooperative, while regularly visiting your voting place to help decide local and national policies. But it is more than that—and has to be. The State has a few important services to provide if the socialist dream of prosperity and dignity for all people is to be achieved.

What if, for instance, you cannot find a job? Just because all workplaces are democratic and share profits does not mean there will always be enough jobs when and where you need one. There is no room in a socialist nation for unemployment, poverty, homelessness, and so on, and thus some mechanism is needed to guarantee that we only see these horrors in history books. Every person, regardless of who they are or what work they do, should make enough to have a comfortable life—which requires a high minimum wage (whether required by law or inherent in worker ownership) and guaranteed access to an income. There are two paths forward to eradicating these horrors, stated succinctly by Dr. King: “We must create full employment, or we must create incomes.”[1] Guaranteed work or a guaranteed income. Either would be adequate, but there are positives and negatives of each to weigh.

Let’s first consider a guaranteed income, or universal basic income (UBI). All UBI entails is using tax revenue to send a regular check to each citizen, a simple redistribution of wealth to eradicate poverty and provide security during times of unemployment or underemployment. Its simplicity is a major advantage over guaranteed work.

UBI has been around for a while in various forms. Alaska has given $1,000-$2,000 a year to every resident without condition since 1982.[2] Hawaii may follow suit soon.[3] The Eastern Band of the Cherokee Nation launched its own UBI in 1996, and today gives $10,000 a year to each of its members, which has helped reduce behavioral problems and crime.[4] Iran from 2010 to 2016 had the world’s first national UBI, giving each family the equivalent of $16,300 a year.[5] For one year, 2011, Kuwait gave $3,500 to each citizen.[6] In 2017, Macau, a region of China, began giving over $1,100 a year to each permanent resident.[7]

Trials in some of India’s villages that began in 2011 show huge success in improving children’s education, access to food and healthcare, and the total number of new business startups.[8] Other past small-scale experiments were conducted in the U.S., Canada, Brazil, Namibia, and elsewhere. Models range from everyone getting the same amount to poorer recipients getting more and richer ones getting less (which even some conservatives support in the form of the Earned Income Tax Credit or even a negative income tax[9]). Studies indicate that when people have this financial security they spend more time taking care of family, more time focusing on education, and are able to win higher raises at work because they have a more serious option to leave, leverage they did not have before.[10] Contrary to myth, giving poor people cash tends either to have no impact on or to reduce alcohol and tobacco consumption, likely because paying for healthcare, education, and so forth is suddenly an option and people want to direct their resources there.[11] In 2017, experiments with UBI launched or were preparing to launch in various places in Finland, Canada, Kenya, Uganda, the Netherlands, Scotland, Spain, and the U.S.[12]

“A guaranteed annual income could be done for about twenty billion dollars a year,” Dr. King estimated in 1967. “If our nation can spend thirty-five billion dollars a year to fight an unjust, evil war in Vietnam, and twenty billion dollars to put a man on the moon, it can spend billions of dollars to put God’s children on their own two feet right here on earth.”[13] The question of priorities in spending is as relevant as ever. The cost of American UBI would depend on similar factors: how much would be guaranteed, if everyone would receive it (if the rich do not then it’s not technically UBI, but no matter), and so on. $10,000 a year for all 240 million U.S. adults is $2.4 trillion, $15,000 a year for the poorest 50 million people is $750 billion, etc. Of course, the net cost would be lower, as giving tens of millions or hundreds of millions of people greater purchasing power would put the economy into overdrive—that money would be spent, enriching co-ops and thus increasing State tax revenues (this is also why economic research overwhelmingly shows higher minimum wages do not lead to higher unemployment or prices; extra money is spent at businesses, boosting their profits, balancing the system out[14]). “People must be made consumers by one method or the other,” King said when discussing guaranteed income or work.[15] One study estimated giving each American adult $1,000 a month would grow the economy 12-13% over eight years, or by $2.5 trillion, if employment remained steady.[16] It is important to keep the cyclical nature of this system in mind while considering costs. UBI is expensive, but it also increases tax revenue.
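The gross-cost figures above are straightforward multiplication. A minimal sketch, mine rather than the author's, reproducing them; the recipient counts and dollar amounts come from the text, and the function name is illustrative:

```python
# Reproducing the chapter's back-of-the-envelope gross UBI costs.
def gross_ubi_cost(recipients: int, annual_amount: int) -> int:
    """Gross annual cost, before the offsetting tax revenue described in the text."""
    return recipients * annual_amount

universal = gross_ubi_cost(240_000_000, 10_000)  # $10,000 to all 240M adults
targeted = gross_ubi_cost(50_000_000, 15_000)    # $15,000 to the poorest 50M

print(universal)  # 2400000000000, i.e. $2.4 trillion
print(targeted)   # 750000000000, i.e. $750 billion
```

Note these are gross figures only; the net cost, as the text argues, would be lower once increased spending cycles back as tax revenue.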

Now, major concern exists that UBI will cause people to stop working, hurting the economy and leaving the worker-owners stuck supporting the easy lifestyle of the lazy. As we have seen, at some point in the human future automation will essentially make labor a thing of the past, highlighting the need for both collective ownership of the machines and State-provided incomes. So it seems obvious that at some point we will have to give up our agitation over people who do not work (rather, poor or middle-income people who do not work; critics seem less concerned about the wealthy types who enjoy work-free lives). We won’t be able to absurdly base people’s value on how many hours they work or what sort of work they do. Everyone will spend their days as they see fit, some choosing to design skyscrapers (even though machines could do it for them) because they enjoy it, others doing nothing all day because they enjoy that more. But until machines can serve our every need, the point is a valid one, as some people will indeed prefer not to have a job, while supported by the labor of others. (On the positive side, there would be decreased competition for jobs for those seeking them.) This wouldn’t bother all worker-owners, but it would be reality. In five experiments on guaranteed income done in the U.S. and Canada, the decrease in the labor participation rate ranged from zero to 30%.[17] However, most studies show no effect or only a small decline.[18] Donald Rumsfeld and Dick Cheney ran UBI experiments in a few cities for President Nixon, and found work rates remained steady.[19] A study of Alaska found employment levels weren’t affected.
A study of Iran’s UBI revealed some people worked a bit less, but some actually worked more.[20] India’s basic income grants led to more labor, as did Uganda’s.[21] Namibia saw no negative effects on labor participation.[22] Naturally, the decline depends on how much is received, but it is predictable that UBI will mean some people will choose not to work. An important question follows: with so much to do to rebuild and maintain our society, is UBI yet wholly practical? Will enough citizens volunteer to participate in all the unpleasant tasks that make a society function, such as repaving roads or waste disposal, if a high income is guaranteed? Would necessary tasks remain undone because Americans would want to pursue other things? These nagging questions will spur some to throw out the whole idea, insist the monthly amount must be low enough to force people to get jobs, or propose a higher UBI for people willing to do unpleasant work. All told, UBI would have to be implemented strategically, perhaps beginning at a level that eradicates poverty and slowly increasing as humanity approaches the point where machines can take care of all undesirable duties.

Guaranteed work is a more complex system, but avoids the concerns associated with lower labor participation. In fact, there would be a job for all. “If Government in our present clumsy fashion must go on,” Ralph Waldo Emerson said in 1843, “could it not assume the charge of providing each citizen, on his coming of age, with a pair of acres, to enable him to get his bread honestly?”[23] In a society offering guaranteed work, federal tax revenue could be transferred to municipalities to create salaries for unemployed or underemployed people. City governments would use the funds to launch public work projects to improve their communities (what projects would be a local democratic decision, of course). So if a city has 50,000 people looking for work at the start of the year, it might receive $2 billion, to offer a $40,000 salary to each person. If the U.S. had 8 million unemployed, it would cost $320 billion to employ them—half our modern military budget. Prioritization is easy enough. Dr. King said, “If America does not use her vast resources of wealth to end poverty and make it possible for all of God’s children to have the basic necessities of life, she too will go to hell.”[24] As with UBI, however, broadening purchasing power will reduce the net cost through increased tax revenues.
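The payroll arithmetic above can be sketched the same way, assuming the flat $40,000 example salary from the text (again, before the feedback from increased tax revenue):

```python
def public_works_payroll(job_seekers, salary=40_000):
    """Federal transfer needed to pay a flat salary to every person seeking
    work in a jurisdiction. $40,000 is the example salary from the text;
    the net cost would be lower once broader spending raises tax revenue."""
    return job_seekers * salary

city = public_works_payroll(50_000)         # one city: $2 billion
national = public_works_payroll(8_000_000)  # 8 million unemployed: $320 billion

print(f"${city / 1e9:.0f} billion, ${national / 1e9:.0f} billion")
```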

Workers can be hired to rebuild our crumbling inner cities, install solar panels on homes, plant trees, tutor struggling students, spend time with neglected seniors—literally any task that betters society in some way. Because not all positive tasks require physical labor, the program would be inclusive of many persons with disabilities or even seniors who want to work (though obviously not intended to replace social security or disability insurance). Cities will need more funds than just those for salaries, however, sums dependent on the type of project. Some projects will be relatively cheap, like cleaning trash off the streets, others more expensive, like renovating a school. Extra funds could nevertheless be fixed to a city’s unemployment level. Using their allotted monies, cities could contract with local co-ops to supply equipment and raw materials for necessary ventures. Public workers would also receive help securing employment at a cooperative, where higher incomes, democracy, and ownership can be enjoyed, so that the public sector doesn’t continually grow. Rather than shrink the private sector, however, guaranteed work programs can actually expand it—fewer unemployed persons means more spenders, benefiting businesses and allowing them to expand.[25]

Co-ops could also receive federal funds, allowing them to take on more worker-owners. This needn’t be a permanent relationship. The State could fund a position for a year, giving a co-op time to absorb a new member. Cooperatives would get another worker, and thus greater productivity and more profits, for nothing, in return for guaranteeing the worker a permanent job and ownership after the year ended. Co-ops could further receive government contracts to do certain projects, as businesses do today, with increased employment stipulations. Alternatively, cities could organize unemployed persons into new cooperatives, helping fund the endeavor during the first few years, until it became self-sustaining (whether for-profit or nonprofit). If there was a need for greater production in a certain sector, from agriculture to social work, that need could be met with new co-ops.[26]

There is much precedent for guaranteed work. Generally speaking, employment by the State is something we take for granted. Critics of paying citizens to work often have no qualms over paying citizens to be soldiers or police officers. If one can be called necessary for protection, the other can be called necessary for poverty’s demise. Local governments across the U.S. employ 14.1 million people, over half of them in education, the rest in healthcare, fire and policing, financing and administration, transportation, library services, utilities, environment and recreation—and public works.[27] (States employ another 5 million, and the federal government employs over 2.5 million civilians and over 2 million active and reserve military personnel.[28]) More specifically, during the Great Depression, President Roosevelt’s Works Progress Administration, Civil Works Administration, and Civilian Conservation Corps hired some 15.5 million people to build roads, bridges, schools, hospitals, museums, and zoos; to garden, plant trees, fight fires, reseed land, save wildlife, and sew; to undertake art, music, drama, education, writing, and literacy projects. While not without challenges, public works saved many families from hunger, strengthened the consumer class and thus the economy, and beautified the country.[29] Roosevelt actually included “the right to a useful and remunerative job” in his 1944 Second Bill of Rights.[30] Similar federal initiatives have occurred since, such as the Comprehensive Employment and Training Act of the 1970s, which employed 750,000 people by 1978.[31] (In countless other programs, like the Public Works Administration of the 1930s, the U.S. government indirectly created jobs by paying businesses to tackle huge projects. Construction of the Interstate Highway System in the 1950s and 60s entailed the federal government funding the states, which either expanded their public workforces or contracted with private companies.)
Today, cities like Reno, Albuquerque, Tempe, Fort Worth, Chicago, Denver, Portland, and Los Angeles offer jobs to the homeless to help them out of the social pit. Cities elsewhere in the world do the same.[32]

Governments around the world run programs similar to our New Deal. India is pouring billions into the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA), which gives, or rather tries to give, residents of a few poor, rural states one hundred days of guaranteed work annually.[33] 50 million households, 170 million people, are involved—the largest public works program in world history.[34] Other nations, especially in Europe, have made the government the employer of last resort at various times.[35] So have South Africa and Argentina. Argentina’s Jefes de Hogar program paid the heads of household with children, persons with disabilities, or pregnant women to do community service, construction, and maintenance work. 2 million Argentinians, 5% of the population, were employed at its height.[36] South Africa’s Expanded Public Works Program includes government jobs in infrastructure, tourism, environment, early childhood education, and more.[37] As in the U.S., local, state, and national governments around the world may not offer guaranteed work but do offer public works jobs. These efforts and countless others have dealt serious blows to unemployment and poverty. Wages even rise in the private sector, because it must compete with the public sector for workers. “We must develop a federal program of public works, retraining, and jobs for all,” Dr. King said, “so that none, white or black, will have cause to feel threatened.”[38]

One criticism of guaranteed work is that unemployment dropping too low will herald inflation. It is said if unemployment is eliminated then businesses will have to compete for fewer workers, driving wages up, which will drive up the cost of everything else to compensate, which will lead to higher wage demands, all in an unending upward wage-price spiral. This is not actually as grave a concern as one might imagine. First, the correlation between unemployment and inflation is not terribly strong: sometimes they move in opposite directions, sometimes they move together.[39] Mainstream economists are increasingly acknowledging the relationship is weak or nonexistent. It’s easy to see why fuller employment doesn’t necessarily mean higher prices. Increased profits from more consumers spending more money help firms absorb higher wage costs without raising prices. Again, even drastic increases in the minimum wage create only tiny increases in prices, making the wage increase plainly worth it.[40] To stay competitive there is every incentive for firms to expand production, and thus sales, or take a bite out of profits rather than raise prices on consumers. Many economists have argued persuasively that, contrary to William Phillips, Milton Friedman, and others, full employment can be achieved without inflation.[41]

Second, if upward wage pressure became so great it could not be absorbed, and prices rose, there is reason to predict this would be a brief phase, not an eternal spiral. It is not likely the upward pressure on wages would last. Say the public worker salary was set at $38,000 a year (we’ll say that is also the minimum wage). If you worked for a capitalist firm making $38,000, you would likely be able to convince the capitalist to give you a raise—otherwise you could leave, guaranteed to make the same in the public sector. You win a raise and are then making $40,000. But if you continue pushing over time, the potential loss due to ultimate failure (being let go, replaced by someone cheaper, someone from the public sector wanting to make more) rises—it’s at $2,000 now and will only get bigger.[42] So there is a disincentive that keeps higher wage demands down. The capitalist may get rid of you and you’ll be worse off financially than you were. A guaranteed job gives people more power and leverage, but not so much as to create an inflationary disaster; with limits on the upward pressure of wages come limits on price increases, which tend to be tiny proportions of income increases anyway. At a cooperative, as raises are determined democratically, the majority would have to repeatedly vote both to give raises to all and to raise prices on consumers—this seems just as unlikely as a single capitalist continuously doing this, perhaps more so.
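The disincentive described above can be stated numerically: a worker’s downside risk is the gap between the current wage and the guaranteed floor, and that gap grows with every raise won. A small sketch, using the hypothetical $38,000 floor from the text:

```python
GUARANTEED_FLOOR = 38_000  # the example public-sector salary / minimum wage

def potential_loss(current_wage, floor=GUARANTEED_FLOOR):
    """What a worker risks by pushing for further raises and being let go:
    the amount by which the current wage exceeds the guaranteed fallback."""
    return max(current_wage - floor, 0)

print(potential_loss(38_000))  # nothing to lose at the floor, so bargaining is costless
print(potential_loss(40_000))  # after one raise, further pushing risks $2,000
print(potential_loss(45_000))  # the disincentive grows as the wage rises
```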

Third, more production of goods and services through the public sector, like increased purchasing power, increases supply and thus pulls prices down.[43] Fourth, various effective tactics the State uses to control inflation will still exist under socialism.[44] In practice, at least regarding partial guaranteed employment and public works ventures, skyrocketing inflation is a nonissue. The Reserve Bank of India found that the MGNREGA program did not raise food prices.[45] We know that Argentina’s inflation was extremely high in 2002, when its works program began, but declined and remained relatively low past 2007, when the program ended, until 2013.[46] South Africa’s ongoing program began in 2004; inflation rose above 10% by 2009, during economic crisis, but then fell and remained low through 2018.[47] The four points above also answer concerns about UBI and inflation. Further, studies of Alaska, Kuwait, Lebanon, Mexico, India, and African nations have at least shown that a small UBI does not cause inflation.[48]

Whether UBI, guaranteed work, or a combination of both (guaranteed work followed by UBI, for example, so no one is stuck doing pointless work for a city while co-op members get rich off machines that can do all tasks) is implemented, one of these strategies will be necessary as a safety net for those struggling to find a job. With it we can eradicate need and want forever. “Overcoming poverty is not a task of charity, it is an act of justice,” Nelson Mandela said. “Like Slavery and Apartheid, poverty is not natural. It is man-made and it can be overcome and eradicated by the actions of human beings.”[49] Either system would have other significant effects on society, too, such as replacing many older forms of welfare, freeing people from the fear of quitting a job they do not enjoy, giving people greater ability to strike—a tactic that may not entirely disappear with worker ownership, as some worker-owners may be so opposed to a majority decision they walk out—and more.[50]

 

Notes

[1] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).

[2] https://motherboard.vice.com/en_us/article/jp5wdb/only-state-free-money-alaska

[3] https://www.vox.com/policy-and-politics/2017/6/15/15806870/hawaii-universal-basic-income

[4] http://edition.cnn.com/2015/03/01/opinion/sutter-basic-income/

[5] https://www.weforum.org/agenda/2017/05/iran-introduced-a-basic-income-scheme-and-something-strange-happened

[6] http://basicincome.org/news/2011/05/kuwait-a-temporary-partial-basic-income-for-citizens-only/

[7] http://basicincome.org/news/2017/07/wealth-partaking-scheme-macaus-small-ubi/

[8] https://mondediplo.com/2013/05/04income

[9] Milton Friedman, Free to Choose

[10] https://catalyst-journal.com/vol1/no3/debating-basic-income

[11] https://qz.com/853651/definitive-data-on-what-poor-people-buy-when-theyre-just-given-cash/; http://documents.worldbank.org/curated/en/617631468001808739/pdf/WPS6886.pdf

[12] http://basicincome.org/news/2017/10/overview-of-current-basic-income-related-experiments-october-2017/; http://time.com/money/5114349/universal-basic-income-stockton/

[13] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).

[14] https://gsgriffin.com/2016/12/08/the-last-article-on-the-minimum-wage-you-will-ever-need-to-read/

[15] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).

[16] https://www.weforum.org/agenda/2017/09/a-basic-income-could-boost-the-us-economy-by-2-5-trillion/

[17] https://catalyst-journal.com/vol1/no3/debating-basic-income

[18] https://www.vox.com/policy-and-politics/2017/8/30/16220134/universal-basic-income-roosevelt-institute-economic-growth

[19] https://qz.com/931291/dick-cheney-and-donald-rumsfeld-ran-a-universal-basic-income-experiment-for-president-richard-nixon/

[20] https://www.weforum.org/agenda/2017/05/iran-introduced-a-basic-income-scheme-and-something-strange-happened

[21] http://isa-global-dialogue.net/indias-great-experiment-the-transformative-potential-of-basic-income-grants/; http://www.unicef.in/Uploads/Publications/Resources/pub_doc83.pdf; https://medium.com/basic-income/evidence-and-more-evidence-of-the-effect-on-inflation-of-free-money-a3dcc2a9ea9e

[22] http://bignam.org/Publications/BIG_Assessment_report_08b.pdf

[23] http://books.google.com/books?id=04NPax82MZQC&pg=PA70&lpg=PA70&dq=ralph+waldo+emerson+socialism&source=bl&ots=9Cp_2uKvRI&sig=AfJfiT0oIr3L4XRC2hpxaA93sgs&hl=en&sa=X&ei=sZgkVMbKH4GUyATVlILoCQ&ved=0CDYQ6AEwBA#v=onepage&q=ralph%20waldo%20emerson%20socialism&f=false

[24] http://www.truth-out.org/progressivepicks/item/28568-martin-luther-king-jr-all-labor-has-dignity

[25] https://democracyjournal.org/magazine/44/youre-hired/

[26] Alec Nove, Essential Works of Socialism, 555

[27] https://www.cnsnews.com/news/article/terence-p-jeffrey/21955000-12329000-government-employees-outnumber-manufacturing; https://www.cbpp.org/research/some-basic-facts-on-state-and-local-government-workers

[28] https://www.cnsnews.com/news/article/terence-p-jeffrey/21955000-12329000-government-employees-outnumber-manufacturing; https://www.csmonitor.com/Business/Robert-Reich/2010/0813/America-s-biggest-jobs-program-The-US-military

[29] http://www.history.com/topics/works-progress-administration; http://www.history.com/topics/civilian-conservation-corps; https://www.britannica.com/place/United-States/The-Great-Depression#ref613079

[30] http://www.ushistory.org/documents/economic_bill_of_rights.htm

[31] https://www.vox.com/policy-and-politics/2017/9/6/16036942/job-guarantee-explained

[32] https://gsgriffin.com/2016/12/08/u-s-canadian-city-governments-ending-homelessness-by-offering-jobs/; http://www.newsweek.com/homeless-paid-clean-streets-texas-786311; https://www.azcentral.com/story/news/local/tempe/2017/10/16/tempe-hire-homeless-temporary-jobs-fight-mill-avenue/754199001/

[33] http://onlinelibrary.wiley.com/doi/10.1111/dpr.12220/full

[34] https://scroll.in/article/807379/why-2015-16-was-the-worst-year-ever-for-mgnrega; https://www.huffingtonpost.com/atul-dev/the-need-for-guaranteed-e_b_6295050.html

[35] https://www.bls.gov/opub/mlr/2000/10/art4full.pdf

[36] http://www.cfeps.org/pubs/wp-pdf/WP41-Tcherneva-Wray-all.pdf; http://www.levyinstitute.org/pubs/wp_534.pdf

[37] http://www.epwp.gov.za/; https://www.westerncape.gov.za/general-publication/expanded-public-works-programme-epwp-0; http://www.publicworks.gov.za/PDFs/Speeches/Minister/2016/Minister_EPWP_2016_Summit_closing_remarks_17112016.pdf

[38] http://neweconomicperspectives.org/2013/08/honoring-dr-kings-call-for-a-job-guarantee-program.html

[39] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.526.7530&rep=rep1&type=pdf; https://wwz.unibas.ch/fileadmin/wwz/redaktion/makrooekonomie/intermediate_macro/reader/7/02_ACFC7.pdf; https://www.researchgate.net/publication/46529582_The_Moral_Imperative_and_Social_Rationality_of_Government-Guaranteed_Employment_and_Reskilling

[40] https://gsgriffin.com/2016/12/08/the-last-article-on-the-minimum-wage-you-will-ever-need-to-read/

[41] http://www.redalyc.org/pdf/601/60124701.pdf

[42] https://poseidon01.ssrn.com/delivery.php?ID=718089001090029001080009094097103014067041034067091025005110119013116028124085065079106087021005066041048019022117117074085015103072012028116125001110102097111024001098096104064120025064&EXT=pdf; https://www.econstor.eu/bitstream/10419/31634/1/571704611.pdf

[43] https://poseidon01.ssrn.com/delivery.php?ID=718089001090029001080009094097103014067041034067091025005110119013116028124085065079106087021005066041048019022117117074085015103072012028116125001110102097111024001098096104064120025064&EXT=pdf

[44] https://www.economicshelp.org/blog/2269/economics/ways-to-reduce-inflation/

[45] https://economictimes.indiatimes.com/news/economy/indicators/mgnrega-has-not-contributed-to-food-inflation-report/articleshow/44903564.cms

[46] https://data.worldbank.org/indicator/FP.CPI.TOTL.ZG?end=2013&locations=AR&start=2000

[47] https://tradingeconomics.com/south-africa/inflation-cpi; http://www.epwp.gov.za/

[48] https://medium.com/basic-income/evidence-and-more-evidence-of-the-effect-on-inflation-of-free-money-a3dcc2a9ea9e; http://ubi.earth/inflation/

[49] http://www.mandela.gov.za/mandela_speeches/2005/050203_poverty.htm

[50] https://catalyst-journal.com/vol1/no3/debating-basic-income

For the Many, Not the Few: A Closer Look at Worker Cooperatives

After pointing out the authoritarian hierarchy of the capitalist workplace—the capitalist chief at the top wielding ultimate decision-making power and owning the wealth created by the workers—John Stuart Mill envisioned instead the “association of laborers themselves on terms of equality, collectively owning the capital with which they carry on their operations, and working under managers elected and removable by themselves.”[1]

Socialistic worker cooperatives are the humane alternative to capitalist businesses. In a worker cooperative, you become a company owner soon after being hired. All workers share equal ownership of the firm, from custodian to spokesperson. This translates to equality in power (all decisions are made democratically) and in wealth (company shares and incomes are the same for everyone). Just like that, the greedy few’s exploitation of labor and authoritarian power are consigned to the dustbin of history, replaced by cooperation, equity, and democracy. Workers control their own destinies, deciding together how they should use the profits created by their collective labor, be it improving production through technology, taking home bigger incomes, opening a new facility, hiring a new worker, lowering the price of a service, producing something new, or handling any other conceivable matter of business.

With the disappearance of hierarchy and exploitation comes the elimination or great alleviation of other crimes of capitalism we’ve explored. When worker-owners invest in new technologies that increase productivity and require less human labor, they won’t fire themselves—they can make more money and/or work fewer hours, bettering their standard of living and spending more time with family or doing things they enjoy. They will not outsource their own jobs to Bangladesh, either. Their greater wealth will reduce poverty, their greater purchasing power easing the throes of recession and depression (as would less competition, were cooperatives to federate). If co-ops were adopted on a national or global scale, the stock market might disappear, or at least substantially change, as the workers might want to keep all the shares of their company. Transparency and democracy should make a firm less likely to commit the kinds of profit-driven abuses against people, planet, and peace, because there are more players influencing decisions; the wider the field, the less likely everyone would feel comfortable with, say, poisoning our biosphere to make a buck. This is not to say that laws prohibiting the production of vehicles that run on fossil fuels would be unnecessary. They would still be needed. Rather, it is simply to say there would be more room for dissent in a workplace and a greater chance of a more moral or safe alternative being adopted. Socialism is not a cure for all our problems, just many of them.

Some criticisms of worker cooperatives can be easily dismissed with simple philosophical and theoretical arguments. There’s the desire of capitalists and would-be capitalists to have all the power and hoard the wealth. Well, this is about being more ethical than that, having the empathy to support the common good, not selfish ends. As Dr. King said, “True compassion is more than flinging a coin to a beggar; it comes to see that an edifice which produces beggars needs restructuring.”[2] There’s the consternation at the thought of a majority of workers with little to no experience with a task overruling a worker with experience and knowledge of said task. What does the graphic designer know of welding processes and how to best use or improve them? How can we let younger, newer, brasher salespeople make policy for the veteran salesperson? Well, first, it’s important to acknowledge that both fresh blood and odd ideas from outside a field can at times prove beneficial, a spark of innovation and positive change. Second, many worker cooperatives make it a point to train all workers in multiple or all areas of the business, lessening the knowledge gap with education, training, and staff development. Some even rotate jobs! (On-the-job training and shared knowledge is a key factor for success in co-ops where most founders have no business experience.[3]) Third, a cooperative environment encourages workers to listen carefully to those with greater experience, knowing that deference will be reciprocated later. Fourth, most business decisions, if found to be ineffective or harmful, can be reversed before a total collapse of the company, just like in business today. Lastly, even if a shortsighted, unknowledgeable majority ran the cooperative—their cooperative—into the ground because they stubbornly refused to listen to the wisdom of the experts, there is nevertheless something satisfactory about the democratic nature of this failure. 
Under capitalism, the stupidity of a single capitalist can destroy a business, wiping out jobs for everyone. Under socialism, the workers democratically determine their own destiny. It may be a disaster, but it’s your disaster, collectively speaking. But, as we will see, cooperatives are in no way more likely to fold.

Cooperative work is as old as humanity itself, as we have seen. Worker cooperatives in their modern form have existed around the world since the Industrial Revolution began and capitalism took off, that is, before Marx’s writings.

The U.S. has a rich history of cooperative enterprises that continues to this day.[4] No, they are not always perfect. While some exemplify precisely the socialist vision, others could be more egalitarian or democratic (for example, many make use of elected managers or executives with slightly larger salaries, which can be easier with larger companies; others are too slow at granting ownership rights). But they are all a giant step up from capitalist firms. The U.S. has an estimated 300-400 worker cooperatives, everything from the 4th Tap Brewing Co-Op in Texas to Catamount Solar in Vermont, employing 7,000 workers (the average size is 50 people) and earning $400 million in revenue each year. (If you’ve heard it’s more like tens of thousands of cooperatives making billions, such inflated numbers are only possible by including credit unions, “purchasing co-ops,” independent farmers aiding each other through “producer co-ops,” Employee Stock Ownership Plans, and other structures that, while valuable, don’t exactly qualify.) 26% of them used to be capitalist-structured businesses.[5] Converting is a great way to preserve a business and protect people’s livelihoods; when small business capitalists retire, the vast majority of the time they do not find a buyer nor are able to pass ownership on to family, so the enterprise simply ends and workers are thrown out.[6] Cooperatives represent all economic sectors, and have annual profit margins comparable to top-down businesses—the idea that they are less efficient is a myth (not that efficiency has to be more important than democracy and equality anyway). 84% of the workers are owners at a given time.[7] Many firms are members of the U.S. Federation of Worker Cooperatives, a growing organization. Because people are put before profits, most cooperatives have a particular focus on community improvement and development, for example the Evergreen Cooperatives in Ohio.
One study found food co-ops reinvest more money from each dollar in the local economy.[8]

America’s largest co-op, the Cooperative Home Care Associates in New York, has grown to 2,300 employees, about half of whom are owners (to become an owner one pays $1,000 in installments). It is 90% owned by minority women. With $64 million in profits in 2013, the CHCA provides wages of $16 an hour (twice the market rate), a highest- to lowest-paid worker ratio of 11:1, flexible hours, and good insurance. Its governing board is elected; profits are shared. The company has a turnover rate that is a quarter of the industry standard. Some workers left behind minimum wage jobs and are now making $25 an hour. People say they stay because the co-op lifted them out of poverty and as owners they have decision-making power.[9] Ralph Waldo Emerson wrote in The Conduct of Life (1860), “The socialism of our day has done good service in setting men to thinking how certain civilizing benefits, now only enjoyed by the opulent, can be enjoyed by all.”[10] People who join the Women’s Action to Gain Economic Security (WAGES) co-ops in California see their incomes skyrocket 70-80%.[11]

As one might expect, workers are more invested in a company when they are also owners, which translates into better business outcomes. Though they are not without challenges, a review of the extant research reveals co-ops have the same or greater productivity and profitability than conventional businesses, and tend to last longer; workers are more motivated, satisfied, and enjoy greater benefits and pay (with no evidence of increased shirking), information flow improves, and resignations and layoffs decline.[12] They are more resilient during economic crises.[13] Many studies come from Europe, where cooperatives are more widespread and more data has been collected. In Canada, worker cooperatives last on average four times longer than traditional businesses.[14] Their survival rates are 20-30% better.[15] Research on France’s cooperatives revealed that worker-owned enterprises were more productive and efficient, and over a four-year period cooperative startups actually outnumbered capitalistic startups.[16] French capitalist-turned-cooperative businesses have better survival rates than capitalist businesses by significant margins, 10-30%.[17] Analyzing cooperatives across the U.K., Canada, Israel, France, and Uruguay, one study found that cooperatives had similar survival rates to traditional businesses over the long term, but better chances of making it through the crucial early years. 
Italy and Germany experience the same.[18] Italian co-ops are 40% more likely to survive their first three years; Canadian co-ops about 30% more likely in the first five years and 25% more likely in the first ten years; in the U.K., twice as many cooperatives survive the first five years as traditional firms.[19] In Italy’s Emilia Romagna region, an economic powerhouse of that nation and Europe, two-thirds of residents belong to worker cooperatives.[20] In Spain, a study of a retail chain that has both top-down stores and cooperative ones revealed the latter have much stronger sales growth because worker-owners have decision-making power and a financial stake.[21] In the U.S., much research has been done on businesses with Employee Stock Ownership Plans, which are called “employee-owned” because employees are given stock, but most are not democratic nor totally owned by the workers (Publix and Hy-Vee are examples). ESOPs are only one-third as likely to fail as publicly traded businesses, suffer less employee turnover, and are more productive.[22] One rare study on American plywood worker cooperatives found they were 6-14% more efficient in terms of output than conventional mills.[23] When the economy declined, conventional mills attacked worker hours and employment, whereas the worker-owners agreed to lower their pay to protect hours and jobs.[24] Given the benefits of worker cooperatives, places like New York City, California, and Cleveland are investing in their development, recognizing their ability to lift people out of poverty and thus strengthen a consumer economy, plus offer an opportunity to focus on alleviating systemic barriers to work and wealth that minorities, former felons, and others face in the United States.[25] This is no small matter. The egalitarian structure and spirit of solidarity inherent in co-ops can help win equality for the oppressed and disadvantaged.
While perfect by no means, women tend to have more equitable pay and access to more prestigious positions in co-ops.[26] 60% of worker-owners in new American co-ops in 2012 and 2013 were people of color.[27] 90% of worker-owners at one of Spain’s co-ops are people with disabilities.[28] Italian cooperatives are more likely to hire folks who have been unemployed for long periods, often a major barrier to work.[29]

Spain has one of the strongest cooperative enterprises, no surprise to those who know Spain’s Marxist history.[30] (In the 1930s, George Orwell marveled at Barcelona, writing that his visit “was the first time that I had ever been in a town where the working class was in the saddle. Practically every building of any size had been seized by the workers… Every shop and cafe had been collectivized… Waiters and shop-walkers looked you in the face and treated you as an equal.”[31]) Mondragon Cooperative Corporation is a federation of over one hundred socialistic workplaces around the globe and in many economic sectors, from retail to agriculture. It is one of Spain’s largest corporations and the largest cooperative experiment in the world, with over $10 billion in annual revenue and 74,000 workers. Those who are worker-owners have shares of the business and the ability to run for a spot in the General Assembly, the federation’s democratic body of power, which elects a Governing Council. However, each cooperative is semi-autonomous, having its own, smaller democratic body. The manager-worker pay ratio is capped at 6:1.[32] In rough economic times, worker-owners decide democratically how much their pay should be reduced or how many fewer hours they should work, and managers take the biggest hits. This stabilizes an entity during recession, avoiding layoffs. So does job rotation and retraining. Further, Mondragon has the ability, as a federation, to transfer workers or wealth from successful cooperatives to ones that are struggling.[33] Due to these flexibilities, Mondragon cooperatives going out of business is nearly unheard of. When it does happen, the federation finds work for the unlucky workers at other member co-ops.[34] During the Great Recession, Mondragon’s number of workers held steady, and the Spanish county where it is headquartered was one of the least troubled.[35] The enterprise, however, has major faults. 
It actually owns more subsidiary companies than cooperatives—capitalistic, exploitive businesses in poor countries where workers are not owners. Also egregious: less than half of all Mondragon employees are actually owners.[36] Nevertheless, the business is a step in the right direction, indicating socialistic workplaces can function large-scale. (In fact, on average co-ops tend to have more employees that top-down firms.[37]) Mondragon is a member of the International Co-operative Alliance, the leading global association for the movement.

There are 11.1 million worker-owners worldwide.[38] When we include folk who work for cooperatives but are not owners, our total rises to 27 million.

 

Notes

[1] Mill, Principles of Political Economy

[2] King, “Beyond Vietnam,” April 4, 1967, New York City Riverside Church

[3] http://www.aciamericas.coop/IMG/pdf/CWCF_Canadian_SSHRC_Paper_16-6-2010_fnl.pdf

[4] Curl, For All the People

[5] http://institute.coop/what-worker-cooperative

[6] http://www.sfchronicle.com/business/article/Employee-ownership-may-help-businesses-stay-open-10941974.php

[7] http://institute.coop/sites/default/files/resources/State_of_the_sector_0.pdf

[8] https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[9] http://www.commondreams.org/views/2014/08/15/how-americas-largest-worker-owned-co-op-lifts-people-out-poverty

[10] Emerson, The Conduct of Life

[11] https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[12] https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1098&context=econ_las_workingpapers; https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[13] http://storre.stir.ac.uk/handle/1893/3255#.Wm-XeJM-fsE; https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[14] http://inthesetimes.com/article/17061/a_co_op_state_of_mind

[15] http://www.co-oplaw.org/special-topics/worker-cooperatives-performance-and-success-factors/

[16] https://www.thenation.com/article/worker-cooperatives-are-more-productive-than-normal-companies/

[17] http://www.co-oplaw.org/special-topics/worker-cooperatives-performance-and-success-factors/

[18] http://www.co-oplaw.org/special-topics/worker-cooperatives-performance-and-success-factors/

[19] https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[20] https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[21] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1849466

[22] http://www.co-oplaw.org/special-topics/worker-cooperatives-performance-and-success-factors/

[23] https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1098&context=econ_las_workingpapers

[24] https://books.google.com/books?hl=en&lr=&id=5qcPK0MuCXQC&oi=fnd&pg=PA462&dq=%22worker+cooperatives%22&ots=rVtg5rB4Fs&sig=6OXh-j6-MTTcrYJgbIgcvvrgma4#v=onepage&q=%22worker%20cooperatives%22&f=false

[25] https://www.thenation.com/article/worker-cooperatives-are-more-productive-than-normal-companies/; https://apolitical.co/solution_article/clevelands-cooperatives-giving-ex-offenders-fresh-start/; https://www.thenation.com/article/meet-the-radical-workers-cooperative-growing-in-the-heart-of-the-deep-south/

[26] http://www.geo.coop/node/615; https://www.thenews.coop/119294/sector/worker-coops/co-operatives-ensuring-no-one-left-behind/

[27] https://tcf.org/content/report/reducing-economic-inequality-democratic-worker-ownership/

[28] https://www.thenews.coop/119294/sector/worker-coops/co-operatives-ensuring-no-one-left-behind/

[29] https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1098&context=econ_las_workingpapers

[30] https://gsgriffin.com/2017/07/03/socialismo-the-marxist-victories-in-spain/

[31] Orwell, “Homage to Catalonia”

[32] http://www.yesmagazine.org/new-economy/world-s-largest-federation-of-worker-owned-co-operatives-mondragon-josu-ugarte

[33] https://www.theguardian.com/world/2013/mar/07/mondragon-spains-giant-cooperative; Putting Democracy to Work, by Frank Adams and Gary Hansen, p. 145

[34] http://www.yesmagazine.org/new-economy/world-s-largest-federation-of-worker-owned-co-operatives-mondragon-josu-ugarte

[35] http://www.northeastern.edu/econpress/2016/11/16/mondragon-economic-democracy-in-the-startup-age/; https://www.theguardian.com/world/2013/mar/07/mondragon-spains-giant-cooperative

[36] http://www.northeastern.edu/econpress/2016/11/16/mondragon-economic-democracy-in-the-startup-age/; https://www.theguardian.com/world/2013/mar/07/mondragon-spains-giant-cooperative

[37] https://poseidon01.ssrn.com/delivery.php?ID=630093101127024071122112080068014068031053050050057049071105017072103089077103089094028058042052005023061081018000001123015079014012043035035115110105126071030118028095082080068011004095110081113065023069089126092123117096125095075072112084120095119024&EXT=pdf

[38] http://www.cicopa.coop/Second-Global-Report-on.html (p. 25, Table 1). If we add in people who are self-employed but members of “producer cooperatives” that support them (farmers and fishermen, for instance, especially in Asia), 280 million people are involved in cooperative employment. Bringing these workers into the analysis would also swell the U.S. numbers mentioned earlier.

Yes, Evolution Has Been Proven

Evolution is a simple idea: that over time, lifeforms change. In a small timespan, changes are subtle yet noticeable; in a massive one, changes are shockingly dramatic — descendants look nothing like their ancestors, becoming what we call new species.

Changes occur when genes mutate during the imperfect reproduction process, and are passed on if the mutation helps an individual creature escape predators, find food or shelter, or attract a mate, allowing it to more successfully reproduce than individuals without its new trait (natural selection). Some mutations, of course, hurt chances of survival or have no impact at all.

Naturalist and geologist Charles Darwin provided evidence for this idea in his 1859 book On the Origin of Species and other works, and over the century and a half since, research in multiple fields has consistently confirmed Darwin’s idea, irreparably damaging religious tales of the divine creation of life just as it exists today.

  

The Myths of Man

While many people of faith have adopted scientific discoveries such as the age of the earth and evolution into their belief systems, many have not. Hardline Christian creationists still believe humans and all other life originated 6,000 years ago, with a “Great Flood” essentially restarting creation 4,000 years ago, with thousands of “kinds” of land animals (tens of thousands of species) rescued on Noah’s ark. 

The logical conclusion of the story is utterly lost on believers. There are an estimated 6.5 million species that live on land today, perhaps 8-16 million total species on Earth (that’s a conservative estimate; it could be 100 million, as most of our oceans remain unexplored). People have cataloged 2 million species, discovering tens of thousands more each year. Put bluntly, believing that in four millennia tens of thousands of species could become millions of species requires belief in evolution at a pace that would make Darwin laugh in your face.

To evolve the diversity of life we see today, much time was needed. More than 4,000 years, a planet older than 6,000 years. We know the Earth is 4.5 billion years old because radioactive isotopes in terrestrial rocks (and crashed meteors) decay at consistent rates, allowing us to count backward. Fossil distribution, modern flora and fauna distribution, and the shape of the continents first indicated the continents were once one, and satellites proved the continents are indeed moving apart from each other at two to four inches per year, again allowing us to count backward (Why Evolution is True, Jerry Coyne). When we do so, we do not stop counting in the thousands.

Naturally, criticisms of myths can be waved away with more magic, which is why it’s mostly futile to tear them apart, something I learned after wasting time doing so during my early writing days. Perhaps God decided to make new species after the flood. Perhaps he in fact made millions of species magically fit on a boat roughly the size of a football field, like a bag from Harry Potter. It’s the same way he got pairs of creatures on whole other continents to, and later from, the Middle East; how one family, through incest, rapidly evolved into multiple human races immediately after the flood (or did he make new human beings, too?); how a worldwide flood and the total destruction of every human civilization left behind no evidence. The power of a deity — and our imagination — can take care of such challenges to dogma. But it cannot eviscerate the evidence for evolution. Science is the true arrow in mythology’s heel.

Still, notions of intelligent design bring up many curious questions, such as why a deity would so poorly design, in identical ways, the insides of so many species (see below), why said deity would set up a world in which 99% of his creative designs would go extinct, and so on.

It seems high time we set aside ancient texts written by primitive Middle Eastern tribes and listened to what modern science tells us. And that’s coming from a former creationist.

 

It Wasn’t Just Darwin

74596-120-F4F7C75F.jpg

Charles Darwin, 1809-1882. via Britannica

Creationists attempt to discredit evolution by attacking the reliability and character of Darwin, but forget he was just one man. Darwin spent decades gathering the best evidence for evolution of his day, showed for the first time its explanatory powers across disciplines (from geography to embryology), and brought his findings to the masses with his accessible books. But there were many who came before him that deepened our and his understanding of where diverse life came from and how the biblical Earth wasn’t quite so young. For example:

  • In the sixth century B.C., the Greek philosopher Anaximander studied fossils and suggested life began with fishlike creatures in the oceans.
  • James Hutton argued in the 1700s that the age of the earth could be calculated based on an understanding of geologic processes like erosion and the laying down of sediment layers.
  • In 1809, Jean-Baptiste Lamarck theorized that physical changes to an individual acquired during its life could be passed to offspring (a blacksmith builds strength in his arms…could that lead to stronger descendants?).
  • By the 1830s, Charles Lyell was putting Hutton’s ideas to work, measuring the rate at which sediments were laid, and counting backward to estimate Earth’s age.
  • Erasmus Darwin, Charles’ grandfather, suggested “all warm-blooded animals have arisen from one living filament,” with “the power of acquiring new parts…delivering down those improvements by generation.”
  • Alfred Wallace theorized natural selection independently of and at the same time as Charles Darwin!

In other words, if it wasn’t Darwin it would have been Wallace. If not Wallace then someone else. Like gravity or the heliocentric solar system, the scientific truth of evolution could not remain hidden forever.

Creationists also seize upon Darwin’s unanswered questions and use them to argue he “disproved” or “doubted” the validity of his findings. For example, Darwin, in his chapter on “Difficulties of the Theory” in The Origin of Species, said the idea that a complex eye “could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree.”

Emphasis on seems. He went on to say:

When it was first said that the sun stood still and the world turned round, the common sense of mankind declared the doctrine false… Reason tells me, that if numerous gradations from an imperfect and simple eye to one perfect and complex, each grade being useful to its possessor, can be shown to exist, as is certainly the case; if further, the eye ever slightly varies, and the variations be inherited, as is likewise certainly the case; and if such variations should ever be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, cannot be considered real.

In other words, the evolution of eye is possible and there is no real difficulty in supposing this given other evidence he had found. Darwin knew he was not the end of the line. He made predictions concerning future discoveries, and supposed that other scientists would one day show how eyes could develop from non-existence to simple lenses to complex eyes, as they indeed have. It began with cells that are more sensitive to light than others. Biologists believe, in the words of Michael Shermer (God Is Not Great, Hitchens), that there was

Initially a simple eyespot with a handful of light-sensitive cells that provided information to the organism about an important source of the light; it developed into a recessed eyespot, where a small surface indentation filled with light-sensitive cells provided additional data on the direction of light; then into a deep recession eyespot, where additional cells at greater depth provide more accurate information about the environment; then into a pinhole camera eye that is able to focus an image on the back of a deeply-recessed layer of light sensitive cells; then into a pinhole lens eye that is able to focus the image; then into a complex eye found in such modern mammals as humans.

Earth has creatures with no eyes, creatures with “a handful of light-sensitive cells,” and all the other stages of eye development, right up to our complex camera eye. Given this, there is no reason to believe the evolution of the eye is impossible. As creatures evolved from lower lifeforms, there were slight variations in their ability to detect light, which proved useful for many, which helped creatures survive, which passed on the variations to offspring. This is how life can go from simple to complex over the generations. See The Evidence for Evolution, Alan Rogers, pp. 37-49, for a detailed study.

While the natural process has yet to be observed by humans — it takes eons, after all — we are able to create computer models that mimic beneficial mutations. Dan-Eric Nilsson and Susanne Pelger at Lund University in Sweden, for instance, made a simulation wherein a group of light-sensitive cells on top of a retina experienced random mutations in the tissues around them. The computer was programmed to keep mutations that improved vision in any way, no matter how small. So when the tissue pulled backward, for example, forming a “cup” for the primitive eye, this was preserved because it was an improvement. After 1,829 mutations (400,000 years), the simulation had a complex camera eye (Coyne). Computer models are a great tool for showing how evolution works. Simulations aren’t programed to build something complex, only to follow the simple laws of natural selection. Check out Climbing Mount Improbable by Richard Dawkins for more.

 

Strange Coincidences

TetrapodLimb.jpg

Homologous limbs. via University of California Museum of Paleontology

While the study of homologous structures is fascinating, most won’t impress creationists. Humans, bats, birds, whales, and other creatures all have a humerus, radius, ulna, carpals, metacarpals, and phalanges in their forelimbs, with simple variations in size and sometimes number, suggesting they are related via a common ancestor yet have changed, evolved. But the creationist can simply say a sensible deity created them with similar structures. 

Yet there are some coincidences and oddities that no serious person would call intelligent design, and in fact scream common ancestry.

Modern whales have tiny leg bones inside their bodies that are detached from the rest of the skeleton. We humans have three muscles under our scalps that allow some of use to wiggle our ears, which do nothing for our hearing but are the precise same muscles that allow other animals to turn their ears toward sounds. Goosebumps, now worthless, are vestiges of an era when our ancestors had fur. Our sinus cavities, behind our cheeks, have a drainage hole on top — our ancestors walked on all fours, and thus the location made sense, allowing better drainage. Cave salamanders have eyes but are totally blind. Koalas, which spend most of their time in trees, have pouches for their young that open up-side-down — their ancestors were diggers on the ground, so this was useful to protect young from dirt and rock thrown about, but now threatens to allow koala cubs to plunge from trees (The Greatest Show on Earth, Richard Dawkins).

Even more astonishing, within the neck of Earth’s mammals, the vagus/recurrent laryngeal nerve, instead of simply going the short distance from your brain to your voicebox, extends from the brain, goes down into your chest, twists around your aortic arch by the heart, and then travels back up to the voicebox! It’s three times longer than necessary.

Incredibly, this same lengthy, senseless detour is found in other mammals, even the towering giraffe, in which it is fifteen to twenty feet longer than needed (see evolutionist Richard Dawkins cut one open and look here). In fish, which evolved earlier than us, the nerve connects the brain to the gills in a simple, straightforward manner (Coyne). This indicates our common ancestors with fish did not have this issue, but our common ancestors with other, later species did. As our mammalian ancestors evolved, the nerve was forced to grow around other developing, growing, evolving structures.

Human males have another interesting detour. As explained by Dawkins, the vans deferens, the tube that carries sperm from testes to penis, is also longer than necessary — and indeed caught on something. The vans deferens leaves the testes, travels up above the bladder and loops around the ureter like a hangar on a department store rack. It then finally finds its target, the seminal vesicle, which mixes secretions with the sperm. Then the prostate adds more secretions, finalizing the product (semen), which ejaculates via the urethra. The vans deferens could go straight to the seminal vesicle (under instead of around the bladder and ureter), but it doesn’t.

This same trait is found in other male mammals, like pigs. Creatures like fish again do not have this mess. Our ancestors had testes within the body, like many modern species, and as they descended toward the scrotum, toward the skin for cooler temperatures, the wiring got caught on the ureter. Perhaps one could see an intelligent (?) designer having to jam some things together to make them work — a detour for the van deferens here, another for the recurrent laryngeal nerve there — in one species. But in mammals across the board? How does that make more sense than all this being the imperfect byproduct of mindless evolution over time?    

Recurrent_laryngeal_nerve-for_web

via Laryngopedia

1461403451_the-ductus-deferens.png

via Anatomy-Medicine

 

 

 

 

 

 

And it doesn’t end there. Vertebrates (species that have a backbone) like us happen to have eyes with retinas installed backward. Rogers writes:

The light-sensitive portion of the retina faces away from the light… The nerves, arteries, and blood vessels that serve each photocell are attached at the front rather than the back. They run across the surface of the retina, obscuring the view. To provide a route for this wiring, nature has poked a hole in the retina, which causes a substantial blind spot in each eye. You don’t notice these because your brain patches the image up, but that fix is only cosmetic. You still can’t see any object in the blind spot, even if it is an incoming rock.

But cephalopods (squid, octopi, and other advanced invertebrates) have a more sensible set-up, with wiring in the back (Rogers). Guess what kind of creature appeared on this planet first? Yes, the invertebrates. These coincidences and bad engineering suggest that as life evolved to be more complex there were greater opportunities for messy tangles of innards.

The best creationists can do is declare there are good reasons for these developments, that evolutionists “fail to demonstrate how this detour…disadvantages the male reproductive system” for example, which is completely beside the point. There were indeed biological reasons behind the development of these systems, which served as an advantage, not a hindrance (breaking the vans deferens or recurrent laryngeal nerve to let other organs grow and evolve would not be good for survival). The point is that if some species share this trait, it hints at a common ancestor.

So does embryology, the study of development in the womb. The field of genetics, which we explore further in the next section, helped us discover dead genes or pseudo genes in lifeforms. These are genes that are usually inactive but carry traits that, if developed, would be viewed as abnormal. In light of evolution it makes sense that we still have them. And sometimes dead genes wake up.

Humans have just under 30,000 genes, with over 2,000 of them pseudo genes. We have dead genes for growing tails, for instance. We all have a coccyx, four fused vertebrae that make up the end of our spine — four vertebrae that are larger and unfused in primates, thus being the base of their tails (Coyne). Not only are some humans born with an extensor coccygis, the muscle that moves the tail in primates but is worthless in us due to our vertebrae being fused, some people are born with a tail anywhere from one inch long to one foot! It has to be surgically removed.

Balaji

Arshid Ali Kahn, born in India in 2001, was worshiped as a reincarnation of the Hindu monkey god Hunaman. He had his tail removed in 2015. via Mirror

In fact, all human embryos begin with a fishlike tail, which is reabsorbed into the body around week seven. We develop a worthless yolk sac that is discarded by month two, a vestige of reptilian ancestors that laid eggs containing a fetus nourished with yolk. We develop three kidneys, the first resembling that of fish, the second resembling that of reptiles; these are also discarded, leaving us with our third, mammalian version. From month six to eight, we are totally covered in a coat of hair (lanugo) — primates develop their hair at the same stage, only they keep it. These marvels exist in other life, too. Horse embryos begin with three-toed hooves, then drop to one; they descended from creatures with more than just one toe. Occasionally, a horse is born with more than one hoof, or toe, on each foot (polydactyl horse), similar to its ancestors. Birds carry the genes necessary to grow teeth, minus a single vital protein; they descended from reptiles with teeth. Dolphin and whale embryos have hindlimb buds that vanish later; baleen whale embryos begin to develop teeth, then discard them (Coyne).

lanugo-bebe-vello-espalda-hombros

Premature infants still have some of their lanugo coat. They will soon lose it. via Mipediatra

It should also be noted that people with hypertrichosis are covered in fur like other primates — perhaps the reactivation of a “suppressed ancestral gene. In the course of evolution genes causing hair growth have been silenced and the appearance of hair in healthy humans can be explained by an erroneous reactivation of such genes.”

maxresdefault

Supatra “Nat” Sasuphan, who has hypertrichosis, is the Guinness Book of World Records holder for hairiest person. via Fox News

Quite interesting that God would give us genes to grow tails and fur.

Our fetal development, you likely noticed, actually mimics the evolutionary sequence of humanity. This is most noticeably true with our circulatory system, which first resembles that of fish, then that of amphibians, then that of reptiles, then finally develops into our familiar mammalian circulatory system (Coyne). Strange coincidences indeed.

But there are more. As one would expect if evolution occurred, fossils of creatures found in shallower rock more closely resemble species living today; fossils found in deeper, older sedimentary layers are more different than modern life. This pattern has never been broken by any fossil discovery, and supports Darwin’s idea (Coyne).

Similarly, consider islands. The species found on islands consistently resemble those on the nearest continent. This at first does not sound surprising, as one would predict that life (usually birds, insects, and plant seeds) that colonized islands would do so from the closest landmass. But the key word is “resemble.” What we typically see are a few species native to a continent (the ancestors) and an explosion of similar species on the nearby islands (the descendants). Hawaii has dozens of types of honeycreepers (finches) and half the world’s 2,000 types of Drosophila fruit flies; Juan Fernandez and St. Helena are rich in different species of sunflower; the Galapagos islands have 14 types of finches; 75 types of lemurs, living or extinct, have been documented on Madagascar, and they are found nowhere else; New Zealand has a remarkable array of flightless birds; and Australia has all the world’s marsupials, because the first one evolved there. To the evolutionist, a tight concentration of similar species on islands (and individual islands having their own special species) is the result of an ancestral explorer from a nearby landmass whose descendants thrived in a new environment unprepared for them (a habitat imbalance), reproducing and evolving like crazy. Thus a finch on a continent has a great number of finch cousins on nearby islands — like her but not the same species (Coyne). Darwin himself, still a creationist at the time, was shocked by the fact that each island in the Galapagos, most in sight of each other, had a slightly different type of mockingbird (Rogers).

To the creationist, God simply has an odd affinity for overkill on islands.

 

Shared DNA

In the 20th century, geneticists like Theodosius Dobzhansky synthesized Darwin’s theory with modern genetics, showing how the random, natural mutation of genes during the copying of DNA changes the physiology of lifeforms (should that altered state help a creature survive, it will be passed on to offspring). The study of DNA proved once and for all that Darwin was right. By mapping the genetic code of Earth’s lifeforms, scientists determined — and continue to show — that all life on Earth shares DNA.

DNA is passed on through reproduction. You get yours from your parents. You share more DNA with your parents and siblings than you do with your more distant relatives. In the same way, humans share more DNA with some living things than with others. We share 98% with chimps, 85% with zebra fish, 36% with fruit flies, and 15% with mustard grass. By share, we mean that 98% of DNA base pairs (adenine, guanine, cytosine, and thymine) are in the precise same spot in human DNA compared to chimp DNA. (These four nucleobases can be traded between species. There is no difference between them — we’re all made of the same biochemical stuff.) 

It is not surprising that creatures similar to us (warm-blooded, covered in hair, birth live young, etc.) are closer relatives than less similar ones. It’s no coincidence that apes look most like us and share the most DNA with us (and are able to communicate most directly with us, with one of our own languages, learning and holding entire conversations in American Sign Language). Evolutionary biologists used to use appearance and behaviors (such as gills or reproductive method) to suppose creatures were related, like the trout and the shark or the gorilla and the human being. But DNA now confirms the observations, as trout DNA is more similar to shark DNA than, say, buffalo DNA, and gorilla DNA is more similar to human DNA than, say, fruit fly DNA. 

But all life shares DNA, no matter how different (for a deeper analysis, see Rogers pp. 25-31, 86-92). That simple truth proves a common forefather. A god would not have to make creations with chimp and human DNA nearly the same, all the nucleobases laid out in nearly the same order; why do so, unless to suggest that evolution is true? When mapped out by genetic similarity, we see exactly what Darwin envisioned: a family tree with many different branches, all leading back to a common ancestor.  

tree-of-life_2000

Our tree of life. Click link in text above to zoom. via Evogeneao

 

Transitional Forms

Darwin predicted we would find fossils of creatures with transitional characteristics between species, for example showing how lifeforms moved from water to land and back again. Unfortunately, the discovery of such fossils has done nothing to end the debate over evolution. 

For instance, as transitional fossils began to accumulate, it became even more necessary to attack scientific findings on Earth’s age. If you can keep the Earth young, evolution has no time to work and can’t be true. So, as mentioned, creationists insist radiometric dating is flawed. Rocks cannot be millions of years old, thus the fossils encased within them cannot either. This amounts to nothing more than a denial of basic chemistry. Rocks contain elements, whose atoms contain isotopes that decay into something else over time at constant rates. So we can look at an isotope and plainly see how close it is to transformation. We know the rate, and thus can count backward. If researchers only had a single isotope they used, perhaps creationist would have a prayer at calling this science into question. But rubidium becomes stronium. Uranium changes to lead, potassium to argon, samarium to neodymium, rhenium to osmium, and more (see Rogers pp. 73-80 to explore further). This is something anyone devote study to, grab some rocks, and measure themselves. All creationists can do is say we aren’t positive that “the decay rate has remained constant”! Can you imagine someone saying that during Isaac Newton’s time gravity’s acceleration wasn’t 9.8 meters per second squared? Anyone can make stuff up!

(You’ll find most denials of evolution rest on denials or misunderstandings of the most basic scientific principles. Some creationists insist evolution is false because it betrays the Second Law of Thermodynamics, which states that the energy available for work in a closed system will decrease over time — that things fall apart. So how could simple mechanisms become more complex? How could life? What they forget is that the Earth’s environment is not a closed system. The sun provides a continuous stream of new energy. Similarly, some believe in “irreducible complexity,” the idea that complex systems with interconnected parts couldn’t evolve because one part would have no function until another evolved, therefore the first part would never arise, and thus neither could the complex system. But the “argument from complexity” fails as usual. [Other arguments, such as the “watchmaker” and “747” analogies, are even worse. Analogy is one of the weakest forms of argument because it inappropriately pretends things must be the same. No, a watch cannot assemble itself. That does not mean life does not evolve. Analogies fighting evidence are always doomed.] Biologists have discovered that parts can first be used for other tasks, as was shown for the bacterial flagellum, the unwise centerpiece of creationist Michael Behe’s skepticism. Independent parts can evolve to work together on new projects later on. Rogers writes:

Many hormones fit together in pairs like a lock and key. What good is the lock without the key? How can one evolve before the other? Jamie Bridgham and his colleagues studied one such pair and found that the key evolved first — it formerly interacted with a different molecule. They even worked out the precise mutations that gave rise to the current lock-and-key interaction.

A part of this process is sometimes scaffolding, where parts that helped form a complex system disappear, leaving the appearance that the system is too magical to have arisen. The scaffolding required to build our bridges and other structures is the obvious parallel.)

Let’s consider the fossils humanity has found. Tiktaalik was a fish with transitional structures between fins and legs. “When technicians dissected its pectoral fins, they found the beginnings of a tetrapod hand, complete with a primitive version of a wrist and five fingerlike bones… [It] falls anatomically between the lobe-finned fish Panderichthys [a fish with amphibian-like traits], found in Latvia in the 1920s, and primitive tetrapods like Acanthostega [an amphibian with fish-like traits], whose full fossil was recovered in Greenland not quite two decades ago.” Tiktaalik had both lungs protected by a rib cage and gills, allowing it to breathe in air and water, like the West African lungfish and other species today. Its fossil location was actually predicted, as researchers knew the age and freshwater environment such a missing link would have to appear in (Coyne).

Ambulocetus had whale-esque flippers with toes (Rodhocetus is similar). Pezosiren was just like a modern manatee but had developed rear legs. Odontochelys semitestacea was an aquatic turtle with teeth. Darwinius masillae had a mix of lemur traits and monkey traits. Sphecomyrma freyi had features of both wasps and ants. Archaeopteryx was more bird-like than other feathered dinosaurs (that’s feathered reptiles), yet not quite like modern birds. Its asymmetrical feathers suggest it could fly. The Microraptor gui, a dinosaur with feathered arms and legs, could likely glide. Other feathered dinosaurs were found fossilized sleeping with their heads tucked under their forearms or sleeping on a nest of eggs, just like modern birds (Coyne; see also Dawkins pp. 145-180).

Australopithecus afarensis, Australopithecus africanus, Paranthropus, Homo habilis, Homo erectus, and many more species had increasingly modern human characteristics. Less and less like a primate, closer and closer to modern Homo sapiens. Fossils indicate increasing bipedality (walking upright on two legs), smaller jaws and teeth, increasingly arching feet, larger brains, etc. (Also important to note are the increasingly complex tools and shelters found with such fossils. Homo erectus left behind huts, spears, axes, and bowls. Our planet had not-fully-human creatures crafting quite human-like things. Think on that. See The History of the World, J.M. Roberts.)

fossil-hominid-skulls.jpg

A: chimp skull. B-N: transitional species from pre-human to modern human. via Anthropology

It doesn’t stop there, of course. Evolution can be seen in both the obvious and the minuscule differences between species.

See for example “From Jaw to Ear” (2007) and “Evolution of the Mammalian Inner Ear and Jaw” (2013). It was theorized that three important bones of a mammal’s ear — the hammer, anvil, and stirrup — were originally part of the jaw of reptilian ancestors (before mammals existed). In modern mammals there is no connecting bone between the jaw and the three inner-ear bones, but if there was an evolution from reptilian jaw bone to mammalian inner-ear bone, fossils should show transitional forms. And they do: paleontologists have found fossils of early mammals where the same bones are used for hearing and chewing, as well as fossils where the jaw bones and inner-ear bones are still connected by another bone.

Creationists have a difficult time imagining how species could evolve from those without wings to those with, from those that live on land to water-dwellers, from aquatic lifeforms back to land lovers, and so on, because they believe intermediary, transitional traits would be no good at all, could not help a creature survive. “What good is half a wing?”

Yet today species exist that show how transitional traits serve creatures well. Various mammals, marsupials, reptiles, amphibians, fish, and insects glide. It is easy to envision how reptiles could have evolved gliding traits followed by powered flight over millions of years. Or consider creatures like hippos, which are closely related to and look like terrestrial mammals but spend almost all their days underwater, only coming ashore occasionally to graze. They mate and give birth underwater, and are even sensitive to sunburn. Give it eons, and couldn’t such species change bit by bit to eventually give up the land completely? The closest living genetic relatives of whales are in fact hippos (Coyne). And finally, what of the reverse? What of ocean creatures that head to land? Crocodiles can gallop like mammals (up-down spine flexibility) as well as walk like lizards (right-left spine flexibility; see Dawkins). The mangrove rivulus, the walking catfish, American eels, West African lungfish, four-eyed fish, snakeheads, grunions, killifish, the anabas, and other species leave the waters and come onto land for a while, breathing oxygen in the air through their skin or even lungs, flopping or slithering or squirming or walking to a new location to find mates, food, or safety. Why is it so difficult to imagine a species spending a bit more time on land with each generation until it never returns to the water?

“Half a wing” is not a thing. There are only traits that serve a survival purpose in the moment, like membranes between limbs for gliding. Traits may develop further, they may remain the same, they may eventually be lost, all depending on changes in the environment over time. Environment (food sources, mating options, predators, habitability) drives evolutionary changes differently for all species. That’s natural selection. When some members of a species break away from the rest (due to anything from mudslides to migration to mountain range formation), they find themselves in new environments and evolve differently than the friends they left behind. Coyne writes, “Each side of the Isthmus of Panama, for example, harbors seven species of snapping shrimp in shallow waters. The closest relative of each species is another species on the other side.” Species can change a little or change radically, unrecognizably, but either way they can be called a new species — in fact, unable to reproduce with their long-lost relatives, because their genes have changed too greatly. That’s speciation.

There is no question that the fossil record starts with the simplest organisms and, as it moves forward in time, ends with the most complex and intelligent — all beginning in the waters but not staying there. Single-cell organisms before multicellular life. Bacteria before fungi, protostomes before fish, amphibians before reptiles, birds before human beings.

If they wish, creationists can believe the fossil record reflects the chosen sequence of a logical God, even if it does not support the Judeo-Christian creation story (in which birds appear on the same “day,” Day 5, as creatures that live in water, before land animals, which appear on Day 6; the fossil record shows amphibians, reptiles, and mammals appearing long before birds — and modern whales, being descendants of land mammals, don’t appear until later still, until after birds, just 33 million years ago). Yet they must face the evidence and contemplate what it indicates: that a deity created fish, then later fish with progressively amphibious features, then later amphibians; that he created reptiles, then later reptiles with progressively bird-like features, then birds; and so forth. No discovery has ever contradicted the pattern of change slowly documented since Darwin. God is quite the joker, laying things out, from fossils to DNA, in a neat little way to trick humans into thinking we evolved from simpler forms (note: some creationists actually believe this).

Yes, the believer can simply claim these were all their own species individually crafted by God, with no ancestors or descendants who looked or acted any different. The strange fact that we have birds that cannot fly and mammals in the oceans that must surface for air doesn’t provoke the kind of critical thinking one might hope for. It’s all just a creative deity messing with animals!

 

Watching Evolution Occur

069.jpg

Renee, an albino kangaroo at Namadgi National Park, Australia. via Telegraph

Most creationists are in fact quite close to accepting evolution as true.

First, they accept that genes mutate and can change an individual creature’s appearance. They know, for instance, about color mutations. We’re talking albinism, melanism, piebaldism, chimeras, erythristics, and so on.

Second, most creationists accept what they call “microevolution”: mutations help individuals survive and successfully reproduce, passing on the mutation, changing an entire species generation by generation in small ways, but of course not creating new species. They accept that scientists have observed countless microevolutionary changes: species like tawny owls growing browner as their environments see less snowfall, Trinidad guppies growing larger, maturing slower, and reproducing later when predators are removed from their environments, green anole lizards in Florida developing larger toepads with more scales to escape invaders, and more, all within years or decades. They understand evolution is how some insects adapt to pesticides and some pathogens, like HIV and the tuberculosis bacterium, adapt to our drugs over time, and how we human beings can create new viruses in the lab. They acknowledge that humanity is responsible, through artificial selection, or selective breeding, for creating so many breeds of dogs with varying appearances, sizes, abilities, and personalities (notice the greyhound, bred for speed by humans, closely resembles the cheetah, bred for speed by natural selection). In the same way, we’ve radically changed crops like corn and farm animals like turkeys (who are now too large to have sex), and derived cabbage, broccoli, kale, cauliflower, and brussels sprouts from a single ancestral plant, to better sate our appetites, simply by selecting individuals with traits we favor and letting them reproduce.

120711-BananaPhoto-hmed-1040a_files.grid-6x2.jpg

Wild banana (below) vs. artificially selected banana. via NBC News

The evidence presented thus far should push open-minded thinkers toward the truth, but for those still struggling to make the jump from microevolution to evolution itself, we are not done yet. The resistance is understandable given that small changes can easily be observed in the lab or nature, but large changes require large amounts of time — thousands, millions of years — and thus we mostly (but not entirely) have to rely on the evidence from DNA, fossils, embryology, and so on. Here are some points of perspective that can bridge the gap between small changes and big ones.

1. Little changes add up. If you accept microevolution, you accept that species can evolve to be smaller or bigger, depending on what helps them survive and reproduce. Scott Carroll studied soapberry bugs in the U.S. and observed some colonizing bigger soapberry bushes than normal; he predicted they would also grow larger, as larger individuals would be more successful at reaching fruit seeds. Over the course of a few decades, the bugs’ “beak” length grew 25%. That’s significant. Now imagine what could theoretically be done with more time. As Coyne writes, “If this rate of beak evolution was sustained over only ten thousand generations (five thousand years), the beaks would increase in size by a factor of roughly five billion…able to skewer a fruit the size of the moon.” This is unlikely to happen, but it shows how small changes compound into dramatic results. Imagine traits other than size — all possible traits you can think of — changing at the same time, and evolution doesn’t sound so impossible.
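Coyne’s “factor of five billion” is just compound growth. Here is a back-of-envelope sketch of that arithmetic; the assumption that the observed 25% growth took roughly 100 generations is mine, chosen to illustrate the compounding, not a figure from Carroll’s study.

```python
# Observed: beak length grew 25% over the study period.
observed_growth = 1.25

# Assumption (for illustration): that growth took ~100 generations,
# i.e. roughly 50 years at two generations per year.
generations_observed = 100

# Constant per-generation growth factor implied by the observation.
per_generation = observed_growth ** (1 / generations_observed)

# Extrapolate the same rate over ten thousand generations (five thousand years).
factor = per_generation ** 10_000
print(f"Size factor after 10,000 generations: {factor:.2e}")
```

Under these assumptions the extrapolated factor comes out near five billion, matching Coyne’s figure. The point is not that beaks would actually grow that large, but that tiny per-generation changes, sustained, produce astronomical cumulative change.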

2. Genes are genes. This relates closely to the point above. If some genes can mutate, why can’t others? Genes determine everything about every creature. People who believe in microevolution accept that genes for size or color can change, but not genes for where your eyes are, whether you’re warm- or cold-blooded, whether you have naked skin or a thick coat of fur, whether you have a hoof or a hand, and so on. But there is no scientific basis whatsoever for this dichotomy of the possible. It’s simply someone claiming “These genes can mutate but not these, end of story” to protect the idea of intelligent design. Genes are genes. They are all simply sequences of nucleotides. As far as we know, no gene is safe from mutation.

2-goat.jpg

Octogoat, a goat with eight legs, born in Kutjeva, Croatia. via ABC News

3. Mutations can be huge. We’ve seen how humans can have tails, but we also see “lobster claw hands,” rapid aging, extra limbs, conjoined twins, and other oddities. Consider other mutations: snakes with two heads, octopi with only six tentacles, ducks with four legs, cats with too many toes. For the common fruit fly, the antennapedia mutation will mean you get legs where your antennae are supposed to be! Dramatic mutations are possible. Survival is possible. Passing on new, weird traits is possible. With evolution, sometimes groups with new traits totally displace and eliminate the ancestral groups; sometimes they live side-by-side going forward. If you came across a forest and discovered one area was occupied by two-headed snakes and another by single-headed snakes, all other traits being the same, wouldn’t you be tempted to call them different species? Declare something new had arisen on Earth?

4. We are currently watching evolution occur. Scientists have observed speciation. They’ve taken insects, worms, and plants, put small groups of them in abnormal environments for many generations, and then seen they can no longer reproduce with cousins in the normal environments because they have evolved. It’s easy to create new species of fruit flies in particular because their generations are so short. Evolution for other species is typically much slower, but significant changes are being observed.

Say you were instead on the African Savanna and came upon two groups of elephants. They are the same but for one startling difference: one group has no tusks. Like two-headed snakes, what a bold difference in appearance! Should we classify them as different species or the same? (Technically, they aren’t different species if they can still produce offspring together, but in the moment you aren’t sure.) Well, African elephants are increasingly being born without tusks. After all, those without are less likely to be killed by poachers for ivory. This is natural selection at work. Could not a changing environment and millions of years change more? Size, color, skin texture, hair, skeletal layout, teeth, and all other possible traits determined by all other genes?

Next, take a remarkable experiment involving foxes launched by Dmitry Belyaev and Lyudmila Trut in the Soviet Union in the late 1950s, which Trut is still running to this day. No, we can’t watch a species for 500,000 years to see dramatic evolution in action. But 60 years gives us something.

At the time, biologists were puzzled as to how dogs evolved coats so different from those of wolves, since they couldn’t figure out how dogs could have inherited those genes from their ancestors. Belyaev saw silver foxes as a perfect opportunity to find out. He believed the key factor selected for was not morphological (physical attributes) but behavioral — more specifically, that tameness was the critical factor.

In other words, Belyaev wanted to see if foxes would undergo changes in appearance if they evolved different behaviors. So Belyaev and Trut set about taming wild silver foxes.

17686.jpg

Wild silver fox. via Science News

They took their first generation of foxes (which were only given a short time near people) and simply allowed the least aggressive to breed. They repeated this with every generation. They had a control group that was not subjected to selective breeding.

The artificial selection, of course, succeeded in changing fox behavior. The foxes became much more open to humans, whining for attention, licking them, wagging their tails when happy. But there was more:

A much higher proportion of experimental foxes had floppy ears, short or curly tails, extended reproductive seasons, changes in fur coloration, and changes in the shape of their skulls, jaws, and teeth. They also lost their “musky fox smell.”

Spotted coats began to appear. Trut wrote that skeletal changes included shortened legs and snouts as well. Belyaev said they started to sound more like dogs (Dawkins). Geneticists are now seeking to isolate the genes related to appearance that changed when selectively breeding for temperament.

Belyaev was right. And his foxes, through evolution, came to look more and more like dogs. This is the same kind of path that some wolves took when they evolved into dogs (less aggressive wolves would be able to get closer to humans, who probably started feeding them, aiding survival; tameness increased and physical changes went with it).

If such changes can occur in just 60 years, imagine what evolution could do with a hundred million years.

NOVA_Dogs_crop.jpg

Dr. Lyudmila Trut with domesticated silver fox. via WXXI

 

In the Beginning

It’s true, scientists are still unsure how life first arose on Earth. And because it remains a mystery without hard evidence, scientists offering hypotheses and speculations openly acknowledge them as such. Note that’s a big difference from evolution, which scientists speak confidently about due to the wealth of evidence.

But one professor at MIT believes that far from being unlikely, nonliving chemicals becoming living chemicals was inevitable.

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat… When a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

Researchers have discovered lipids, proteins, and amino acids beneath the seafloor, suggesting the chemical interaction between the mantle and seawater could produce the building blocks of life. From there, time and proper conditions could give rise to the first self-replicating molecule. Evolution would then continue on, spending billions of years developing the diverse flora and fauna we see today (a single cell leading to complex life under the right conditions should not be so shocking; as J.B.S. Haldane said, “You did it yourself. And it only took you nine months”).

Determining precisely how the first cell arose is the next frontier of evolutionary biology, and it is exciting to be here to witness the journey of discovery. New findings and experiments will wipe away “watchmaker” arguments used against the first cell. They will once again crush the “God gap,” the bad habit of the faithful to fill gaps in our scientific knowledge with divine explanations. I imagine in our lifetime someone will successfully complete Stanley Miller’s famous 1950s experiment, in which he tried to recreate the Earth’s early conditions and create life itself.

Yet lack of knowledge concerning the beginning of life in no way hurts the case for evolution. Evolution is proven, as definitively as the fact that the Earth orbits the sun.

Why I Am Not a Communist (Nor an Anarchist)

Having criticized the authoritarian communist states that arose in the 20th century, in particular the Bolsheviks in Russia for crushing worker power, and having also explored the basic tenets of anarchism (and how it is the father of the blasphemous bastard child that is anarcho-capitalism/libertarianism), I wanted to devote some time to musing over the merits of communism and anarchism relative to socialism.

While all anti-capitalist, these ideologies are not the same and should not be confused. I therefore include basic outlines (leaving out many different subtypes of each) before considering their relative advantages and downsides. I attempt to present each in their most ethical, idealized form (most free, most democratic, and so forth). Criticisms of ideologies should not be mistaken as disrespect for my Marxist comrades who think differently.

Communism destroys capitalism from the top-down. The government, as an instrument of the people, owns all workplaces and organizes the economy and the workers according to a central plan that meets citizen needs. Under this system, competition can be wholly and more easily eliminated, making the enormous pressure to put profits over people a thing of the past. Wasteful and redundant production goes away with it, meaning more workers and resources for more important tasks that build a better society (for example, no more energy and billions spent on advertising, instead diverted to education). Further, the national wealth can be easily divided up among the people, public sector salaries enriching all.

However, communism entails enormous challenges. It surely requires giving up the full freedom to choose your line of work – if your community or national plan only allows for a certain number of bookstores or bookstore workers, there may not be room for you. You would be rejected upon applying with the local or national government to open a new bookstore (as you would surely have to do for a plan, and thus communism, to function) or upon applying for a job at an established bookstore. Under communism, workers are supposed to “own” their workplaces because they “own” the State, but this is a rather indirect form of control that leaves some people wanting. You may have options regarding the work you do, but you will have to sacrifice your interests for the sake of the plan.

Of course, as long as you don’t find yourself under authoritarian communism, you would help decide the plan, at the ballot box. But how much would you help? That raises a second challenge: can communism function without representative government (or a worse concentration of power)? A common notion is that the workers, the people, would elect members of their worker councils to participate in the design and execution of the national plan (or elect representatives from their geographic community, as is done in politics today). So if you worked in auto manufacturing while waiting for a bookstore job to open up, you would run or elect someone for the honor and task of representing the American Auto Workers Council on the National Planning Committee. The representatives, using a broad array of data on what goods and services are needed where, and what resources and workers will be required to create and distribute them, would craft a central plan for a certain number of years.

Can this enormous power be socialized further? We understand the risks of representative governance – concentrated power is more easily influenced and corrupted, and doesn’t give people a direct say over their destinies. Even with the disappearance of capitalist businesses, a small group of decision-makers would still face enormous pressures from countless localities, people, and organizations. We could see to it that the people have a direct up or down vote on the plan after the representatives craft it (or other checks and balances). But eliminating a representative structure entirely seems impossible. Imagine the daunting task of voting on how much corn the U.S. should grow in a given three-year period. On how many more workers are needed to produce a higher number of epipens. On how many homes should be built in a city on the other side of the country. (It very much seems that you must make this vote on national matters, rather than simply voting on what your local community needs. If each municipality democratically decided what they needed, these decisions would have to be reconciled at the national level, as there may not be the resources to do everything every community decides to do. Like the would-be bookstore worker, some communities will not get what they wanted, making the vote a sham. And, naturally, trying “communism” at local levels, where communities can only use the workers and resources within their communities, leaves massive inequities between regions. It might be possible to instead divide up the national wealth to each region somewhat according to its need and then let each decide how to use its allotted funds, but how much each city or town should get would also be impossible to sensibly sort out using direct democracy.)

Organizing an economy is a monumental task requiring mountains of accurate, up-to-date data. How difficult for an elected body of experts – a full-time job with a high risk of costly mistakes and turmoil. Can workers devote the time and study to make educated decisions on what to produce, their quantities, prices, and required manpower and resources, for an entire country? Would not voting itself, on thousands or hundreds of thousands of economic details, take days, weeks, or months? And if the people cannot be expected to plan the economy via direct vote, how can they be expected to make an informed up-down vote on a plan formulated by others? There seems to be no escaping representative government with communism. These challenges suggest this system may not be preferable.

Anarchism does away with capitalism from the bottom-up. Workplaces would be owned and run by workers, would federate to coordinate activities rather than compete, and local communities would make all decisions democratically. The State, as a hierarchical structure like capitalism, would be abolished. In this way, people would be as free as possible from compulsion, authority, and concentration of power, enjoying individual freedoms as long as they do not hurt others. You’d have equal power to make decisions that affect you, joining in your local citizen assembly and worker council. Anarchism harkens back to the era of “primitive communism” we explored elsewhere.

Anarchists have differing views on whether capitalism can be dismantled after the State. Does the State have a vital role to play in capitalism’s eradication? H.G. Wells, among others, thought only socialism could make anarchism possible:

Socialism is the preparation for that higher Anarchism; painfully, laboriously we mean to destroy false ideas of property and self, eliminate unjust laws and poisonous and hateful suggestions and prejudices, create a system of social right-dealing and a tradition of right-feeling and action. Socialism is the schoolroom of true and noble Anarchism, wherein by training and restraint we shall make free men.[1]

The challenge with anarchism is that, like “local communism,” it leaves communities to fend for themselves, meaning poorer peoples beside richer ones. Unless, of course, communities worked together, sharing workers and resources, in a movement toward the integration of larger and larger units and the necessary joint administration (however democratic), weakening local control and journeying down the path toward what are essentially nations. Further, if you avoided that, while a spirit of human oneness could indeed rise with the disappearance of nations, one wonders what is to stop factionalism based on community identity. Is pride and loyalty to a neighborhood, town, or city not predictable? One worries about true global solidarity. In the same vein, individual anarchist communities seem vulnerable to rivalry and conflict, especially if they differ in wealth, habitability, and so on. It all sounds a bit like the city-states of ancient Greece, albeit less capitalistic and more democratic. At the least, such a world seems more prone to conflict than one with a single government spanning all continents and meeting the needs of all people. Some form of State may be preferred for its ability to protect people.

Skeptics of anarchism may also see that statement as the answer to the question of crime, which, while being greatly reduced, is not likely to disappear entirely with the abolition of poverty (think of crimes of passion over infidelity, for instance). Yet anarchists typically despise the police – the personification of force, authority, and State violence. Can the police be made a thing of the past?

Socialist George Orwell wrote, “I worked out an anarchistic theory that all government is evil, that the punishment always does more harm than the crime and the people can be trusted to behave decently if you will only let them alone.” But he concluded, “It is always necessary to protect peaceful people from violence. In any state of society where crime can be profitable you have got to have a harsh criminal law and administer it ruthlessly.”[2]

Here Orwell lacks nuance and vision – of community policing, proportionate punishment, restorative justice, rehabilitation, and so on – which do not require a State; they can be done on an intimate, local level. Skeptics can rest easy on this point. The relevant task of anarchism (and socialism or communism) is to build a more humane, peaceful, fair criminal justice system that does not morph into what came before.

Then there’s socialism. “I should tie myself to no particular system of society other than that of socialism,” as Nelson Mandela would say.[3] Socialism also eliminates capitalism from the bottom-up. As under anarchism, workers collectively own their workplaces, making decisions democratically and equitably sharing the profits of their labor, and such worker cooperatives can federate with each other to reduce competition and coordinate their creations and services. The State exists to serve various needs of the people, such as guaranteed healthcare and employment, and is in fact under the people’s direct democratic control (this was explored in detail in What is Socialism?). The problems with anarchism and communism can be avoided. Socialism is the human future.

 

Notes

[1] New Worlds for Old, H.G. Wells

[2] The Road to Wigan Pier, George Orwell

[3] 1964 court speech, Nelson Mandela. http://www.motherjones.com/politics/2013/12/nelson-mandela-epitaph-own-words-rivonia/

The Scope of False Sexual Assault Allegations

When conservatives are confronted by the rise of a “liberal” cause, many find and point to a small problem in order to discredit or divert attention from the immense problem liberals are attacking.

It’s an unhealthy mix of the whataboutism fallacy (citing wrongs of the opposing side instead of addressing the point) and the false equivalence fallacy (describing situations as equivalent [I’ll add “in scope”] when they are not). We observe this during talk on racial violence, when many conservatives pretend hate crimes against whites are just as common as hate crimes against people of color; see “On Reverse Racism.”

Lately, this fallacy has been on full display as high-profile men across the country were accused of sexual assault and harassment, many fired or urged to resign. In this frenzy of allegations, some Americans see and cheer a surge in bravery and collective solidarity among victims, inspired by each other and seeking justice, while others see and decry a male “witch hunt,” with evil women growing bolder about their lies, perhaps on the George Soros payroll. Where you land is a fairly decent predictor of your political views. Who was accused also determined for many which women to believe, with some conservatives supporting Republican Roy Moore through the scandal over his assaults on underage girls while attacking Democrat Al Franken for groping women. Sadly, some liberals did the reverse. I witnessed a left-leaning acquaintance or two trying to discredit the accusations against Franken, accusations he publicly apologized for, by slandering the victims. Still, it is typically conservatives (often sexually frustrated men) who, when they encounter liberals talking about rape, sexual assault, sexual harassment, toxic masculinity, and so forth, bring up false rape accusations.

One comment on a mediocre article Men’s Health shared, on how to make sure you have a woman’s consent, typified this. There were of course countless like it, many poorly written: “And remember if she regrets it the next day you’re still fucked”; “I bring my attorney and a notary on all dates and hook ups”; “There’s no such thing as consent anymore, it’s a witch hunt. Just say no gentleman”; “Don’t forget guys… If you have drank 12 drinks and she has 1 sip of beer…… You raped her.” Still others were angry at the article’s very existence: “Men’s health turning into click bate leftist agenda”; “Did a feminist write this?”; “Did a woman write this?” It’s sad that consent is now considered a liberal, feminist scheme. But one comment got much attention and support, likely because people found it thoughtful and measured for some odd reason:

This is a touchy subject. Yes, respect women—We all know that. Have a woman’s consent—Yes, we all know that. Do not rape or sexually assault a woman—Yes we all know that. We respect the rules. However, there are some women that exploit and take advantage of the rules. It’s sad to say, there are some out there that falsely accuse a man of rape or sexual assault—ruining their lives. Being a man in today’s era, I’m afraid to ask a woman on a date. I feel sometimes a man needs a contract just to protect himself. Yes, this might sound objectionable and supercilious—but you can’t be too careful nowadays. We live in a different time now. Men: We need to change our attitudes and treatment of women. However, it’s okay that we protect ourselves—and we shouldn’t be demonized or vilified for doing so. I don’t want to be viewed or portrayed as the enemy, nor be apologetic for being a man.

An amusing piece of writing. “We all know” not to rape, assault, or harass women? If the collective male “we” legitimately “knew,” such things would be a thing of the past and a primer on consent unnecessary. “We live in a different time” where men are “afraid to ask a woman on a date”! If you’re going to “protect” yourself in some way, you wouldn’t be “demonized” for actually getting consent in some formal sense; you would be only if you used illegal and unethical methods to “protect” yourself, like the secret filming of sex. And where are these women asking men to apologize for being a man, rather than for specific behaviors or attitudes that make them uncomfortable, scared, unsafe, or physically violated?

This is a perfect example of the fallacy above. “Men sexually assault women and shouldn’t, but what about the women who make false accusations?” The latter part is clearly his main concern — he didn’t stop by to condemn rapists; he came with another purpose. They may not intend it or even realize it (some do), but when men (or women) do this, they position false reports as a problem of the same significance, or nearly the same significance, as actual sex crimes. As if the scope, the prevalence, were comparable. That’s what taking a conversation on consent and redirecting it to one of false accusations does. It says, “This is what’s important. It’s what we should be talking about.” It’s like bringing up asthma when everyone’s discussing lung cancer. It deflects attention away from a problem that is much more severe. It’s a subtle undermining of the credibility of rape victims. It’s not wrong to discuss small problems, of course, but they should always be kept in perspective. In my view, comments about hate crimes against whites or false accusations against men should never be uttered without the enormous asterisk that these are minuscule percentages of overall hate and sex crimes. In that way, we can think about others first. We can protect the credibility of real victims. We can remain rooted in the facts — not imply a small problem is large, or vice versa. Naturally, including those caveats undermines the usual function of bringing up these issues, but no matter.

Yes, lying about sex crimes is an issue that exists. Yes, there should be some legal punishment for such an immoral act (nowhere near the punishment for sexual assault and harassment, obviously, because these are not in any way morally equivalent crimes). Yes, people are innocent until proven guilty, which is why men are safe from prison until they have their day in court, even if they face social consequences like losing a job due to presumed guilt — which you can oppose on ethical grounds, though that ground is less stable than you might hope, especially when a man is accused by a coworker, family member, or someone else in close proximity. Is it most ethical to oppose a firing until a trial and risk keeping a rapist around the workplace? Putting others in danger? Forcing a victim to clock in next to him each day? Or is it most ethical to fire him and risk tearing down the life of an innocent man? It’s an unpleasant dilemma for any employer, university administrator, or whomever, but ethically there’s not much question. One risk is far graver, thus the answer is simple. This only grows more axiomatic when we acknowledge the likelihood of events.

The prevalence of proven false accusations of sexual assault is somewhere between 2% and 8% of cases. The National Sexual Violence Research Center documents a 2006 study of 812 cases that found 2.1% were false reports, while a 2009 study of 2,059 cases and a 2010 study of 136 cases estimated 7.1% and 5.9%, respectively. Research from 2017 revealed a 5% false claim rate for rape. The Making a Difference Project, using data from 2008, estimates 6.8%. These numbers are mirrored in prior American decades and in similar countries. While we can acknowledge that some innocent people in prison never see justice and are never set free, since 1989 only 52 men have been released from prison after it was determined their sexual assault charges were based on lies. Compare this to 790 murder exonerations; the number of people in state prisons for murder vs. sexual assault/rape is about the same (though the former crime is far less common than the latter), making the low exoneration rate for sex crime convictions all the more significant.

Myriad definitions of both “false report” and “sexual assault” make the precise percentage difficult to nail down, and these statistics only address proven false reports (there are many cases in limbo, as conservative writers are quick to point out), but this research gives us a general idea. Reports of high percentages of false claims are typically not academic studies or have rather straightforward explanations, for example when Baltimore’s “false claim” rate plunged from 31% to under 2% when the police actually went through some training and “stopped the practice of dismissing rapes and sexual assaults on the scene”! It’s remarkable how legitimate investigations and peer-reviewed research can bring us closer to the truth.

In other words, when observing any sexual misconduct scandal, there is an extremely high chance the alleged victim is telling the truth. This is why we believe women. This is why they, not accused men, should be given the benefit of the doubt. It’s why the moral dilemma for employers and the like is hardly one at all. Were precisely 50% of sexual assault allegations lies, it would still be most ethical to take the risk of firing a good man rather than the risk of keeping a predator around. But since women are almost always telling the truth? Well, the decision is that much easier and more ethical.

In the U.S., there are some 321,500 rapes and sexual assaults per year, and 90% of adult victims are women (you’ve probably noticed how “men are raped too” is used in a manner similar to all this). One in six women is a survivor of rape or attempted rape. For every 1,000 rapes, 994 perpetrators (99%) will never go to prison.

Which U.S. Wars Actually Defended Our Freedoms?

When pondering which of our wars literally protected the liberties of U.S. citizens, it is important to first note that war tends to eradicate freedoms. Throughout U.S. history, war often meant curtailment of privacy rights (mass surveillance), speech rights (imprisonment for dissent), and even the freedom to choose your own fate (the draft).

It also should be stated upfront that this article is only meant to address the trope that “freedom isn’t free” — that military action overseas protects the rights and liberties we enjoy here at home (even if virulent bigotry meant different people had very different rights throughout our history and into our present). It will not focus on the freedoms of citizens in other nations that the U.S. may have helped establish or sustain through war, nor non-American lives saved in other countries. However, it will address legitimate threats to American lives (such a right to life is not de jure, but expected).

As a final caveat, I do not in any way advocate for war. That has been made exceptionally clear elsewhere. While violence may at times be ethically justified, in the vast majority of cases it is not, for a broad array of reasons. So nothing herein should be misconstrued as support for imperialism or violence; rather, I merely take a popular claim and determine, as objectively as possible, if it has any merit. To a large degree I play devil’s advocate. To say a war protected liberties back home is not to justify or support that war, nor violence in general, because there are many other causes and effects to consider which will go unaddressed.

In “A History of Violence: Facing U.S. Wars of Aggression,” I outlined hundreds of American bombings and invasions around the globe, from the conquest and slaughter of Native Americans to the drone strikes in Yemen, Pakistan, Somalia, and elsewhere today. It would do readers well to read that piece first to take in the scope of American war. We remember the American Revolution, the Civil War, the World Wars, Korea, Vietnam, Iraq, and the War on Terror. But do we recall our bloody wars in Guatemala, Haiti, Mexico, and the Philippines? Since its founding in 1776, 241 years ago, the United States has been at war for a combined 220 years, as chronicled by the Centre for Research on Globalization (CRG). 91% of our existence has been marked by violence.

How many of those conflicts protected the liberties of U.S. citizens? How many years did the military literally defend our freedoms?

Well, what precisely poses a threat to our freedoms? We can likely all agree that freedoms are 1) rights to actions and words that can be expressed without any retribution, guaranteed by law, and 2) freedom from miseries like enslavement, imprisonment, or death. Thus, a real threat to freedom would require either A) an occupation or overthrow of our government, resulting in changes to or violations of established constitutional liberties, or B) invasions, bombings, kidnappings, and other forms of attack. If you read the article mentioned above, it goes without saying the U.S. has much experience in assaults on the freedoms of foreign peoples. Much of our violence was the violence of empire, with the express and sole purpose of seizing natural resources and strengthening national power.

So what we really need to ask is how close the U.S. has come to being occupied, and how often U.S. citizens have actually been attacked. We must answer these questions honestly. Should it be said that fighting Native American or Mexican armies protected freedom? No; our nation exists only because Europeans invaded their lands. We will include no war of conquest, from our fight with Spain over Florida to our invasion of Hawaii. We killed millions of innocent people in Vietnam. Were they going to attack America or Americans? No, we simply didn’t want the Vietnamese to (democratically) choose a Communist government. Now, you can believe that justifies violence if you wish. But the Vietnam War had nothing to do with defending our freedoms or lives. Neither did our invasion of Cuba in 1898. Nor our occupation of the Dominican Republic starting in 1916. Nor our wars with Saddam’s hopelessly weak Iraq. Nor many others.

Using these criteria, my answer to the titular question is that only four wars, representing 19 years, could reasonably meet Qualification 1 (some also meet the second qualification). These conflicts protected or expanded our liberties by law:

The American Revolution (1775-1783): While the Revolution was partly motivated by Britain’s moves to abolish slavery in its colonies, it did expand self-governance and lawful rights for white male property-holders.

The War of 1812 (1812-1815): While U.S. involvement in the War of 1812 had imperialist motives (expansion into Indian and Canadian territories) and economic motives (preserving trade with Europe), Britain was kidnapping American sailors and forcing them to serve on their ships (“impressment”). This war might have simply been included below, in Qualification 2, except for the fact that Britain captured Washington, D.C., and burned down the Capitol and the White House — the closest the U.S. has ever come to foreign rule.

The Civil War (1861-1865): Southern states, in their declarations of independence, explicitly cited preserving slavery as their motive. Four years later, slavery was abolished by law. Full citizenship, equal protection under the law, and voting rights for all men were promised, if not given.

World War II (1941-1945): The Second World War could also have simply been placed in Qualification 2 below. Beyond freeing Southeast Asia and Europe from the Axis, we would say the U.S. was protecting its civilians from another Pearl Harbor or from more German submarine attacks on trade and passenger ships in the Atlantic. Yet it is reasonable to suppose the Axis also posed a real threat to American independence, the only real threat since the War of 1812.

Had Germany defeated the Soviet Union and Britain (as it might have without U.S. intervention), establishing Nazi supremacy over Europe, it is likely its attention would have turned increasingly to the United States. Between the threat of invasion from east (Germany) and west (Japan), history could have gone quite differently.

German plans to bomb New York were concocted before the war; Hitler’s favorite architect described him as eager to one day see New York in flames. Before he came to power, Hitler saw the U.S. as a new German Empire’s most serious threat after the Soviet Union (Hillgruber, Germany and the Two World Wars). Some Japanese commanders wanted to occupy Hawaii after their attack, to threaten the U.S. mainland (Caravaggio, “‘Winning’ the Pacific War”). After Pearl Harbor, the U.S. did not declare war on Germany; it was the reverse. Japan occupied a few Alaskan islands, shelled the Oregon and California coasts, dropped fire balloons on the mainland, and planned to bomb San Diego with chemical weapons. Germany snuck terrorists into New York and Florida. The Nazis designed their A-9 and A-10 rockets to reach the U.S., under the “Amerika Bomber” initiative. Also designed were new long-range bombers, including one, the Silbervogel, that could strike the U.S. from space. Hitler once said, “I shall no longer be there to see it, but I rejoice on behalf of the German people at the idea that one day we will see England and Germany marching together against America.” While an Axis invasion of the United States is really only speculation, it has some merit considering their modus operandi, plus an actual chance at success, unlike other claims.

19 years out of 220 is just 8.6% (we’ll use war-time years rather than total years, erring on the side of freedom).

Qualification 2 is harder to quantify. U.S. civilians in danger from foreign forces is a far more common event than the U.S. Constitution or government actually being in danger from foreign forces. We want to include dangers to American civilians both at home and overseas, and include not just prolonged campaigns but individual incidents like rescue missions. This will greatly expand the documented time the military spends “protecting freedom,” but such time is difficult to add up. Many military rescue operations last mere weeks, days, or hours. The Centre for Research on Globalization’s list focuses on major conflicts. We’ll need one that goes into detail on small-scale, isolated conflicts. We’ll want to look not just at the metric of time, but also the total number of incidents.

But first, we will use the CRG list and its year-based metric to consider Qualification 2. The following wars were meant, in some sense, to protect the lives of U.S. citizens at home and abroad. They do not meet the first qualification. Conflicts listed in Qualification 1 will not be repeated here. Five wars, representing 36 years, meet Qualification 2:

The Quasi-War (1798-1800): When the United States refused to pay its debts to France after the French Revolution, France attacked American merchant ships in the Mediterranean and Caribbean.

The Barbary Wars (1801-1805, 1815): The United States battled the Barbary States of Tripoli and Algiers after pirates sponsored by these nations began attacking American merchant ships.

The Anti-Piracy Wars (1814-1825): The U.S. fought pirates in the West Indies, Caribbean, and Gulf of Mexico.

World War I (1917-1918): The Great War nearly found itself in Qualification 1. After all, Germany under Kaiser Wilhelm II made serious plans, in the 1890s, to invade the United States so it could colonize other parts of Central and South America. During World War I, Germany asked Mexico to be its ally against the U.S., promising to help it regain territory the U.S. stole 70 years earlier. However, invasion plans evaporated just a few years after 1900, and Mexico declined the offer. The Great War appears here for the American merchant and passenger ships sunk on their way to Europe by German submarines (not just the Lusitania).

The War on Terror (1998, 2001-2017): It is very difficult to include the War on Terror here because, as everyone from Osama bin Laden to U.S. intelligence attests, it’s U.S. violence in the Middle East and Africa that breeds anti-American terror attacks in the first place. Our invasions and bombings are not making us safer, but rather less safe, by fueling radicalism and hatred. However, though this predictably endless war is counterproductive to protecting American lives, it can be reasonably argued that that is one of its purposes (exploitation of natural resources aside) and that killing some terrorists can disrupt or stop attacks (even if this does more harm than good overall), so it must be included.

36 years out of 220 is 16.4%. Together, it could be reasonably argued that 25% of U.S. “war years” were spent either protecting our constitutional rights from foreign dismemberment or protecting citizen lives, or some combination of both.

But we can also look at the total number of conflicts this list presents: 106. Four wars out of 106 is 3.8%, another five is 4.7%. Let’s again err on the side of freedom and split the Barbary and Terror wars into their two phases, making seven wars for 6.6%. Adding 3.8% and 6.6% gives us 10.4% of conflicts protecting freedom.
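The percentage arithmetic in the last few paragraphs can be double-checked with a quick calculation (a minimal sketch; the year and conflict counts are the CRG figures quoted above):

```python
# Year-based metric (CRG list): share of the 220 war years spent
# protecting freedom under each qualification.
war_years = 220
years_q1 = round(19 / war_years * 100, 1)    # Qualification 1 wars: 8.6
years_q2 = round(36 / war_years * 100, 1)    # Qualification 2 wars: 16.4
years_total = round(years_q1 + years_q2, 1)  # 25.0

# Conflict-based metric: share of the 106 listed conflicts, with the
# Barbary and Terror wars split into their two phases each
# (4 Qualification 1 wars, 7 Qualification 2 wars).
conflicts = 106
wars_q1 = round(4 / conflicts * 100, 1)    # 3.8
wars_q2 = round(7 / conflicts * 100, 1)    # 6.6
wars_total = round(wars_q1 + wars_q2, 1)   # 10.4

print(years_total, wars_total)  # 25.0 10.4
```

The only judgment call encoded here is the splitting of the two-phase wars, which, as noted, errs on the side of freedom.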

Any such list is going to have problems. What does it include? What does it leave out? Does it describe the motivation or justification for violence? Does it do so accurately? Should recurring wars count as one or many? Does the list properly categorize events? This list labels U.S. forces violating Mexican territory to battle Native Americans and bandits as repeated “invasions of Mexico.” If Mexican forces did the same to the U.S., some of us would call it an invasion; others might rephrase. And couldn’t these incursions into a single nation be lumped together into a single conflict? Conversely, the list lumps scores of U.S. invasions and occupations of nearly all Central and South American nations into a single conflict, the Banana Wars — something I take huge issue with. The solution to issues like these is either to create a superior list from scratch or to bring other lists into the analysis.

Let’s look at “Instances of Use of United States Armed Forces Abroad,” a report by the Congressional Research Service (CRS). It is a bit different. First, it includes not just major conflicts but small, brief incidents as well, and it’s smarter about lumping conflicts together (no Banana Wars, no Anti-Piracy Wars, though the U.S. incursions into Mexico to fight Native Americans and bandits are listed as one conflict). Thus, 411 events are documented. Second, even this count is too low, as the list begins in 1798 rather than 1776. Third, unlike the first list, it does not include wars with Native Americans. This list is highly helpful because the CRS is an agency of the Library of Congress, conducting research and policy analysis for the House and Senate, and thus its justifications for military action closely reflect official government opinion.

We will apply the same standards to this list as to the last. We’ll include the nine conflicts we studied above if the timeframe allows, as well as any events involving civilians, piracy, and counter-terrorism. We will thus filter the 411 events in this way:

– 38 incidents/wars that involved “U.S. citizens,” “U.S. civilians,” “U.S. nationals,” “American nationals,” “American citizens,” etc.

– 9 incidents/wars related to “pirates” and “piracy” (does not include the rescue of U.S. citizen Jessica Buchanan, already counted above, nor Commodore Porter’s vicious 1824 revenge attack on the civilians of Fajardo, Puerto Rico, who were accused of harboring pirates)

– 6 official conflicts: the Quasi-War (“Undeclared Naval War with France”), two Barbary Wars, the War of 1812, and two World Wars (the Revolution does not appear on this list due to its timeframe; the Anti-Piracy Wars are included above, the War on Terror below)

+ 1 Civil War (it must be added, as it is not included on this list because it did not involve a foreign enemy)

– 27 incidents/wars related to combating “terrorism” or “terrorists”

That gives us 81 events that match Qualifications 1 and 2. 81 out of 412 is 19.7% — thus about one-fifth of military action since 1798 in some way relates to protecting Constitutional freedoms here at home or the right to life and safety for U.S. civilians around the globe. Of course, were we to only look at Qualification 1, we would have but three events — the War of 1812, the Civil War, and World War II — that preserved or expanded lawful rights, or 0.7% of our wars since 1798.
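The same check works for the CRS tally (a minimal sketch, using the event counts itemized above):

```python
# CRS list: 411 documented events, plus the Civil War.
events = 411 + 1

# Events matching Qualifications 1 and 2, per the itemized tally:
# citizens/nationals, piracy, official conflicts, Civil War, terrorism.
matching = 38 + 9 + 6 + 1 + 27  # 81

share_both = round(matching / events * 100, 1)  # 19.7
share_q1 = round(3 / events * 100, 1)  # War of 1812, Civil War, WWII: 0.7

print(matching, share_both, share_q1)  # 81 19.7 0.7
```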

The CRS list does not break some incidents down into periods shorter than years, and documenting those that are by days, weeks, or months is an enormous chore for a later day. Thus the estimation for time spent defending freedom will have to come from the CRG list: 25% of the time the military is active, it is involved in at least one conflict that is protecting freedom. For some added information, there are 20 years on the CRS list, since 1798, without a new or ongoing incident. This is almost identical to the 21 years of peace since 1776 in the CRG analysis. So of the 219 years since 1798, we’ve spent 91% of our time at war, the same share as the CRG list since 1776 (or trimmed to 1798).

(A list created by a professor at Evergreen State College goes from 1890-2017 and has five years of peace. We’ve been at war 96% of the time since 1890. It lists 150 conflicts, with only 3 having to do with rescues or evacuations of Americans [2%], 11 having to do with the War on Terror in Arabia and Africa in 1998 and after 9/11 [7.3%], plus World War I [0.6%]. That’s 9.9% for Qualification 2. Throw in another 0.6% for World War II, and thus Qualification 1, and we have 10.5% of conflicts since 1890 protecting freedom. Because this list begins so late, however, we will not use it in our averaging. Doing so would require us to trim the other lists to 1890, cutting out the piracy era, the Revolution, the Civil War, etc.)

Averaging the percentages from the two lists relating to total conflicts gives us 2.3% for Qualification 1 and 15% for Qualification 2. 17.3% all together. Trimming the CRG list to begin at 1798 yields about the same result.

In sum, it could be reasonably asserted that the U.S. military protects our freedoms and lives in 17.3% of conflicts. (If we take out the War on Terror for its deadly counter-productivity, which I would prefer, that number drops to 10.8%, with 17% of war years spent defending American freedom.)

Even Better Than ‘Angels in the Outfield’

Remember the movie Angels in the Outfield? It’s the classic story of Roger, a foster kid who prays for God to help the Angels win the pennant so that his dad will come back. (Sounds like one truly twisted deal, but Roger’s dad wasn’t at all serious. If we’re being honest, Roger seems old enough to have known about figurative language.)

If your memory is as decrepit as the cheap VCR tape of this movie in the box in your basement, this image may help:

[Image: screenshot from Angels in the Outfield]

Jesus, Roger looks uncomfortable in this picture. I don’t remember him being on the verge of tears in this scene. This looks like the beginning of an episode of Law and Order: SVU. CHUNG-CHUNG.

This is the scene in which Roger and his best buddy J. P. meet the indelibly cheerful Angels manager George Knox, who grows from skeptic to believer about the whole angels-playing-baseball thing (Roger is the only one that can see them). When Roger does see one, he’s like:

[GIF: Roger’s hand motion]

That’s where that hand motion comes from if you ever see people (me) doing this during a baseball game. The Royals once used the theme music to the movie when someone hit a home run, and I could never understand why I was the only one at Kauffman Stadium doing this while it played.

Also: That moment you realize Roger was played by Joseph Gordon-Levitt of 500 Days of Summer, Inception, and The Dark Knight Rises.

[Image: Joseph Gordon-Levitt]

Angels in the Outfield is truly the greatest baseball movie of all time (bite me, Kevin Costner), therefore I in no way compare the Kansas City Royals to it casually. But without question, in every arena the Royals’ story rivals and surpasses Roger’s. This is such big news, I’m surprised more media attention hasn’t been paid to it.

 

KANSAS CITY’S PAIN IS GEORGE KNOX’S PAIN

[Clip: George Knox in the locker room]

George Knox hates to lose. Can any clip better represent the boiling rage lurking beneath the skin of every Royals fan, just waiting to detonate, through all the miserable seasons of the past years, when Kansas City was the laughingstock of Major League Baseball?

A clip of a nuke wouldn’t suffice. It has to be George Knox marching through a locker room of two dozen half-naked losers and absolutely destroying their fruit and meat platters. That is the pain Royals fans felt after every season – no, every game – before the Royals’ meteoric rise.

And this is Knox after becoming manager rather recently. Multiply this rage by 29 years, and you’ll understand Kansas City’s agony. There’s no comparison.

Even this bloody movie made us look like total twits. Why does this guy not slide? What is he doing?

[Clip: the Royals player who doesn’t slide]

 

MIRACLES CAN HAPPEN

Roger’s story is fictional, with fictional managers, ballplayers, and angels. At least, I hope angels don’t look like this:

[Image: an angel from the film]

Honestly, this angel looks like either the uncle you pray to God won’t sit next to you at Thanksgiving or the aunt that’s visibly ready to call your favorite music the work of Satan before you even tell her what it is. Not really sure which one at this point.

But the Royals’ story?

This isn’t a movie. And no players appear to defy physics as an angel lifts them into the air. It’s simply incredible baseball. It’s real life. That’s an important reason the Royals’ story is better.

[Image: Eric Hosmer, Kansas City Royals]

Consider last year: Riding Jeremy Guthrie’s 7-inning shutout to beat the White Sox 3-1 on September 26, clinching their first playoff berth in 29 years. Four days later, staging a roaring comeback against the Oakland A’s in the do-or-die American League wild card game, down 3-7 but leveling the game in the 9th inning, eventually winning 9-8 in the 12th, after nearly 5 hours of play.

Sweeping both the American League division and championship series, earning the most consecutive wins in MLB postseason history. Making it to Game 7 of the World Series against the San Francisco Giants, but experiencing the most painful of defeats.

And this year: Winning their first American League Central title since 1985 on September 24 against the Mariners. On the brink of elimination in Game 4 of the AL division series against the Astros, down 4 runs in the 7th, then smashing in 5 runs in the 8th inning and piling on more in the 9th to win the game 9-6. They won the series in the next game.

Winning Game 6 of the AL championship series versus the Blue Jays with Lorenzo Cain scoring from first base on Eric Hosmer’s single, and closer Wade Davis shutting down the Blue Jays’ comeback threat, runners on first and third.

And last night, Game 1 of the 2015 World Series, versus the New York Mets. Alcides Escobar’s inside-the-park homer, the first in the World Series since 1929, the year the Great Depression began. Winning 5-4 after 14 innings, the longest game in World Series history.

Could all this possibly be topped by the story of guys who only made it to the postseason with divine intervention in sparkling pajamas?

[GIF: an angel lifting a ballplayer into the air]

No. They’re cheaters.

Also, that’s Matthew McConaughey being picked up there. Swear to God. As he later said from the driver’s seat of a Lincoln, “Sometimes you’ve got to go back…”

Adrien Brody is also a ballplayer in this movie. McConaughey, Gordon-Levitt, Brody, Danny Glover, Tony Danza, Christopher Lloyd…seriously, is there anyone this film doesn’t have?

 

THE SMARMY SPORTSCASTER

It also has Jay O. Sanders, actor and concept art model for Mr. Incredible. He plays Ranch Wilder.

Roger and George Knox have to deal with Ranch Wilder, the “voice of the Angels,” who makes it clear throughout the film that he very much wants the Angels to lose. He hates George Knox and is constantly a Debbie Downer about the Angels’ postseason prospects.


Royals fans get Joe Buck.


Buck took a lot of heat during the 2014 World Series for what Royals fans perceived to be bias, in support of the Giants…and one pitcher in particular.

Ranch Wilder got fired. Buck is still going strong, back to call this 2015 World Series.

This just makes a better story. No one really seemed to mind Ranch Wilder’s Angel-bashing in the film. He was only fired because he left his mic on when he really went berserk.

But Kansas City’s story has more conflict, more passion and intrigue. Buck is back, and a lot of KC fans are enraged, enough to start petitions and even call the games themselves.

 

THAT ONE FAN THAT GETS A LOT OF SCREEN TIME AND NO ONE KNOWS WHY


Remember this guy? He’s that one fan in the crowd the movie focuses on, and likely the only human who has ever needed to professionally wax the sides of his neck.

He thinks Roger is crazy for seeing angels, accidentally sits on Christopher Lloyd’s angel character, takes a baseball in the mouth, and at one point screams, “Hemmerling for Mitchell?! Go back to Cincinnatiiiiiiii!” Classic quote.

Why is he always on screen? Why does he get so much attention? Why is he so obnoxious? In a way, he’s kind of the movie’s version of…of…


Marlins Man.

This mysterious and no doubt totally loaded figure has been spotted behind home plate throughout this postseason and the one in 2014, and works his way to other sports championships as well.

Always on screen, he is the one fan that gets any attention. He gets national attention! Yes, he donates a ton of money to charities, but what of the other 37,000 people in the stands? What about their stories? He leaves them in the dust.

It’s all an intentional thing. He picks his seat so he can be on camera. He loves to rep his completely irrelevant team, which has hopefully fired its graphic design staff by now.

Because he’s desperate as a toddler for attention, I think he successfully one-ups the blowhard from Angels in the Outfield. And anyone who disagrees with me is, to quote J. P., a “Nacho Butt.”

 

PUSHING THROUGH THE LOSS OF A PARENT

As mentioned, Roger is a foster kid. About two-thirds of the way into the movie, his deadbeat dad (the same one who said that if the Angels won the pennant he and Roger could “be a family again”) abandons Roger for good.

“Sorry, boy,” Dad of the Year says as Roger rushes up to him, excited to tell him about how well the Angels are doing. Dad pats Roger on the cheek and walks away, leaving Roger to try to croak out “Where are you going?” before he begins to weep.


If you’re a kid from a stable home watching this movie, it truly influences you, seeing someone your own age abandoned by his father. Not to mention that Roger’s mother died, as did J. P.’s dad. Their stories are fictional, yet you know in the back of your mind while watching that millions of children experience abandonment, foster care, or homelessness, or have parents who are deceased or in jail. The movie, unlike the vast majority of children’s films, makes you think about the suffering of others and how to persevere through pain.

And if a fictional story about this is powerful, how much more so is real life?

Sadly, three Royals lost a parent this season.

Mike Moustakas lost his mother Connie on August 9, while Chris Young lost his father Charles on September 26. As reported by The Kansas City Star, Young pitched the next day to honor his dad, and went 5 innings without giving up a hit.

Edinson Volquez pitched last night, in Game 1 of the World Series. His father Daniel died just before the game, and Volquez’s family requested that Royals manager Ned Yost not tell Volquez until after he pitched.


In other words, the world knew of Volquez’s father’s death before Volquez.

Through all this, the Royals have persevered. Moustakas said after the game, “For all the stuff that’s happened this year, to all of our parents…it has to bring you closer together.”

Eric Hosmer said, “It’s just another angel above, just watching us and behind us through this whole run.”

 

A HAPPY ENDING?

The Angels in the movie won the pennant (we’re kind of left to wonder about the World Series). Roger and his best friend J. P. get adopted by George Knox and live happily ever after.

I don’t know whether Ned Yost will adopt any players, or whether the Royals will finally, after 3 decades, win it all. But there is one thing I know to be true, one that applies to touching movies and real life alike:

“It could happen.”

ANGELS IN THE OUTFIELD, Milton Davis Jr., Danny Glover, Joseph Gordon-Levitt, 1994, (c)Buena Vista P