Guaranteed Income vs. Guaranteed Work

Living in a socialist society would mean awakening each workday and heading to your worker cooperative, while regularly visiting your voting place to help decide local and national policies. But it is more than that—and has to be. The State has a few important services to provide if the socialist dream of prosperity and dignity for all people is to be achieved.

What if, for instance, you cannot find a job? Just because all workplaces are democratic and share profits does not mean there will always be enough jobs when and where you need one. There is no room in a socialist nation for unemployment, poverty, homelessness, and so on, and thus some mechanism is needed to guarantee that we only see these horrors in history books. Every person, regardless of who they are or what work they do, should make enough to have a comfortable life—which requires a high minimum wage (mandated by law but inherent in worker ownership) and guaranteed access to an income. There are two paths to eradicating these horrors, stated succinctly by Dr. King: “We must create full employment, or we must create incomes.”[1] Guaranteed work or a guaranteed income. Either would be adequate, but each has positives and negatives to weigh.

Let’s first consider a guaranteed income, or universal basic income (UBI). All UBI entails is using tax revenue to send a regular check to each citizen, a simple redistribution of wealth to eradicate poverty and provide security during times of unemployment or underemployment. Its simplicity is a major advantage over guaranteed work.

UBI has been around for a while in various forms. Alaska has given $1,000-$2,000 a year to every resident without condition since 1982.[2] Hawaii may follow suit soon.[3] The Eastern Band of Cherokee Indians launched its own UBI in 1996, and today gives $10,000 a year to each of its members, which has helped reduce behavioral problems and crime.[4] From 2010 to 2016, Iran had the world’s first national UBI, giving each family the equivalent of $16,300 a year.[5] For one year, 2011, Kuwait gave $3,500 to each citizen.[6] In 2017, Macau, a region of China, began giving over $1,100 a year to each permanent resident.[7]

Trials in some of India’s villages that began in 2011 show huge success in improving children’s education, access to food and healthcare, and the number of new business startups.[8] Other past small-scale experiments were conducted in the U.S., Canada, Brazil, Namibia, and elsewhere. Models range from everyone getting the same amount to poorer recipients getting more and richer ones getting less (which even some conservatives support in the form of the Earned Income Tax Credit or even a negative income tax[9]). Studies indicate that when people have this financial security they spend more time taking care of family, spend more time focusing on education, and are able to win higher raises at work because they have a more serious option to leave, leverage they did not have before.[10] Contrary to myth, giving poor people cash tends to leave alcohol and tobacco consumption unchanged or even reduce it, likely because paying for healthcare, education, and so forth is suddenly an option and people want to direct their resources there.[11] In 2017, experiments with UBI launched or were preparing to launch in various places in Finland, Canada, Kenya, Uganda, the Netherlands, Scotland, Spain, and the U.S.[12]

“A guaranteed annual income could be done for about twenty billion dollars a year,” Dr. King estimated in 1967. “If our nation can spend thirty-five billion dollars a year to fight an unjust, evil war in Vietnam, and twenty billion dollars to put a man on the moon, it can spend billions of dollars to put God’s children on their own two feet right here on earth.”[13] The question of priorities in spending is as relevant as ever. The cost of an American UBI would depend on similar factors: how much would be guaranteed, whether everyone would receive it (if the rich do not, then it’s not technically UBI, but no matter), and so on. $10,000 a year for all 240 million U.S. adults is $2.4 trillion; $15,000 a year for the poorest 50 million people is $750 billion; and so forth. Of course, the net cost would be lower, as giving tens of millions or hundreds of millions of people greater purchasing power would put the economy into overdrive—that money would be spent, enriching co-ops and thus increasing State tax revenues (this is also why economic research overwhelmingly shows higher minimum wages do not lead to higher unemployment or prices; extra money is spent at businesses, boosting their profits, balancing the system out[14]). “People must be made consumers by one method or the other,” King said when discussing guaranteed income or work.[15] One study estimated giving each American adult $1,000 a month would grow the economy 12-13% over eight years, or by $2.5 trillion, if employment remained steady.[16] It is important to keep the cyclical nature of this system in mind while considering costs. UBI is expensive, but it also increases tax revenue.
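The cost arithmetic above is simple enough to verify directly. A minimal sketch, using only the chapter’s illustrative figures (the function name and scenarios are mine, not a policy proposal):

```python
def ubi_gross_cost(annual_amount, recipients):
    """Gross annual cost of a basic income: the amount times the recipients.
    The net cost would be lower once increased spending raises tax revenue."""
    return annual_amount * recipients

# The two illustrative scenarios from the text:
universal = ubi_gross_cost(10_000, 240_000_000)  # $10,000 for all 240M U.S. adults
targeted = ubi_gross_cost(15_000, 50_000_000)    # $15,000 for the poorest 50M

print(f"${universal / 1e12:.1f} trillion")  # $2.4 trillion
print(f"${targeted / 1e9:.0f} billion")     # $750 billion
```

The same one-line multiplication covers any variant: change the amount or the pool of recipients and the gross figure follows.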

Now, a major concern is that UBI will cause people to stop working, hurting the economy and leaving the worker-owners stuck supporting the easy lifestyle of the lazy. As we have seen, at some point automation will essentially make labor a thing of the past, highlighting the need for both collective ownership of the machines and State-provided incomes. So it seems obvious that at some point we will have to give up our agitation over people who do not work (rather, poor or middle income people who do not work; critics seem less concerned about the wealthy who enjoy work-free lives). We won’t be able to absurdly base people’s value on how many hours they work or what sort of work they do. Everyone will spend their days as they see fit, some choosing to design skyscrapers (even though machines could do it for them) because they enjoy it, others doing nothing all day because they enjoy that more. But until machines can serve our every need, the point is a valid one, as some people will indeed prefer not to have a job while being supported by the labor of others. (On the positive side, there would be decreased competition for jobs among those seeking them.) This wouldn’t bother all worker-owners, but it would be reality. In five experiments on guaranteed income conducted in the U.S. and Canada, the decrease in the labor participation rate ranged from zero to 30%.[17] However, most studies show no effect or only a small decline.[18] Donald Rumsfeld and Dick Cheney ran UBI experiments in a few cities for President Nixon, and found work rates remained steady.[19] A study of Iran’s UBI revealed some people worked a bit less, but some actually worked more.[20] India’s basic income grants led to more labor, as did Uganda’s.[21] Namibia saw no negative effects on labor participation.[22] Naturally, the decline depends on how much is received, but it is predictable that UBI will mean some people will choose not to work.
Importantly, with so much to do to rebuild and maintain our society, is UBI wholly practical yet? Will enough citizens volunteer for all the unpleasant tasks that make a society function, such as repaving roads or waste disposal, if a high income is guaranteed? Would necessary tasks remain undone because Americans would rather pursue other things? These nagging questions will spur some to throw out the whole idea, insist the monthly amount must be low enough to force people to get jobs, or propose a higher UBI for people willing to do unpleasant work. All told, UBI would have to be implemented strategically, perhaps beginning at a level that eradicates poverty and slowly increasing as humanity approaches the point where machines can take care of all undesirable duties.

Guaranteed work is a more complex system, but avoids the concerns associated with lower labor participation. In fact, there would be a job for all. “If Government in our present clumsy fashion must go on,” Ralph Waldo Emerson said in 1843, “could it not assume the charge of providing each citizen, on his coming of age, with a pair of acres, to enable him to get his bread honestly?”[23] In a society offering guaranteed work, federal tax revenue could be transferred to municipalities to create salaries for unemployed or underemployed people. City governments would use the funds to launch public work projects to improve their communities (what projects would be a local democratic decision, of course). So if a city has 50,000 people looking for work at the start of the year, it might receive $2 billion, to offer a $40,000 salary to each person. If the U.S. had 8 million unemployed, it would cost $320 billion to employ them—half our modern military budget. Prioritization is easy enough. Dr. King said, “If America does not use her vast resources of wealth to end poverty and make it possible for all of God’s children to have the basic necessities of life, she too will go to hell.”[24] As with UBI, however, broadening purchasing power will reduce the net cost through increased tax revenues.
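The same back-of-the-envelope arithmetic applies to guaranteed work. A sketch of the salary figures in the paragraph above (the names are mine; real programs would add project and administrative costs):

```python
def jobs_guarantee_salaries(job_seekers, salary):
    """Total salary outlay for a public jobs guarantee.
    Excludes equipment and materials, which the text budgets separately."""
    return job_seekers * salary

city_grant = jobs_guarantee_salaries(50_000, 40_000)   # one city's allotment
national = jobs_guarantee_salaries(8_000_000, 40_000)  # 8 million unemployed

print(f"${city_grant / 1e9:.0f} billion")  # $2 billion
print(f"${national / 1e9:.0f} billion")    # $320 billion
```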

Workers can be hired to rebuild our crumbling inner cities, install solar panels on homes, plant trees, tutor struggling students, spend time with neglected seniors—literally any task that betters society in some way. Because not all positive tasks require physical labor, the program would be inclusive of many persons with disabilities and even seniors who want to work (though it is obviously not intended to replace Social Security or disability insurance). Cities will need more funds than just those for salaries, however, sums dependent on the type of project. Some projects will be relatively cheap, like cleaning trash off the streets; others more expensive, like renovating a school. These extra funds could likewise be pegged to a city’s unemployment level. Using their allotted monies, cities could contract with local co-ops to supply equipment and raw materials for necessary ventures. Public workers would also receive help securing employment at a cooperative, where higher incomes, democracy, and ownership can be enjoyed, so that the public sector doesn’t continually grow. Rather than shrink the private sector, however, guaranteed work programs can actually expand it—fewer unemployed persons means more spenders, benefiting businesses and allowing them to expand.[25]

Co-ops could also receive federal funds, allowing them to take on more worker-owners. This needn’t be a permanent relationship. The State could fund a position for a year, giving a co-op time to absorb a new member. Cooperatives would get another worker, and thus greater productivity and more profits, at no cost, in return for guaranteeing the worker a permanent job and ownership after the year ended. Co-ops could further receive government contracts to do certain projects, as businesses do today, with increased employment stipulations. Alternatively, cities could organize unemployed persons into new cooperatives, helping fund the endeavor during the first few years, until it became self-sustaining (whether for-profit or nonprofit). If there were a need for greater production in a certain sector, from agriculture to social work, that need could be met with new co-ops.[26]

There is much precedent for guaranteed work. Generally speaking, employment by the State is something we take for granted. Critics of paying citizens to work often have no qualms over paying citizens to be soldiers or police officers. If one can be called necessary for protection, the other can be called necessary for poverty’s demise. Local governments across the U.S. employ 14.1 million people, over half of them in education, the rest in healthcare, fire and policing, financing and administration, transportation, library services, utilities, environment and recreation—and public works.[27] (States employ another 5 million, and the federal government employs over 2.5 million civilians and over 2 million active and reserve military personnel.[28]) More specifically, during the Great Depression, President Roosevelt’s Works Progress Administration, Civil Works Administration, and Civilian Conservation Corps hired some 15.5 million people to build roads, bridges, schools, hospitals, museums, and zoos; to garden, plant trees, fight fires, reseed land, save wildlife, and sew; and to undertake art, music, drama, education, writing, and literacy projects. While not without challenges, public works saved many families from hunger, strengthened the consumer class and thus the economy, and beautified the country.[29] Roosevelt actually included “the right to a useful and remunerative job” in his 1944 Second Bill of Rights.[30] Similar federal initiatives have occurred since, such as the Comprehensive Employment and Training Act of the 1970s, which employed 750,000 people by 1978.[31] (In countless other programs, like the Public Works Administration of the 1930s, the U.S. government indirectly created jobs by paying businesses to tackle huge projects. Construction of the Interstate Highway System in the 1950s and 60s entailed the federal government funding the states, which either expanded their public workforces or contracted with private companies.)
Today, cities like Reno, Albuquerque, Tempe, Fort Worth, Chicago, Denver, Portland, and Los Angeles offer jobs to the homeless to help them out of the social pit. Cities elsewhere in the world do the same.[32]

Governments around the world run programs similar to our New Deal. India is pouring billions into the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA), which gives, or rather tries to give, residents of a few poor, rural states one hundred days of guaranteed work annually.[33] Some 50 million households, 170 million people, are involved—the largest public works program in world history.[34] Other nations, especially in Europe, have made the government the employer of last resort at various times.[35] So have South Africa and Argentina. Argentina’s Jefes de Hogar program paid heads of households that included children, persons with disabilities, or pregnant women to do community service, construction, and maintenance work. Two million Argentinians, 5% of the population, were employed at its height.[36] South Africa’s Expanded Public Works Program includes government jobs in infrastructure, tourism, environment, early childhood education, and more.[37] As in the U.S., local, state, and national governments around the world may not offer guaranteed work but do offer public works jobs. These efforts and countless others have dealt serious blows to unemployment and poverty. “We must develop a federal program of public works, retraining, and jobs for all,” Dr. King said, “so that none, white or black, will have cause to feel threatened.”[38]

One criticism of guaranteed work is that unemployment dropping too low will herald inflation. It is said that if unemployment is eliminated, businesses will have to compete for scarcer workers, driving wages up, which will drive up the cost of everything else to compensate, which will lead to higher wage demands, all in an unending upward wage-price spiral. This is not actually as grave a concern as one might imagine. First, the correlation between unemployment and inflation is not terribly strong: sometimes they move in opposite directions, sometimes they move together.[39] It is easy to see why more employment does not necessarily mean higher prices. Increased profits from more consumers spending more money help firms absorb higher wage costs without raising prices. Again, even drastic increases in the minimum wage create only tiny increases in prices, making the wage increase plainly worth it.[40] To stay competitive, firms have every incentive to expand production, and thus sales, or take a bite out of profits rather than raise prices on consumers. Further, more spending means more demand, and greater production to meet it creates a downward pressure on prices. Many economists have argued persuasively that, contrary to William Phillips, Milton Friedman, and others, full employment can be achieved without inflation.[41]

Second, if upward wage pressure became so great it could not be absorbed, and prices rose, there is reason to predict this would be a brief phase, not an eternal spiral. It is not likely the upward pressure on wages would last. Say the public worker salary was set at $38,000 a year (we’ll say that is also the minimum wage). If you worked for a capitalist firm making $38,000, you would likely be able to convince the capitalist to give you a raise—otherwise you could leave, guaranteed to make the same in the public sector. You win a raise and are then making $40,000. But if you continue pushing over time, the potential loss from ultimate failure (being let go, replaced by someone cheaper, someone from the public sector wanting to make more) rises—it is at $2,000 now and will only get bigger.[42] So there is a disincentive that keeps higher wage demands down. The capitalist may get rid of you, leaving you worse off financially than you were. A guaranteed job gives people more power and leverage, but not so much as to create an inflationary disaster; with limits on the upward pressure of wages come limits on price increases, which tend to be tiny proportions of income increases anyway. At a cooperative, as raises are determined democratically, the majority would have to vote repeatedly both to give raises to all and to raise prices on consumers—this seems at least as unlikely as a single capitalist continuously doing so.
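The disincentive described above can be made concrete. A small sketch using the text’s illustrative figures (the $38,000 fallback salary is the chapter’s assumption):

```python
def downside_if_let_go(current_wage, fallback_wage=38_000):
    """What a worker risks by pushing too hard: the gap between the wage
    won through raises and the guaranteed public-sector fallback."""
    return current_wage - fallback_wage

# The gap, and thus the deterrent against further demands, grows with each raise:
for wage in (38_000, 40_000, 44_000, 50_000):
    print(f"wage ${wage:,}: potential loss ${downside_if_let_go(wage):,}")
```

At $38,000 the worker risks nothing by demanding more; by $50,000 the potential loss has grown to $12,000, which is the self-limiting pressure the text predicts.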

Third, more production of goods and services through the public sector, like increased purchasing power, increases supply and thus pulls prices down.[43] Fourth, the various effective tactics the State uses to control inflation will still exist under socialism.[44] In practice, at least regarding partial guaranteed employment and public works ventures, skyrocketing inflation is a nonissue. The Reserve Bank of India found that the MGNREGA program did not raise food prices.[45] We know that Argentina’s inflation was extremely high in 2002, when its works program began, but declined and remained relatively low past 2007, when the program ended, until 2013.[46] South Africa’s ongoing program began in 2004; inflation climbed above 10% by 2009, during economic crisis, but then fell and remained low through 2018.[47] The four points above also answer concerns about UBI and inflation. Further, studies of Alaska, Kuwait, Lebanon, Mexico, India, and African nations have at least shown that a small UBI does not cause inflation.[48]

Whether UBI, guaranteed work, or a combination of both (guaranteed work followed by UBI, for example, so no one is stuck doing pointless work for a city while co-op members get rich off machines that can do all tasks) is implemented, one of these strategies will be necessary as a safety net for those struggling to find a job. With it we can eradicate need and want forever. “Overcoming poverty is not a task of charity, it is an act of justice,” Nelson Mandela said. “Like slavery and apartheid, poverty is not natural. It is man-made and it can be overcome and eradicated by the actions of human beings.”[49] Either system would have other significant effects on society, too, such as replacing many older forms of welfare, freeing people from the fear of quitting a job they do not enjoy, giving people greater ability to strike—a tactic that may not entirely disappear with worker ownership, as some worker-owners may be so opposed to a majority decision they walk out—and more.[50]



[1] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).








[9] Milton Friedman, Free to Choose.




[13] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).


[15] Martin Luther King, “Where Do We Go From Here?” (speech, 11th Annual Southern Christian Leadership Conference Convention, Atlanta, GA, August 16, 1967).











[26] Alec Nove, Essential Works of Socialism, 555.


























For the Many, Not the Few: A Closer Look at Worker Cooperatives

After pointing out the authoritarian hierarchy of the capitalist workplace—the capitalist chief at the top wielding ultimate decision-making power and owning the wealth created by the workers—John Stuart Mill envisioned instead the “association of laborers themselves on terms of equality, collectively owning the capital with which they carry on their operations, and working under managers elected and removable by themselves.”[1]

Socialistic worker cooperatives are the humane alternative to capitalist businesses. In a worker cooperative, you become a company owner soon after being hired. All workers share equal ownership of the firm, from custodian to spokesperson. This translates to equality in power (all decisions are made democratically) and in wealth (company shares and incomes are the same for everyone). Just like that, the exploitation of labor by and authoritarian power of the greedy few are consigned to the dustbin of history, replaced by cooperation, equity, and democracy. Workers control their own destinies, deciding together how they should use the profits created by their collective labor, be it improving production through technology, taking home bigger incomes, opening a new facility, hiring a new worker, lowering the price of a service, producing something new, and all other conceivable matters of business.

With the disappearance of hierarchy and exploitation comes the elimination or great alleviation of other crimes of capitalism we’ve explored. When worker-owners invest in new technologies that increase productivity and require less human labor, they won’t fire themselves—they can make more money and/or work fewer hours, bettering their standard of living and spending more time with family or doing things they enjoy. They will not outsource their own jobs to Bangladesh, either. Their greater wealth will reduce poverty, their greater purchasing power easing the throes of recession and depression (as would less competition, were cooperatives to federate). If co-ops were adopted on a national or global scale, the stock market might disappear, or at least substantially change, as the workers might want to keep all the shares of their company. Transparency and democracy should make a firm less likely to commit the kinds of profit-driven abuses against people, planet, and peace, because there are more players influencing decisions; the wider the field, the less likely everyone would feel comfortable with, say, poisoning our biosphere to make a buck. This is not to say that laws prohibiting the production of vehicles that run on fossil fuels would be unnecessary. They would still be needed. Rather, it is simply to say there would be more room for dissent in a workplace and a greater chance of a more moral or safe alternative being adopted. Socialism is not a cure for all our problems, just many of them.

Some criticisms of worker cooperatives can be easily dismissed with simple philosophical and theoretical arguments. There’s the desire of capitalists and would-be capitalists to have all the power and hoard the wealth. Well, this is about being more ethical than that, having the empathy to support the common good, not selfish ends. As Dr. King said, “True compassion is more than flinging a coin to a beggar; it comes to see that an edifice which produces beggars needs restructuring.”[2] There’s the consternation at the thought of a majority of workers with little to no experience with a task overruling a worker with experience and knowledge of said task. What does the graphic designer know of welding processes and how to best use or improve them? How can we let younger, newer, brasher salespeople make policy for the veteran salesperson? Well, first, it’s important to acknowledge that both fresh blood and odd ideas from outside a field can at times prove beneficial, a spark of innovation and positive change. Second, many worker cooperatives make it a point to train all workers in multiple or all areas of the business, lessening the knowledge gap with education, training, and staff development. Some even rotate jobs! (On-the-job training and shared knowledge is a key factor for success in co-ops where most founders have no business experience.[3]) Third, a cooperative environment encourages workers to listen carefully to those with greater experience, knowing that deference will be reciprocated later. Fourth, most business decisions, if found to be ineffective or harmful, can be reversed before a total collapse of the company, just like in business today. Lastly, even if a shortsighted, unknowledgeable majority ran the cooperative—their cooperative—into the ground because they stubbornly refused to listen to the wisdom of the experts, there is nevertheless something satisfactory about the democratic nature of this failure. 
Under capitalism, the stupidity of a single capitalist can destroy a business, wiping out jobs for everyone. Under socialism, the workers democratically determine their own destiny. It may be a disaster, but it’s your disaster, collectively speaking. But, as we will see, cooperatives are in no way more likely to fold.

Cooperative work is as old as humanity itself, as we have seen. Worker cooperatives in their modern form have existed around the world since the Industrial Revolution began and capitalism took off, that is, before Marx’s writings.

The U.S. has a rich history of cooperative enterprises that continues to this day.[4] No, they are not always perfect. While some exemplify precisely the socialist vision, others could be more egalitarian or democratic (for example, many make use of elected managers or executives with slightly larger salaries, which can be easier with larger companies; others are too slow at granting ownership rights). But they are all a giant step up from capitalist firms. The U.S. has an estimated 300-400 worker cooperatives, everything from the 4th Tap Brewing Co-Op in Texas to Catamount Solar in Vermont, employing 7,000 workers (the average size is 50 people) and earning $400 million in revenue each year. (If you’ve heard it’s more like tens of thousands of cooperatives making billions, such inflated numbers are only possible by including credit unions, “purchasing co-ops,” independent farmers aiding each other through “producer co-ops,” Employee Stock Ownership Plans, and other structures that, while valuable, don’t exactly qualify.) Some 26% of them were once capitalist-structured businesses.[5] Converting is a great way to preserve a business and protect people’s livelihoods; when small business capitalists retire, the vast majority of the time they neither find a buyer nor pass ownership on to family, so the enterprise simply ends and workers are thrown out.[6] Cooperatives represent all economic sectors, and have annual profit margins comparable to top-down businesses—the idea that they are less efficient is a myth (not that efficiency has to be more important than democracy and equality anyway). Some 84% of the workers are owners at a given time.[7] Many firms are members of the U.S. Federation of Worker Cooperatives, a growing organization. Because people are put before profits, most cooperatives have a particular focus on community improvement and development, for example the Evergreen Cooperatives in Ohio.
One study found food co-ops reinvest more money from each dollar in the local economy.[8]

America’s largest co-op, the Cooperative Home Care Associates in New York, has grown to 2,300 employees, about half of whom are owners (to become an owner, one pays $1,000 in installments). It is 90% owned by minority women. With $64 million in revenue in 2013, the CHCA provides wages of $16 an hour (twice the market rate), a highest- to lowest-paid worker ratio of 11:1, flexible hours, and good insurance. Its governing board is elected; profits are shared. The company has a turnover rate that is a quarter of the industry standard. Some workers left behind minimum wage jobs and are now making $25 an hour. People say they stay because the co-op lifted them out of poverty and as owners they have decision-making power.[9] Ralph Waldo Emerson wrote in The Conduct of Life (1860), “The socialism of our day has done good service in setting men to thinking how certain civilizing benefits, now only enjoyed by the opulent, can be enjoyed by all.”[10] People who join the Women’s Action to Gain Economic Security (WAGES) co-ops in California see their incomes skyrocket 70-80%.[11]

As one might expect, workers are more invested in a company when they are also owners, which translates into better business outcomes. Though they are not without challenges, a review of the extant research reveals co-ops have the same or greater productivity and profitability than conventional businesses, and tend to last longer; workers are more motivated, satisfied, and enjoy greater benefits and pay (with no evidence of increased shirking), information flow improves, and resignations and layoffs decline.[12] They are more resilient during economic crises.[13] Many studies come from Europe, where cooperatives are more widespread and more data has been collected. In Canada, worker cooperatives last on average four times longer than traditional businesses.[14] Their survival rates are 20-30% better.[15] Research on France’s cooperatives revealed that worker-owned enterprises were more productive and efficient, and over a four-year period cooperative startups actually outnumbered capitalistic startups.[16] French capitalist-turned-cooperative businesses have better survival rates than capitalist businesses by significant margins, 10-30%.[17] Analyzing cooperatives across the U.K., Canada, Israel, France, and Uruguay, one study found that cooperatives had similar survival rates to traditional businesses over the long term, but better chances of making it through the crucial early years. 
Italy and Germany experience the same.[18] Italian co-ops are 40% more likely to survive their first three years; Canadian co-ops are about 30% more likely to survive their first five years and 25% more likely their first ten; in the U.K., twice as many cooperatives survive the first five years as traditional firms.[19] In Italy’s Emilia Romagna region, an economic powerhouse of that nation and Europe, two-thirds of residents belong to worker cooperatives.[20] In Spain, a study of a retail chain that has both top-down stores and cooperative ones revealed the latter have much stronger sales growth because worker-owners have decision-making power and a financial stake.[21] In the U.S., much research has been done on businesses with Employee Stock Ownership Plans, which are called “employee-owned” because employees are given stock, though most are neither democratic nor totally owned by the workers (Publix and Hy-Vee are examples). ESOPs are only one-third as likely to fail as publicly traded businesses, suffer less employee turnover, and are more productive.[22] One rare study of American plywood worker cooperatives found they were 6-14% more efficient in terms of output than conventional mills.[23] When the economy declined, conventional mills cut worker hours and employment, whereas the worker-owners agreed to lower their pay to protect hours and jobs.[24] Given the benefits of worker cooperatives, places like New York City, California, and Cleveland are investing in their development, recognizing their ability to lift people out of poverty and thus strengthen a consumer economy, plus offer an opportunity to alleviate the systemic barriers to work and wealth that minorities, former felons, and others face in the United States.[25] This is no small matter. The egalitarian structure and spirit of solidarity inherent in co-ops can help win equality for the oppressed and disadvantaged.
While by no means perfect, co-ops tend to give women more equitable pay and access to more prestigious positions.[26] Sixty percent of worker-owners in new American co-ops in 2012 and 2013 were people of color.[27] Ninety percent of worker-owners at one of Spain’s co-ops are people with disabilities.[28] Italian cooperatives are more likely to hire people who have been unemployed for long periods, often a major barrier to work.[29]

Spain is home to one of the world’s strongest cooperative enterprises, no surprise to those who know Spain’s Marxist history.[30] (In the 1930s, George Orwell marveled at Barcelona, writing that his visit “was the first time that I had ever been in a town where the working class was in the saddle. Practically every building of any size had been seized by the workers… Every shop and cafe had been collectivized… Waiters and shop-walkers looked you in the face and treated you as an equal.”[31]) Mondragon Cooperative Corporation is a federation of over one hundred socialistic workplaces around the globe and in many economic sectors, from retail to agriculture. It is one of Spain’s largest corporations and the largest cooperative experiment in the world, with over $10 billion in annual revenue and 74,000 workers. Worker-owners hold shares of the business and can run for a spot in the General Assembly, the federation’s democratic body of power, which elects a Governing Council. However, each cooperative is semi-autonomous, having its own, smaller democratic body. The manager-worker pay ratio is capped at 6:1.[32] In rough economic times, worker-owners decide democratically how much their pay should be reduced or how many fewer hours they should work, and managers take the biggest hits. This stabilizes an entity during recession, avoiding layoffs. So do job rotation and retraining. Further, Mondragon has the ability, as a federation, to transfer workers or wealth from successful cooperatives to ones that are struggling.[33] Thanks to these flexibilities, it is nearly unheard of for a Mondragon cooperative to go out of business. When it does happen, the federation finds work for the displaced workers at other member co-ops.[34] During the Great Recession, Mondragon’s workforce held steady, and the Spanish region where it is headquartered was one of the least troubled.[35] The enterprise, however, has major faults.
It actually owns more subsidiary companies than cooperatives—capitalistic, exploitative businesses in poor countries where workers are not owners. Also egregious: less than half of all Mondragon employees are actually owners.[36] Nevertheless, the business is a step in the right direction, indicating socialistic workplaces can function at large scale. (In fact, on average co-ops tend to have more employees than top-down firms.[37]) Mondragon is a member of the International Co-operative Alliance, the leading global association for the movement.

There are 11.1 million worker-owners worldwide.[38] When we include people who work for cooperatives but are not owners, the total rises to 27 million.



[1] Mill, Principles of Political Economy

[2] King, “Beyond Vietnam,” April 4, 1967, New York City Riverside Church


[4] Curl, For All the People






[10] Emerson, The Conduct of Life





















[31] Orwell, Homage to Catalonia


[33]; Putting Democracy to Work, by Frank Adams and Gary Hansen, p. 145





[38] (p. 25, Table 1). If we add in people who are self-employed but members of “producer cooperatives” that support them (farmers and fishermen, for instance, especially in Asia), 280 million people are involved in cooperative employment. Bringing these workers into the analysis would also swell the U.S. numbers mentioned earlier.

Yes, Evolution Has Been Proven

Evolution is a simple idea: that over time, lifeforms change. In a small timespan, changes are subtle yet noticeable; in a massive one, changes are shockingly dramatic — descendants look nothing like their ancestors, becoming what we call new species.

Changes occur when genes mutate during the imperfect reproduction process, and are passed on if the mutation helps an individual creature escape predators, find food or shelter, or attract a mate, allowing it to more successfully reproduce than individuals without its new trait (natural selection). Some mutations, of course, hurt chances of survival or have no impact at all.

Naturalist and geologist Charles Darwin provided evidence for this idea in his 1859 book On the Origin of Species and other works, and over the century and a half since, research in multiple fields has consistently confirmed Darwin’s idea, irreparably damaging religious tales of the divine creation of life just as it exists today.


The Myths of Man

While many people of faith have incorporated scientific discoveries such as the age of the earth and evolution into their belief systems, many have not. Hardline Christian creationists still believe humans and all other life originated 6,000 years ago, and that a “Great Flood” essentially restarted creation 4,000 years ago, with thousands of “kinds” of land animals (tens of thousands of species) rescued on Noah’s ark.

The logical conclusion of the story is utterly lost on believers. There are an estimated 6.5 million species that live on land today, perhaps 8-16 million total species on Earth (that’s a conservative estimate; it could be 100 million, as most of our oceans remain unexplored). People have catalogued 2 million species, discovering tens of thousands more each year. Put bluntly, believing that in four millennia tens of thousands of species could become millions of species requires belief in evolution at a pace that would make Darwin laugh in your face.

To evolve the diversity of life we see today, much time was needed: more than 4,000 years, on a planet older than 6,000 years. We know the Earth is 4.5 billion years old because radioactive isotopes in terrestrial rocks (and in meteorites) decay at consistent rates, allowing us to count backward. Fossil distribution, modern flora and fauna distribution, and the shape of the continents first indicated the continents were once one, and satellites proved the continents are indeed moving apart from each other at two to four inches per year, again allowing us to count backward (Why Evolution is True, Jerry Coyne). When we do so, we do not stop counting in the thousands.
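This “counting backward” is simple arithmetic, easy to sketch in a few lines. In this illustrative Python snippet the ocean width is my own rough assumption (about 4,800 km for the Atlantic); the two-to-four-inch yearly drift rate comes from the measurements cited above:

```python
# Counting backward from seafloor spreading. The ~4,800 km Atlantic
# width is an illustrative assumption; the 2-4 inch/year rate is the
# satellite-measured range cited in the text.
atlantic_width_cm = 4_800 * 1000 * 100   # km -> m -> cm

for inches_per_year in (2, 4):
    rate_cm_per_year = inches_per_year * 2.54   # inches -> cm
    years = atlantic_width_cm / rate_cm_per_year
    print(f"{inches_per_year} in/yr -> roughly {years / 1e6:.0f} million years")
```

Either rate lands the answer in the tens of millions of years, nowhere near a 6,000-year-old planet.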

Naturally, criticisms of myths can be waved away with more magic, which is why it’s mostly futile to tear them apart, something I learned after wasting time doing so during my early writing days. Perhaps God decided to make new species after the flood. Perhaps he in fact made millions of species magically fit on a boat roughly the size of a football field, like a bag from Harry Potter. It’s the same way he got pairs of creatures on whole other continents to, and later from, the Middle East; how one family, through incest, rapidly evolved into multiple human races immediately after the flood (or did he make new human beings, too?); how a worldwide flood and the total destruction of every human civilization left behind no evidence. The power of a deity — and our imagination — can take care of such challenges to dogma. But it cannot eviscerate the evidence for evolution. Science is the true arrow in mythology’s heel.

Still, notions of intelligent design bring up many curious questions, such as why a deity would so poorly design, in identical ways, the insides of so many species (see below), why said deity would set up a world in which 99% of his creative designs would go extinct, and so on.

It seems high time we set aside ancient texts written by primitive Middle Eastern tribes and listened to what modern science tells us. And that’s coming from a former creationist.


It Wasn’t Just Darwin


Charles Darwin, 1809-1882. via Britannica

Creationists attempt to discredit evolution by attacking the reliability and character of Darwin, but forget he was just one man. Darwin spent decades gathering the best evidence for evolution of his day, showed for the first time its explanatory power across disciplines (from geography to embryology), and brought his findings to the masses with his accessible books. But many who came before him deepened our understanding, and his, of where diverse life came from and of how the Earth wasn’t quite so young as the Bible claims. For example:

  • In the sixth century B.C., the Greek philosopher Anaximander studied fossils and suggested life began with fishlike creatures in the oceans.
  • James Hutton argued in the 1700s that the age of the earth could be calculated based on an understanding of geologic processes like erosion and the laying down of sediment layers.
  • In 1809, Jean-Baptiste Lamarck theorized that physical changes to an individual acquired during its life could be passed to offspring (a blacksmith builds strength in his arms…could that lead to stronger descendants?).
  • By the 1830s, Charles Lyell was putting Hutton’s ideas to work, measuring the rate at which sediments were laid, and counting backward to estimate Earth’s age.
  • Erasmus Darwin, Charles’ grandfather, suggested “all warm-blooded animals have arisen from one living filament,” with “the power of acquiring new parts…delivering down those improvements by generation.”
  • Alfred Wallace theorized natural selection independently of and at the same time as Charles Darwin!

In other words, if it wasn’t Darwin it would have been Wallace. If not Wallace then someone else. Like gravity or the heliocentric solar system, the scientific truth of evolution could not remain hidden forever.

Creationists also seize upon Darwin’s unanswered questions and use them to argue he “disproved” or “doubted” the validity of his findings. For example, Darwin, in his chapter on “Difficulties of the Theory” in On the Origin of Species, said the idea that a complex eye “could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree.”

Emphasis on seems. He went on to say:

When it was first said that the sun stood still and the world turned round, the common sense of mankind declared the doctrine false… Reason tells me, that if numerous gradations from an imperfect and simple eye to one perfect and complex, each grade being useful to its possessor, can be shown to exist, as is certainly the case; if further, the eye ever slightly varies, and the variations be inherited, as is likewise certainly the case; and if such variations should ever be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, cannot be considered real.

In other words, the evolution of the eye is possible, and there is no real difficulty in supposing this given the other evidence he had found. Darwin knew he was not the end of the line. He made predictions concerning future discoveries, and supposed that other scientists would one day show how eyes could develop from non-existence to simple lenses to complex eyes, as they indeed have. It began with cells more sensitive to light than others. Biologists believe, in the words of Michael Shermer (God Is Not Great, Hitchens), that there was

Initially a simple eyespot with a handful of light-sensitive cells that provided information to the organism about an important source of the light; it developed into a recessed eyespot, where a small surface indentation filled with light-sensitive cells provided additional data on the direction of light; then into a deep recession eyespot, where additional cells at greater depth provide more accurate information about the environment; then into a pinhole camera eye that is able to focus an image on the back of a deeply-recessed layer of light sensitive cells; then into a pinhole lens eye that is able to focus the image; then into a complex eye found in such modern mammals as humans.

Earth has creatures with no eyes, creatures with “a handful of light-sensitive cells,” and all the other stages of eye development, right up to our complex camera eye. Given this, there is no reason to believe the evolution of the eye is impossible. As creatures evolved from lower lifeforms, there were slight variations in their ability to detect light, which proved useful for many, which helped creatures survive, which passed on the variations to offspring. This is how life can go from simple to complex over the generations. See The Evidence for Evolution, Alan Rogers, pp. 37-49, for a detailed study.

While the natural process has yet to be observed by humans — it takes eons, after all — we are able to create computer models that mimic beneficial mutations. Dan-Eric Nilsson and Susanne Pelger at Lund University in Sweden, for instance, made a simulation wherein a group of light-sensitive cells on top of a retina experienced random mutations in the tissues around them. The computer was programmed to keep mutations that improved vision in any way, no matter how small. So when the tissue pulled backward, for example, forming a “cup” for the primitive eye, this was preserved because it was an improvement. After 1,829 mutations (400,000 years), the simulation had a complex camera eye (Coyne). Computer models are a great tool for showing how evolution works. Simulations aren’t programmed to build something complex, only to follow the simple laws of natural selection. Check out Climbing Mount Improbable by Richard Dawkins for more.
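The logic of such simulations can be sketched in a few lines. This is my own toy illustration, not the Nilsson-Pelger model itself: “vision quality” is reduced to a single number, mutations are small random nudges, and selection keeps only the nudges that improve it:

```python
import random

# Toy natural-selection loop (illustrative only, not the actual
# Nilsson-Pelger eye model): random mutations arise blindly, and
# selection preserves any change that improves "vision quality",
# however small.
random.seed(42)  # reproducible run

def evolve(generations=10_000, step=0.01):
    quality = 0.0   # start: a flat patch of light-sensitive cells
    kept = 0
    for _ in range(generations):
        mutation = random.uniform(-step, step)  # undirected change
        if mutation > 0:                        # selection keeps improvements
            quality += mutation
            kept += 1
    return quality, kept

quality, kept = evolve()
print(f"kept {kept} of 10,000 mutations; vision quality rose to {quality:.1f}")
```

No step ever aims at a camera eye; complexity accumulates simply because harmful changes are discarded and helpful ones are banked.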


Strange Coincidences


Homologous limbs. via University of California Museum of Paleontology

While the study of homologous structures is fascinating, most won’t impress creationists. Humans, bats, birds, whales, and other creatures all have a humerus, radius, ulna, carpals, metacarpals, and phalanges in their forelimbs, with simple variations in size and sometimes number, suggesting they are related via a common ancestor yet have changed, evolved. But the creationist can simply say a sensible deity created them with similar structures. 

Yet there are some coincidences and oddities that no serious person would call intelligent design, and in fact scream common ancestry.

Modern whales have tiny leg bones inside their bodies that are detached from the rest of the skeleton. We humans have three muscles under our scalps that allow some of us to wiggle our ears; they do nothing for our hearing but are the very same muscles that allow other animals to turn their ears toward sounds. Goosebumps, now worthless, are vestiges of an era when our ancestors had fur. Our sinus cavities, behind our cheeks, have a drainage hole on top — our ancestors walked on all fours, and thus the location made sense, allowing better drainage. Cave salamanders have eyes but are totally blind. Koalas, which spend most of their time in trees, have pouches for their young that open upside-down — their ancestors were diggers on the ground, so this was useful to protect young from dirt and rock thrown about, but now it threatens to let koala joeys plunge from trees (The Greatest Show on Earth, Richard Dawkins).

Even more astonishing, within the neck of Earth’s mammals, the vagus/recurrent laryngeal nerve, instead of simply going the short distance from your brain to your voicebox, extends from the brain, goes down into your chest, twists around your aortic arch by the heart, and then travels back up to the voicebox! It’s three times longer than necessary.

Incredibly, this same lengthy, senseless detour is found in other mammals, even the towering giraffe, in which it is fifteen to twenty feet longer than needed (evolutionist Richard Dawkins has dissected one on camera). In fish, which evolved earlier than us, the nerve connects the brain to the gills in a simple, straightforward manner (Coyne). This indicates our common ancestors with fish did not have this issue, but our common ancestors with other, later species did. As our mammalian ancestors evolved, the nerve was forced to grow around other developing, growing, evolving structures.

Human males have another interesting detour. As explained by Dawkins, the vas deferens, the tube that carries sperm from testes to penis, is also longer than necessary — and indeed caught on something. The vas deferens leaves the testes, travels up above the bladder, and loops around the ureter like a hanger on a department store rack. It then finally finds its target, the seminal vesicle, which mixes secretions with the sperm. Then the prostate adds more secretions, finalizing the product (semen), which is ejaculated via the urethra. The vas deferens could go straight to the seminal vesicle (under instead of around the bladder and ureter), but it doesn’t.

This same trait is found in other male mammals, like pigs. Creatures like fish again do not have this mess. Our ancestors had testes within the body, like many modern species, and as the testes descended toward the scrotum, toward the skin for cooler temperatures, the wiring got caught on the ureter. Perhaps one could see an intelligent (?) designer having to jam some things together to make them work — a detour for the vas deferens here, another for the recurrent laryngeal nerve there — in one species. But in mammals across the board? How does that make more sense than all this being the imperfect byproduct of mindless evolution over time?


via Laryngopedia


via Anatomy-Medicine

And it doesn’t end there. Vertebrates (species that have a backbone) like us happen to have eyes with retinas installed backward. Rogers writes:

The light-sensitive portion of the retina faces away from the light… The nerves, arteries, and blood vessels that serve each photocell are attached at the front rather than the back. They run across the surface of the retina, obscuring the view. To provide a route for this wiring, nature has poked a hole in the retina, which causes a substantial blind spot in each eye. You don’t notice these because your brain patches the image up, but that fix is only cosmetic. You still can’t see any object in the blind spot, even if it is an incoming rock.

But cephalopods (squid, octopi, and other advanced invertebrates) have a more sensible set-up, with wiring in the back (Rogers). Guess what kind of creature appeared on this planet first? Yes, the invertebrates. These coincidences and bad engineering suggest that as life evolved to be more complex there were greater opportunities for messy tangles of innards.

The best creationists can do is declare there are good reasons for these developments, that evolutionists “fail to demonstrate how this detour…disadvantages the male reproductive system” for example, which is completely beside the point. There were indeed biological reasons behind the development of these systems, which served as an advantage, not a hindrance (breaking the vas deferens or recurrent laryngeal nerve to let other organs grow and evolve would not be good for survival). The point is that if some species share this trait, it hints at a common ancestor.

So does embryology, the study of development in the womb. The field of genetics, which we explore further in the next section, helped us discover dead genes, or pseudogenes, in lifeforms. These are genes that are usually inactive but carry traits that, if expressed, would be viewed as abnormal. In light of evolution it makes sense that we still have them. And sometimes dead genes wake up.

Humans have just under 30,000 genes, over 2,000 of them pseudogenes. We have dead genes for growing tails, for instance. We all have a coccyx, four fused vertebrae that make up the end of our spine — four vertebrae that are larger and unfused in primates, forming the base of their tails (Coyne). Not only are some humans born with an extensor coccygis, the muscle that moves the tail in primates but is worthless in us because our vertebrae are fused; some people are born with a tail anywhere from one inch to one foot long! It has to be surgically removed.


Arshid Ali Khan, born in India in 2001, was worshiped as a reincarnation of the Hindu monkey god Hanuman. He had his tail removed in 2015. via Mirror

In fact, all human embryos begin with a fishlike tail, which is reabsorbed into the body around week seven. We develop a worthless yolk sac that is discarded by month two, a vestige of reptilian ancestors that laid eggs containing a fetus nourished with yolk. We develop three kidneys, the first resembling that of fish, the second resembling that of reptiles; these are also discarded, leaving us with our third, mammalian version. From month six to eight, we are totally covered in a coat of hair (lanugo) — primates develop their hair at the same stage, only they keep it. These marvels exist in other life, too. Horse embryos begin with three-toed hooves, then drop to one; they descended from creatures with more than just one toe. Occasionally, a horse is born with more than one hoof, or toe, on each foot (polydactyl horse), similar to its ancestors. Birds carry the genes necessary to grow teeth, minus a single vital protein; they descended from reptiles with teeth. Dolphin and whale embryos have hindlimb buds that vanish later; baleen whale embryos begin to develop teeth, then discard them (Coyne).


Premature infants still have some of their lanugo coat. They will soon lose it. via Mipediatra

Quite interesting that God would give us genes to grow tails and fur.

Our fetal development, you likely noticed, actually mimics the evolutionary sequence of humanity. This is most noticeably true with our circulatory system, which first resembles that of fish, then that of amphibians, then that of reptiles, then finally develops into our familiar mammalian circulatory system (Coyne). Strange coincidences indeed.

But there are more. As one would expect if evolution occurred, fossils of creatures found in shallower rock more closely resemble species living today; fossils found in deeper, older sedimentary layers are more different than modern life. This pattern has never been broken by any fossil discovery, and supports Darwin’s idea (Coyne).

Similarly, consider islands. The species found on islands consistently resemble those on the nearest continent. This at first does not sound surprising, as one would predict that life (usually birds, insects, and plant seeds) that colonized islands would do so from the closest landmass. But the key word is “resemble.” What we typically see are a few species native to a continent (the ancestors) and an explosion of similar species on the nearby islands (the descendants). Hawaii has dozens of types of honeycreepers (finches) and half the world’s 2,000 types of Drosophila fruit flies; Juan Fernandez and St. Helena are rich in different species of sunflower; the Galapagos islands have 14 types of finches; 75 types of lemurs, living or extinct, have been documented on Madagascar, and they are found nowhere else; New Zealand has a remarkable array of flightless birds; and Australia has all the world’s marsupials, because the first one evolved there. To the evolutionist, a tight concentration of similar species on islands (and individual islands having their own special species) is the result of an ancestral explorer from a nearby landmass whose descendants thrived in a new environment unprepared for them (a habitat imbalance), reproducing and evolving like crazy. Thus a finch on a continent has a great number of finch cousins on nearby islands — like her but not the same species (Coyne). Darwin himself, still a creationist at the time, was shocked by the fact that each island in the Galapagos, most in sight of each other, had a slightly different type of mockingbird (Rogers).

To the creationist, God simply has an odd affinity for overkill on islands.


Shared DNA

In the 20th century, geneticists like Theodosius Dobzhansky synthesized Darwin’s theory with modern genetics, showing how the random, natural mutation of genes during the copying of DNA changes the physiology of lifeforms (should that altered state help a creature survive, it will be passed on to offspring). The study of DNA proved once and for all that Darwin was right. By mapping the genetic code of Earth’s lifeforms, scientists determined — and continue to show — that all life on Earth shares DNA.

DNA is passed on through reproduction. You get yours from your parents. You share more DNA with your parents and siblings than you do with your more distant relatives. In the same way, humans share more DNA with some living things than with others. We share 98% with chimps, 85% with zebra fish, 36% with fruit flies, and 15% with mustard grass. By “share,” we mean that 98% of DNA base pairs (adenine, guanine, cytosine, and thymine) sit in precisely the same positions in human DNA as in chimp DNA. (These four nucleobases could be traded between species; there is no difference between them — we’re all made of the same biochemical stuff.)
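What “98% shared” means mechanically is easy to show. A hedged sketch: the sequences below are invented for illustration (not real human or chimp DNA), and real genome comparisons first require aligning the sequences:

```python
# Percent identity of two aligned DNA stretches: the fraction of
# positions holding the same base. Sequences are made up for illustration.
def percent_identity(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

species_a = "ATGGCCATTGTAATGGGCCGCTGA"
species_b = "ATGGCCATTGTAATGGGCCGTTGA"  # differs at a single position
print(f"{percent_identity(species_a, species_b):.1f}% identical")  # -> 95.8% identical
```

Run across whole genomes, the same position-by-position tally yields the percentages above: the fewer the mismatches, the more recent the common ancestor.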

It is not surprising that creatures similar to us (warm-blooded, covered in hair, birthing live young, etc.) are closer relatives than less similar ones. Evolutionary biologists used to rely on appearance and behavior (such as gills or reproductive method) to suppose creatures were related, like the trout and the shark or the gorilla and the human being. DNA now tests those groupings, confirming many (gorilla DNA is far more similar to human DNA than, say, fruit fly DNA is) and correcting others: despite appearances, trout DNA is actually closer to buffalo DNA than to shark DNA, because trout and buffalo share a more recent common ancestor among the bony fishes.

But all life shares DNA, no matter how different (for a deeper analysis, see Rogers pp. 25-31, 86-92). That simple truth proves a common forefather. A god would not have to make creations with chimp and human DNA nearly the same, all the nucleobases laid out in nearly the same order; why do so, unless to suggest that evolution is true? When mapped out by genetic similarity, we see exactly what Darwin envisioned: a family tree with many different branches, all leading back to a common ancestor.  


Our tree of life. Click link in text above to zoom. via Evogeneao


Transitional Forms

Darwin predicted we would find fossils of creatures with transitional characteristics between species, for example showing how lifeforms moved from water to land and back again. Unfortunately, the discovery of such fossils has done nothing to end the debate over evolution. 

For instance, as transitional fossils began to accumulate, it became even more necessary to attack scientific findings on Earth’s age. If you can keep the Earth young, evolution has no time to work and can’t be true. So, as mentioned, creationists insist radiometric dating is flawed. Rocks cannot be millions of years old, thus the fossils encased within them cannot be either. This amounts to nothing more than a denial of basic chemistry. Rocks contain elements that occur as isotopes, some of which decay into other elements at constant rates. So we can look at an isotope and plainly see how close it is to transformation. We know the rate, and thus can count backward. If researchers had only a single isotope to use, perhaps creationists would have a prayer at calling this science into question. But rubidium becomes strontium. Uranium changes to lead, potassium to argon, samarium to neodymium, rhenium to osmium, and more (see Rogers pp. 73-80 to explore further). This is something anyone can devote study to: grab some rocks and measure for yourself. All creationists can do is say we aren’t positive that “the decay rate has remained constant”! Can you imagine someone claiming, in Isaac Newton’s time, that gravity’s acceleration wasn’t 9.8 meters per second squared? Anyone can make stuff up!
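The “count backward” step is ordinary math. A minimal sketch, with the caveat that the daughter-to-parent ratio below is a made-up illustrative number (chosen to land near Earth’s measured age), and that real laboratories use more careful isochron methods:

```python
import math

# Radiometric dating: a parent isotope with half-life T decays into a
# daughter isotope. If a rock now holds D daughter atoms per P parent
# atoms (assuming no daughter was present at formation), its age is
#   t = (T / ln 2) * ln(1 + D/P)
def age_in_years(half_life_years: float, daughter_per_parent: float) -> float:
    return half_life_years / math.log(2) * math.log(1 + daughter_per_parent)

RB87_HALF_LIFE = 48.8e9   # rubidium-87 -> strontium-87, ~48.8 billion years
ratio = 0.066             # hypothetical rock: 6.6 Sr-87 atoms per 100 Rb-87 atoms
print(f"age: ~{age_in_years(RB87_HALF_LIFE, ratio) / 1e9:.1f} billion years")  # -> ~4.5 billion years
```

Swap in uranium-lead or potassium-argon half-lives and independently measured ratios, and the different clocks agree, which is precisely why the method is trusted.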

(You’ll find most denials of evolution rest on denials or misunderstandings of the most basic scientific principles. Some creationists insist evolution is false because it violates the Second Law of Thermodynamics, which states that the energy available for work in a closed system will decrease over time — that things fall apart. So how could simple mechanisms become more complex? How could life? What they forget is that the Earth’s environment is not a closed system. The sun provides a continuous stream of new energy. Similarly, some believe in “irreducible complexity,” the idea that complex systems with interconnected parts couldn’t evolve because one part would have no function until another evolved, therefore the first part would never arise, and thus neither could the complex system. But the “argument from complexity” fails as usual. [Other arguments, such as the “watchmaker” and “747” analogies, are even worse. Analogy is one of the weakest forms of argument because it inappropriately pretends things must be the same. No, a watch cannot assemble itself. That does not mean life does not evolve. Analogies fighting evidence are always doomed.] Biologists have discovered that parts can first be used for other tasks, as was determined for the bacterial flagellum, the unwise centerpiece of creationist Michael Behe’s skepticism. Independent parts can evolve to work together on new projects later on. Rogers writes:

Many hormones fit together in pairs like a lock and key. What good is the lock without the key? How can one evolve before the other? Jamie Bridgham and his colleagues studied one such pair and found that the key evolved first — it formerly interacted with a different molecule. They even worked out the precise mutations that gave rise to the current lock-and-key interaction.

A part of this process is sometimes scaffolding, where parts that helped form a complex system disappear, leaving the appearance that the system is too magical to have arisen. The scaffolding required to build our bridges and other structures is the obvious parallel.)

Let’s consider the fossils humanity has found. Tiktaalik was a fish with transitional structures between fins and legs. “When technicians dissected its pectoral fins, they found the beginnings of a tetrapod hand, complete with a primitive version of a wrist and five fingerlike bones… [It] falls anatomically between the lobe-finned fish Panderichthys [a fish with amphibian-like traits], found in Latvia in the 1920s, and primitive tetrapods like Acanthostega [an amphibian with fish-like traits], whose full fossil was recovered in Greenland not quite two decades ago.” Tiktaalik had both lungs protected by a rib cage and gills, allowing it to breathe in air and water, like the West African lungfish and other species today. Its fossil location was actually predicted, as researchers knew the age and freshwater environment in which such a missing link would have to appear (Coyne).

Ambulocetus had whale-esque flippers with toes (Rodhocetus is similar). Pezosiren was just like a modern manatee but had developed rear legs. Odontochelys semitestacea was an aquatic turtle with teeth. Darwinius masillae had a mix of lemur traits and monkey traits. Sphecomyrma freyi had features of both wasps and ants. Archaeopteryx was more bird-like than other feathered dinosaurs (that is, feathered reptiles), yet not quite like modern birds. Its asymmetrical feathers suggest it could fly. Microraptor gui, a dinosaur with feathered arms and legs, could likely glide. Other feathered dinosaurs were found fossilized sleeping with their head tucked under their forearm or sleeping on a nest of eggs, just like modern birds (Coyne; see also Dawkins pp. 145-180).

Australopithecus afarensis, Australopithecus africanus, Paranthropus, Homo habilis, Homo erectus, and many more species had increasingly modern human characteristics. Less and less like a primate, closer and closer to modern Homo sapiens. Fossils indicate increasing bipedality (walking upright on two legs), smaller jaws and teeth, increasingly arching feet, larger brains, etc.


A: chimp skull. B-N: transitional species from pre-human to modern human. via Anthropology

It doesn’t stop there, of course. Evolution can be seen in both the obvious and the minuscule differences between species.

See for example “From Jaw to Ear” (2007) and “Evolution of the Mammalian Inner Ear and Jaw” (2013). It was theorized that three important bones of a mammal’s middle ear — the hammer, anvil, and stirrup — were originally part of the jaw of reptilian ancestors (before mammals existed). In modern mammals there is no connecting bone between the jaw and these three ear bones, but if there were an evolutionary transition from reptilian jaw bone to mammalian ear bone, fossils should show transitional forms. And they do: paleontologists have found fossils of early mammals where the same bones are used for hearing and chewing, as well as fossils where the jaw bones and ear bones are still connected by another bone.

Creationists have a difficult time imagining how species could evolve from those without wings to those with, from those that live on land to water-dwellers, from aquatic lifeforms back to land animals, and so on, because they believe intermediary, transitional traits would be no good at all, could not help a creature survive. “What good is half a wing?”

Yet today species exist that show how transitional traits serve creatures well. Various mammals, marsupials, reptiles, amphibians, fish, and insects glide. It is easy to envision how reptiles could have evolved gliding traits followed by powered flight over millions of years. Or consider creatures like hippos, which are closely related to and look like terrestrial mammals but spend almost all their days underwater, only coming ashore occasionally to graze. They mate and give birth underwater, and are even sensitive to sunburn. Give it eons, and couldn’t such species change bit by bit to eventually give up the land completely? The closest living genetic relatives of whales are in fact hippos (Coyne). And finally, what of the reverse? What of ocean creatures that head to land? Crocodiles can gallop like mammals (up-down spine flexibility) as well as walk like lizards (right-left spine flexibility; see Dawkins). The mangrove rivulus, the walking catfish, American eels, West African lungfish, four-eyed fish, snakeheads, grunions, killifish, the anabas, and other species leave the waters and come onto land for a while, breathing oxygen in the air through their skin or even lungs, flopping or slithering or squirming or walking to a new location to find mates, food, or safety. Why is it so difficult to imagine a species spending a bit more time on land with each generation until it never returns to the water?

“Half a wing” is not a thing. There are only traits that serve a survival purpose in the moment, like membranes between limbs for gliding. Traits may develop further, they may remain the same, they may eventually be lost, all depending on changes in the environment over time. Environmental factors (food sources, mating options, predators, habitability) drive evolutionary changes differently for all species. That’s natural selection. When some members of a species break away from the rest (due to anything from mudslides to migration to mountain range formation), they find themselves in new environments and evolve differently than the friends they left behind. Coyne writes, “Each side of the Isthmus of Panama, for example, harbors seven species of snapping shrimp in shallow waters. The closest relative of each species is another species on the other side.” Species can change a little or change radically, unrecognizably, but either way they can be called a new species — in fact, unable to reproduce with their long-lost relatives, because their genes have changed too greatly. That’s speciation.

There is no question that the fossil record starts with the simplest organisms and, as it moves forward in time, ends with the most complex and intelligent — all beginning in the waters but not staying there. Single-cell organisms before multicellular life. Bacteria before fungi, protostomes before fish, amphibians before reptiles, birds before human beings.

If they wish, creationists can believe the fossil record reflects the chosen sequence of a logical God, even if it does not support the Judeo-Christian creation story (in which birds appear on the same “day” as creatures that live in water, before land animals; the fossil record shows amphibians, reptiles, and mammals appearing before birds — and modern whales, being descendants of land mammals, don’t appear until later still, until after birds, just 33 million years ago). Yet they must face the evidence and contemplate what it indicates: that a deity created fish, then later fish with progressively amphibious features, then later amphibians; that he created reptiles, then later reptiles with progressively bird-like features, then birds; and so forth. No discovery has ever contradicted the pattern of change slowly documented since Darwin. God is quite the joker, laying things out, from fossils to DNA, in a neat little way to trick humans into thinking we evolved from simpler forms (note: some creationists actually believe this).

Yes, the believer can simply claim these were all their own species individually crafted by God, with no ancestors or descendants who looked or acted any different. The strange fact that we have birds that cannot fly and mammals in the oceans that need to come up to the surface for air doesn’t prompt the kind of critical thinking one might hope for. It’s all just a creative deity messing with animals!


Watching Evolution Occur


Renee, an albino kangaroo at Namadgi National Park, Australia. via Telegraph

Most creationists are in fact quite close to accepting evolution as true.

First, they accept that genes mutate and can change an individual creature’s appearance. They know, for instance, about color mutations. We’re talking albinism, melanism, piebaldism, chimeras, erythristics, and so on.

Second, most creationists accept what they call “microevolution”: mutations help individuals survive and successfully reproduce, passing on the mutation, changing an entire species generation by generation in small ways, but of course not creating new species. They accept that scientists have observed countless microevolutionary changes: species like tawny owls growing browner as their environments see less snowfall, Trinidad guppies growing larger, maturing slower, and reproducing later when predators are removed from their environments, green anole lizards in Florida developing larger toepads with more scales to escape invaders, and more, all within years or decades. They understand evolution is how some insects adapt to pesticides and some pathogens, like HIV and the tuberculosis bacterium, adapt to our drugs over time, and how we human beings can create new viruses in the lab. They acknowledge that humanity is responsible, through artificial selection, or selective breeding, for creating so many breeds of dogs with varying appearances, sizes, abilities, and personalities (notice the greyhound, bred for speed by humans, closely resembles the cheetah, bred for speed by natural selection). In the same way, we’ve radically changed crops like corn and farm animals like turkeys (who are now too large to have sex), and derived cabbage, broccoli, kale, cauliflower, and brussels sprouts from a single ancestral plant, to better sate our appetites, simply by selecting individuals with traits we favor and letting them reproduce.


Wild banana (below) vs. artificially selected banana. via NBC News

The evidence presented thus far should push open-minded thinkers toward the truth, but for those still struggling to make the jump from microevolution to evolution itself, we are not done yet. The resistance is understandable given that small changes can easily be observed in the lab or nature, but large changes require large amounts of time — thousands, millions of years — and thus we mostly (but not entirely) have to rely on the evidence from DNA, fossils, embryology, and so on. Here are some points of perspective that can bridge the gap between small changes and big ones.

1. Little changes add up. If you accept microevolution, you accept that species can evolve to be smaller or bigger, depending on what helps them survive and reproduce. Scott Carroll studied soapberry bugs in the U.S. and observed some colonizing bigger soapberry bushes than normal; he predicted they would also grow larger, as larger individuals would be more successful at reaching fruit seeds. Over the course of a few decades, the bugs’ “beak” length grew 25%. That’s significant. Now imagine what could theoretically be done with more time. As Coyne writes, “If this rate of beak evolution was sustained over only ten thousand generations (five thousand years), the beaks would increase in size by a factor of roughly five billion…able to skewer a fruit the size of the moon.” This is unlikely to happen, but it shows how small changes can compound into dramatic results. Imagine traits other than size — all possible traits you can think of — changing at the same time, and evolution doesn’t sound so impossible.
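Coyne’s startling figure is just compound growth. Here is a quick sketch of the arithmetic; the 100-generation count for “a few decades” is an assumption for illustration, consistent with Coyne’s two-generations-per-year rate (ten thousand generations in five thousand years):

```python
# Coyne's beak extrapolation as compound growth. The 100-generation
# figure for "a few decades" is an assumption for illustration.
observed_growth = 1.25        # beak length grew 25%
observed_generations = 100    # assumed: ~2 generations/year over decades

# Per-generation growth factor implied by the observation
per_generation = observed_growth ** (1 / observed_generations)

# Sustain that tiny rate for 10,000 generations
extrapolated = per_generation ** 10_000

print(f"{extrapolated:.2e}")  # on the order of five billion
```

A growth rate of barely 0.2% per generation, invisible in any single lifetime, multiplies into a factor of billions when sustained, which is the whole point of the passage.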

2. Genes are genes. This relates closely to the point above. If some genes can mutate, why can’t others? Genes determine everything about every creature. People who believe in microevolution accept that genes for size or color can change, but not genes for where your eyes are, whether you’re warm- or cold-blooded, whether you have naked skin or a thick coat of fur, whether you have a hoof or a hand, and so on. But there is no scientific basis whatsoever for this dichotomy of the possible. It’s simply someone claiming “These genes can mutate but not these, end of story” to protect the idea of intelligent design. Genes are genes. They are all simply sequences of nucleotides. As far as we know, no gene is safe from mutation.


Octogoat, a goat with eight legs, born in Kutjeva, Croatia. via ABC News

3. Mutations can be huge. We’ve seen how humans can have tails, but we also see “lobster claw hands,” rapid aging, extra limbs, conjoined twins, and other oddities. Consider other mutations: snakes with two heads, octopuses with only six arms, ducks with four legs, cats with too many toes. For the common fruit fly, the antennapedia mutation means legs grow where the antennae are supposed to be! Dramatic mutations are possible. Survival is possible. Passing on new, weird traits is possible. With evolution, sometimes groups with new traits totally displace and eliminate the ancestral groups; sometimes they live side-by-side going forward. If you came across a forest and discovered one area was occupied by two-headed snakes and another by single-headed snakes, all other traits being the same, wouldn’t you be tempted to call them different species? Declare something new had arisen on Earth?

4. We are currently watching evolution occur. Scientists have observed speciation. They’ve taken insects, worms, and plants, put small groups of them in abnormal environments for many generations, and then seen they can no longer reproduce with cousins in the normal environments because they have evolved. It’s easy to create new species of fruit flies in particular because their generations are so short. Evolution for other species is typically much slower, but significant changes are being observed.

Say you were instead on the African savanna and came upon two groups of elephants. They are the same but for one startling difference: one group has no tusks. Like two-headed snakes, what a bold difference in appearance! Should we classify them as different species or the same? (Technically, they aren’t different species if they can still produce offspring together, but in the moment you aren’t sure.) Well, African elephants are increasingly being born without tusks. After all, those without are less likely to be killed by poachers for ivory. This is natural selection at work. Could not a changing environment and millions of years change more? Size, color, skin texture, hair, skeletal layout, teeth, and all other possible traits determined by all other genes?

Next, take a remarkable experiment involving foxes launched by Dmitry Belyaev and Lyudmila Trut in the Soviet Union in the late 1950s, which Trut is still running to this day. No, we can’t watch a species for 500,000 years to see dramatic evolution in action. But 60 years gives us something.

At the time, biologists were puzzled as to how dogs evolved to have different coats than wolves, since they couldn’t figure out how dogs could have inherited those genes from their ancestors. Belyaev saw silver foxes as a perfect opportunity to find out. He believed that the key factor being selected for was not morphological (physical attributes) but behavioral: more specifically, tameness.

In other words, Belyaev wanted to see if foxes would undergo changes in appearance if they evolved different behaviors. So Belyaev and Trut set about taming wild silver foxes.


Wild silver fox. via Science News

They took their first generation of foxes (which were only given a short time near people) and simply allowed the least aggressive to breed. They repeated this with every generation. They had a control group that was not subjected to selective breeding.

The artificial selection of course succeeded in changing fox behavior. The foxes became much more open to humans, whining for attention, licking them, wagging their tails when happy. But there was more:

A much higher proportion of experimental foxes had floppy ears, short or curly tails, extended reproductive seasons, changes in fur coloration, and changes in the shape of their skulls, jaws, and teeth. They also lost their “musky fox smell.”

Spotted coats began to appear. Trut wrote that skeletal changes included shortened legs and snouts as well. Belyaev said they started to sound more like dogs (Dawkins). Geneticists are now seeking to isolate the genes related to appearance that changed when selectively breeding for temperament.

Belyaev was right. And his foxes, through evolution, came to look more and more like dogs. This is the same kind of path that some wolves took when they evolved into dogs (less aggressive wolves would be able to get closer to humans, who probably started feeding them, aiding survival; tameness increased and physical changes went with it).

If such changes can occur in just 60 years, imagine what evolution could do with a hundred million years.


Dr. Lyudmila Trut with domesticated silver fox. via WXXI


In the Beginning

It’s true, scientists are still unsure how life first arose on Earth. Because it remains an enduring mystery with little hard evidence, scientists who offer hypotheses and speculations openly acknowledge them as such. Note that’s a big difference from evolution, which scientists speak about confidently due to the wealth of evidence.

But one professor at MIT believes that far from being unlikely, nonliving chemicals becoming living chemicals was inevitable.

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat… When a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

Researchers have discovered lipids, proteins, and amino acids beneath the seafloor, suggesting the chemical interaction between the mantle and seawater could produce the building blocks of life. From there, time and proper conditions could give rise to the first self-replicating molecule. Evolution would then continue on, spending billions of years developing the diverse flora and fauna we see today (a single cell leading to complex life under the right conditions should not be so shocking; as J.B.S. Haldane said, “You did it yourself. And it only took you nine months”).

Determining precisely how the first cell arose is the next frontier of evolutionary biology, and it is exciting to be here to witness the journey of discovery. New findings and experiments will wipe away “watchmaker” arguments used against the first cell. They will once again crush the “God gap,” the bad habit of the faithful to fill gaps in our scientific knowledge with divine explanations. I imagine in our lifetime someone will finish what Stanley Miller began with his famous 1950s experiment, which tried to recreate the Earth’s early conditions and create life itself.

Yet lack of knowledge concerning the beginning of life in no way hurts the case for evolution. Evolution is proven, as definitively as the fact that the Earth orbits the Sun.

Why I Am Not a Communist (Nor an Anarchist)

Having criticized the authoritarian communist states that arose in the 20th century, in particular the Bolsheviks in Russia for crushing worker power, and having also explored the basic tenets of anarchism (and how it is the father of the blasphemous bastard child that is anarcho-capitalism/libertarianism), I wanted to devote some time to musing over the merits of communism and anarchism relative to socialism.

While all anti-capitalist, these ideologies are not the same and should not be confused. I therefore include basic outlines (leaving out many different subtypes of each) before considering their relative advantages and downsides. I attempt to present each in their most ethical, idealized form (most free, most democratic, and so forth). Criticisms of ideologies should not be mistaken as disrespect for my Marxist comrades who think differently.

Communism destroys capitalism from the top down. The government, as an instrument of the people, owns all workplaces and organizes the economy and the workers according to a central plan that meets citizen needs. Under this system, competition can be wholly and more easily eliminated, making the enormous pressure to put profits over people a thing of the past. Wasteful and redundant production goes away with it, meaning more workers and resources for more important tasks that build a better society (for example, no more energy and billions spent on advertising, instead diverted to education). Further, the national wealth can be easily divided up among the people, public sector salaries enriching all.

However, communism entails enormous challenges. It surely requires giving up the full freedom to choose your line of work – if your community or national plan only allows for a certain number of bookstores or bookstore workers, there may not be room for you. You would be rejected upon applying with the local or national government to open a new bookstore (as you would surely have to do for a plan, and thus communism, to function) or upon applying for a job at an established bookstore. Under communism, workers are supposed to “own” their workplaces because they “own” the State, but this is a rather indirect form of control that leaves some people wanting. You may have options regarding the work you do, but you will have to sacrifice your interests for the sake of the plan.

Of course, as long as you don’t find yourself under authoritarian communism, you would help decide the plan, at the ballot box. But how much would you help? That raises a second challenge: can communism function without representative government (or a worse concentration of power)? A common notion is that the workers, the people, would elect members of their worker councils to participate in the design and execution of the national plan (or elect representatives from their geographic community, as is done in politics today). So if you worked in auto manufacturing while waiting for a bookstore job to open up, you would run or elect someone for the honor and task of representing the American Auto Workers Council on the National Planning Committee. The representatives, using a broad array of data on what goods and services are needed where, and what resources and workers will be needed to create and distribute them, would craft a central plan for a certain number of years.

Can this enormous power be socialized further? We understand the risks of representative governance – concentrated power is more easily influenced and corrupted, and doesn’t give people a direct say over their destinies. We could see to it that the people have a direct up or down vote on the plan after the representatives craft it (or other checks and balances). But eliminating a representative structure entirely seems impossible. Imagine the daunting task of voting on how much corn the U.S. should grow in a given three-year period. On how many more workers are needed to produce a higher number of epipens. On how many homes should be built in a city on the other side of the country. (It very much seems that you must make this vote on national matters, rather than simply voting on what your local community needs. If each municipality democratically decided what they needed, these decisions would have to be reconciled at the national level, as there may not be the resources to do everything every community decides to do. Like the would-be bookstore worker, some communities will not get what they wanted, making the vote a sham. And, naturally, trying “communism” at local levels, where communities can only use the workers and resources within their communities, leaves massive inequities between regions. It might be possible to instead divide up the national wealth to each region somewhat according to its need and then let each decide how to use its allotted funds, but how much each city or town should get would also be impossible to sensibly sort out using direct democracy.)

Organizing an economy is a monumental task requiring mountains of accurate, up-to-date data. How difficult for an elected body of experts – a full-time job with a high risk of costly mistakes and turmoil. Can workers devote the time and study to make educated decisions on what to produce, their quantities, prices, and required manpower and resources, for an entire country? Would not voting itself, on thousands or hundreds of thousands of economic details, take days, weeks, or months? And if the people cannot be expected to plan the economy via direct vote, how can they be expected to make an informed up-down vote on a plan formulated by others? There seems to be no escaping representative government with communism. These challenges suggest this system may not be preferable.

Anarchism does away with capitalism from the bottom up. Workplaces would be owned and run by workers, who would federate to coordinate activities rather than compete, and local communities would make all decisions democratically. The State, as a hierarchical structure like capitalism, would be abolished. In this way, people would be as free as possible from compulsion, authority, and concentration of power, enjoying individual freedoms as long as they do not hurt others. You’d have equal power to make decisions that affect you, joining in your local citizen assembly and worker council. Anarchism harkens back to the era of “primitive communism” we explored elsewhere.

Anarchists have differing views on whether capitalism can be dismantled after the State. Does the State have a vital role to play in capitalism’s eradication? H.G. Wells, among others, thought only socialism could make anarchism possible:

Socialism is the preparation for that higher Anarchism; painfully, laboriously we mean to destroy false ideas of property and self, eliminate unjust laws and poisonous and hateful suggestions and prejudices, create a system of social right-dealing and a tradition of right-feeling and action. Socialism is the schoolroom of true and noble Anarchism, wherein by training and restraint we shall make free men.[1]

The challenge with anarchism is that, like “local communism,” it leaves communities to fend for themselves, meaning poorer peoples beside richer ones. Unless, of course, communities worked together, sharing workers and resources, in a movement toward the integration of larger and larger units and the necessary joint administration (however democratic), weakening local control and journeying down the path toward what are essentially nations. Further, if you avoided that, while a spirit of human oneness could indeed rise with the disappearance of nations, one wonders what is to stop factionalism based on community identity. Is pride and loyalty to a neighborhood, town, or city not predictable? One worries about true global solidarity. In the same vein, individual anarchist communities seem vulnerable to rivalry and conflict, especially if they differ in wealth, habitability, and so on. It all sounds a bit like the city-states of ancient Greece, albeit less capitalistic and more democratic. At the least, such a world seems more prone to conflict than one with a single government spanning all continents and meeting the needs of all people. Some form of State may be preferred for its ability to protect people.

Skeptics of anarchism may also see that statement as the answer to the question of crime, which, while greatly reduced, is not likely to disappear entirely with the abolition of poverty (think of crimes of passion over infidelity, for instance). Yet anarchists typically despise the police – the personification of force, authority, and State violence. Can the police be made a thing of the past?

Socialist George Orwell wrote, “I worked out an anarchistic theory that all government is evil, that the punishment always does more harm than the crime and the people can be trusted to behave decently if you will only let them alone.” But he concluded, “It is always necessary to protect peaceful people from violence. In any state of society where crime can be profitable you have got to have a harsh criminal law and administer it ruthlessly.”[2]

Here Orwell lacks nuance and vision – of community policing, proportionate punishment, restorative justice, rehabilitation, and so on – which do not require a State; they can be done on an intimate, local level. Skeptics can rest easy on this point. The relevant task of anarchism (and socialism or communism) is to build a more humane, peaceful, fair criminal justice system that does not morph into what came before.

Then there’s socialism. “I should tie myself to no particular system of society other than that of socialism,” as Nelson Mandela would say.[3] Socialism also eliminates capitalism from the bottom up. As under anarchism, workers collectively own their workplaces, making decisions democratically and equitably sharing the profits of their labor, and such worker cooperatives can federate with each other to reduce competition and coordinate their products and services. The State exists to serve various needs of the people, such as guaranteed healthcare and employment, and is in fact under the people’s direct democratic control (this was explored in detail in What is Socialism?). The problems with anarchism and communism can be avoided. Socialism is the human future.



[1] New Worlds for Old, H.G. Wells

[2] The Road to Wigan Pier, George Orwell

[3] 1964 court speech, Nelson Mandela.

The Scope of False Sexual Assault Allegations

When conservatives are confronted by the rise of a “liberal” cause, many find and point to a small problem in order to discredit or divert attention from the immense problem liberals are attacking.

It’s an unhealthy mix of the whataboutism fallacy (citing wrongs of the opposing side instead of addressing the point) and the false equivalence fallacy (describing situations as equivalent [I’ll add “in scope”] when they are not). We observe this in discussions of racial violence, when many conservatives pretend hate crimes against whites are just as common as hate crimes against people of color; see “On Reverse Racism.”

Lately, the fallacy was on full display as high-profile men across the country were accused of sexual assault and harassment, many fired or urged to resign. In this frenzy of allegations, some Americans see and cheer a surge in bravery and collective solidarity among victims inspired by each other and seeking justice, while others see and decry a male “witch hunt,” with evil women growing more bold about their lies, perhaps on the George Soros payroll. Where you land is a fairly decent predictor of your political views. Who was accused also determined for many which women to believe, with some conservatives supporting Republican Roy Moore through his scandal over alleged sexual abuse of underage girls but attacking Democrat Al Franken for groping women. Sadly, some liberals did the reverse. I know I witnessed a left-leaning acquaintance or two trying to discredit accusations against Franken, accusations he publicly apologized for, by slandering the victims. Still, it is typically conservatives (often sexually frustrated men) who, when they encounter liberals talking about rape, sexual assault, sexual harassment, toxic masculinity, and so forth, bring up false rape accusations.

One comment on a mediocre article Men’s Health shared on how to make sure you have consent from a woman typified this. There were of course countless like it, many poorly written: “And remember if she regrets it the next day you’re still fucked”; “I bring my attorney and a notary on all dates and hook ups”; “There’s no such thing as consent anymore, it’s a witch hunt. Just say no gentleman”; “Don’t forget guys… If you have drank 12 drinks and she has 1 sip of beer…… You raped her.” And still more angry with the article’s existence: “Men’s health turning into click bate leftist agenda”; “Did a feminist write this?”; “Did a woman write this?” It’s sad that consent is now, apparently, a liberal, feminist scheme. But this comment got much attention and support, likely because people found it thoughtful and measured for some odd reason:

This is a touchy subject. Yes, respect women—We all know that. Have a woman’s consent—Yes, we all know that. Do not rape or sexually assault a woman—Yes we all know that. We respect the rules. However, there are some women that exploit and take advantage of the rules. It’s sad to say, there are some out there that falsely accuse a man of rape or sexual assault—ruining their lives. Being a man in today’s era, I’m afraid to ask a woman on a date. I feel sometimes a man needs a contract just to protect himself. Yes, this might sound objectionable and supercilious—but you can’t be too careful nowadays. We live in a different time now. Men: We need to change our attitudes and treatment of women. However, it’s okay that we protect ourselves—and we shouldn’t be demonized or vilified for doing so. I don’t want to be viewed or portrayed as the enemy, nor be apologetic for being a man.

An amusing piece of writing. “We all know” not to rape, assault, or harass women? If the collective male “we” legitimately “knew,” such things would be a thing of the past and a primer on consent unnecessary. “We live in a different time” where men are “afraid to ask a woman on a date”! If you’re going to “protect” yourself in some way, you wouldn’t be “demonized” for actually getting consent in some formal sense; only if you used illegal and unethical methods to “protect” yourself, like the secret filming of sex. And where are these women asking men to apologize for being men, rather than for specific behaviors or attitudes that make them uncomfortable, scared, unsafe, or physically violated?

This is a perfect example of the fallacy above. “Men sexually assault women and shouldn’t, but what about the women who make false accusations?” The latter part is clearly his main concern — he didn’t stop by to condemn rapists, he came with another purpose. They may not intend to or even realize it (some do), but when men (or women) do this they position false reports as a problem of the same significance, or nearing the same significance, as actual sex crimes. As if the scope, the prevalence, is comparable. That’s what taking a conversation on consent and redirecting it to one of false accusations does. It says, “This is what’s important. It’s what we should be talking about.” It’s like bringing up asthma when everyone’s discussing lung cancer. It deflects attention away from a problem that is much more severe. It’s a subtle undermining of the credibility of rape victims. It’s not wrong to discuss small problems, of course, but they should always be kept in perspective. In my view, comments about hate crimes against whites or false accusations against men that don’t include the enormous asterisk that these are minuscule percentages of overall hate and sex crimes should not be uttered at all. In that way, we can think about others first. We can protect the credibility of real victims. We can remain rooted in the facts — not imply a small problem is large, or vice versa. Naturally, including those caveats undermines the usual function of bringing up these issues, but no matter.

Yes, lying about sex crimes is an issue that exists. Yes, there should be some legal punishment for such an immoral act (not anywhere near the punishment for sexual assault and harassment, obviously, because these are not in any way morally equivalent crimes). Yes, people are innocent until proven guilty, which is why men are safe from prison until they see their day in court, even if they face social consequences like losing a job due to presumed guilt — which you can oppose on ethical grounds, though on less stable ground than you might hope, especially when a man is accused by a coworker, family member, or someone else in close proximity. Is it most ethical to oppose a firing until a trial and risk keeping a rapist around the workplace? Putting others in danger? Forcing a victim to clock in next to him each day? Or is it most ethical to fire him and risk tearing down the life of an innocent man? It’s an unpleasant dilemma for any employer, university administrator, or whomever, but ethically there’s not much question. One risk is far graver, thus the answer is simple. This only grows more axiomatic when we acknowledge the likelihood of events.

The prevalence of proven false accusations of sexual assault is somewhere between 2% and 8% of cases. The National Sexual Violence Research Center documents a 2006 study of 812 cases that found 2.1% were false reports, while a 2009 study of 2,059 cases and a 2010 study of 136 cases estimated 7.1% and 5.9%, respectively. Research from 2017 revealed a 5% false claim rate for rape. The Making a Difference Project, using data from 2008, estimates 6.8%. These numbers are mirrored in prior American decades and in similar countries. While we can acknowledge that some innocent people in prison never see justice and are never set free, since 1989 only 52 men have been released from prison after it was determined their sexual assault charges were based on lies. Compare this to 790 murder exonerations; the number of people in state prisons for murder vs. sexual assault/rape is about the same (though the former crime is far less common than the latter), making the low exoneration rate for sex crime convictions all the more significant.

Myriad definitions of both “false report” and “sexual assault” make the precise percentage difficult to nail down, and these statistics only address proven false reports (there are many cases in limbo, as conservative writers are quick to point out), but this research gives us a general idea. Reports of high percentages of false claims are typically not academic studies or have rather straightforward explanations; for example, Baltimore’s “false claim” rate plunged from 31% to under 2% after the police actually went through some training and “stopped the practice of dismissing rapes and sexual assaults on the scene”! It’s remarkable how legitimate investigations and peer-reviewed research can bring us closer to the truth.

In other words, when observing any sexual misconduct scandal, there is an extremely high chance the alleged victim is telling the truth. This is why we believe women. This is why they, not accused men, should be given the benefit of the doubt. It’s why the moral dilemma for employers and the like is hardly one at all. Were precisely 50% of sexual assault allegations lies, it would still be most ethical to take the risk of firing a good man rather than the risk of keeping a predator around. But since women are almost always telling the truth? Well, the decision is that much easier and more ethical.

In the U.S., there are some 321,500 rapes and sexual assaults per year, and 90% of adult victims are women (you’ve probably noticed how “men are raped too” is used in a similar manner to all this). One in six women is a survivor of rape or attempted rape. For every 1,000 rapes, 994 perpetrators (99%) will never go to prison.

Which U.S. Wars Actually Defended Our Freedoms?

When pondering which of our wars literally protected the liberties of U.S. citizens, it is important to first note that war tends to eradicate freedoms. Throughout U.S. history, war often meant curtailment of privacy rights (mass surveillance), speech rights (imprisonment for dissent), and even the freedom to choose your own fate (the draft).

It also should be stated upfront that this article is only meant to address the trope that “freedom isn’t free” — that military action overseas protects the rights and liberties we enjoy here at home (even if virulent bigotry meant different people had very different rights throughout our history and into our present). It will not focus on the freedoms of citizens in other nations that the U.S. may have helped establish or sustain through war, nor non-American lives saved in other countries. However, it will address legitimate threats to American lives (such a right to life is not de jure, but expected).

As a final caveat, I do not in any way advocate for war. That has been made exceptionally clear elsewhere. While violence may at times be ethically justified, in the vast majority of cases it is not, for a broad array of reasons. So nothing herein should be misconstrued as support for imperialism or violence; rather, I merely take a popular claim and determine, as objectively as possible, if it has any merit. To a large degree I play devil’s advocate. To say a war protected liberties back home is not to justify or support that war, nor violence in general, because there are many other causes and effects to consider which will go unaddressed.

In “A History of Violence: Facing U.S. Wars of Aggression,” I outlined hundreds of American bombings and invasions around the globe, from the conquest and slaughter of Native Americans to the drone strikes in Yemen, Pakistan, Somalia, and elsewhere today. It would do readers well to read that piece first to take in the scope of American war. We remember the American Revolution, the Civil War, the World Wars, Korea, Vietnam, Iraq, and the War on Terror. But do we recall our bloody wars in Guatemala, Haiti, Mexico, and the Philippines? Since its founding in 1776, 241 years ago, the United States has been at war for a combined 220 years, as chronicled by the Centre for Research on Globalization (CRG). 91% of our existence has been marked by violence.

How many of those conflicts protected the liberties of U.S. citizens? How many years did the military literally defend our freedoms?

Well, what precisely is it that poses a threat to our freedoms? We can likely all agree that what qualifies as freedom is 1) rights to actions and words that can be exercised without any retribution, guaranteed by law, and 2) the total avoidance of miseries like enslavement, imprisonment, or death. Thus, a real threat to freedom would require either A) an occupation or overthrow of our government, resulting in changes to or violations of established constitutional liberties, or B) invasions, bombings, kidnappings, and other forms of attacks. If you read the article mentioned above, it goes without saying the U.S. has much experience in assaults on the freedoms of foreign peoples. Much of our violence was the violence of empire, with the express and sole purpose of seizing natural resources and strengthening national power.

So what we really need to ask is: how close has the U.S. come to being occupied, or U.S. citizens to being attacked? How many times has either of these things occurred? We must answer these questions honestly. Should it be said that fighting Native American or Mexican armies protected freedom? No; the only reason our nation exists is because Europeans invaded their lands. We will include no war of conquest, from our fight with Spain over Florida to our invasion of Hawaii. We killed millions of innocent people in Vietnam. Were they going to attack America or Americans? No, we didn’t want the Vietnamese to (democratically) choose a Communist government. Now, you can believe that justifies violence if you wish. But the Vietnam War had nothing to do with defending our freedoms or lives. Neither did our invasion of Cuba in 1898. Nor our occupation of the Dominican Republic starting in 1916. Nor our wars with Saddam’s hopelessly weak Iraq. Nor many others.

Using these criteria, my answer to the titular question is that only four wars, representing 19 years, could reasonably meet Qualification 1 (some also meet the second qualification). These conflicts protected or expanded our liberties by law:

The American Revolution (1775-1783): While the Revolution was partly motivated by Britain’s moves to abolish slavery in its colonies, it did expand self-governance and lawful rights for white male property-holders.

The War of 1812 (1812-1815): While U.S. involvement in the War of 1812 had imperialist motives (expansion into Indian and Canadian territories) and economic motives (preserving trade with Europe), Britain was kidnapping American sailors and forcing them to serve on their ships (“impressment”). This war might have simply been included below, in Qualification 2, except for the fact that Britain captured Washington, D.C., and burned down the Capitol and the White House — the closest the U.S. has ever come to foreign rule.

The Civil War (1861-1865): Southern states, in their declarations of secession, explicitly cited preserving slavery as their motive. Four years later, slavery was abolished by law. Full citizenship, equal protection under the law, and voting rights for all men were promised, if not given.

World War II (1941-1945): The Second World War could also have simply been placed in Qualification 2 below. Beyond freeing Southeast Asia and Europe from the Axis, we would say the U.S. was protecting its civilians from another Pearl Harbor or from more German submarine attacks on trade and passenger ships in the Atlantic. Yet it is reasonable to suppose the Axis also posed a real threat to American independence, the only real threat since the War of 1812.

Had Germany defeated the Soviet Union and Britain (as it might have without U.S. intervention), establishing Nazi supremacy over Europe, it is likely its attention would have turned increasingly to the United States. Between the threat of invasion from east (Germany) and west (Japan), history could have gone quite differently.

German plans to bomb New York were concocted before the war; Hitler’s favorite architect described him as eager to one day see New York in flames. Before he came to power, Hitler saw the U.S. as a new German Empire’s most serious threat after the Soviet Union (Hillgruber, Germany and the Two World Wars). Some Japanese commanders wanted to occupy Hawaii after their attack, to threaten the U.S. mainland (Caravaggio, “‘Winning’ the Pacific War”). After Pearl Harbor, the U.S. did not declare war on Germany; it was the reverse. Japan occupied a few Alaskan islands, shelled the Oregon and California coasts, dropped fire balloons on the mainland, and planned to bomb San Diego with chemical weapons. Germany snuck terrorists into New York and Florida. The Nazis designed their A-9 and A-10 rockets to reach the U.S., under the “Amerika Bomber” initiative. Also designed were new long-range bombers, including one, the Silbervogel, that could strike the U.S. from space. Hitler once said, “I shall no longer be there to see it, but I rejoice on behalf of the German people at the idea that one day we will see England and Germany marching together against America.” While an Axis invasion of the United States is really only speculation, it has some merit considering their modus operandi, plus an actual chance at success, unlike other claims.

19 years out of 220 is just 8.6% (we’ll use war-time years rather than total years, erring on the side of freedom).

Qualification 2 is harder to quantify. U.S. civilians in danger from foreign forces is a far more common event than the U.S. Constitution or government actually being in danger from foreign forces. We want to include dangers to American civilians both at home and overseas, and include not just prolonged campaigns but individual incidents like rescue missions. This will greatly expand the documented time the military spends “protecting freedom,” but such time is difficult to add up. Many military rescue operations last mere weeks, days, or hours. The Centre for Research on Globalization’s list focuses on major conflicts. We’ll need one that goes into detail on small-scale, isolated conflicts. We’ll want to look not just at the metric of time, but also the total number of incidents.

But first, we will use the CRG list and its year-based metric to consider Qualification 2. The following wars were meant, in some sense, to protect the lives of U.S. citizens at home and abroad. They do not meet the first qualification. Conflicts listed in Qualification 1 will not be repeated here. Five wars, representing 36 years, meet Qualification 2:

The Quasi-War (1798-1800): When the United States refused to pay its debts to France after the French Revolution, France attacked American merchant ships in the Mediterranean and Caribbean.

The Barbary Wars (1801-1805, 1815): The United States battled the Barbary States of Tripoli and Algiers after pirates sponsored by these nations began attacking American merchant ships.

The Anti-Piracy Wars (1814-1825): The U.S. fought pirates in the West Indies, Caribbean, and Gulf of Mexico.

World War I (1917-1918): The Great War nearly found itself in Qualification 1. After all, Germany under Kaiser Wilhelm II made serious plans, in the 1890s, to invade the United States so it could colonize other parts of Central and South America. During World War I, Germany asked Mexico to be its ally against the U.S., promising to help it regain territory the U.S. stole 70 years earlier. However, invasion plans evaporated just a few years after 1900, and Mexico declined the offer. The Great War appears here for the American merchant and passenger ships sunk on their way to Europe by German submarines (not just the Lusitania).

The War on Terror (1998, 2001-2017): It is very difficult to include the War on Terror here because, as everyone from Osama bin Laden to U.S. intelligence attests, it’s U.S. violence in the Middle East and Africa that breeds anti-American terror attacks in the first place. Our invasions and bombings are not making us safer, but less safe, by spreading radicalism and hatred. However, though this predictably endless war is counterproductive to protecting American lives, it can be reasonably argued that protecting them is one of its purposes (exploitation of natural resources aside) and that killing some terrorists can disrupt or stop attacks (even if this does more harm than good overall), so it must be included.

36 years out of 220 is 16.4%. Together, it could be reasonably argued that 25% of U.S. “war years” were spent either protecting our constitutional rights from foreign dismemberment or protecting citizen lives, or some combination of both.

But we can also look at the total number of conflicts this list presents: 106. Four wars out of 106 is 3.8%, another five is 4.7%. Let’s again err on the side of freedom and split the Barbary and Terror wars into their two phases, making seven wars for 6.6%. Adding 3.8% and 6.6% gives us 10.4% of conflicts protecting freedom.
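The year and conflict percentages above are simple division; for readers who want to check them, here is a quick sanity check in Python, using the figures exactly as quoted in the text (the variable names are mine, not the CRG’s):

```python
# Sanity check on the CRG-list percentages quoted above,
# using the figures exactly as given in the text.

total_war_years = 220   # combined years at war since 1776, per the CRG
qual1_years = 19        # the four Qualification 1 wars
qual2_years = 36        # the five Qualification 2 wars

print(round(qual1_years / total_war_years * 100, 1))  # 8.6
print(round(qual2_years / total_war_years * 100, 1))  # 16.4

total_conflicts = 106   # conflicts on the CRG list
print(round(4 / total_conflicts * 100, 1))  # 3.8 (Qualification 1 wars)
print(round(7 / total_conflicts * 100, 1))  # 6.6 (Qualification 2, with Barbary and Terror wars split)
```

Each figure matches the rounded percentage given in the text.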

Any such list is going to have problems. What does it include? What does it leave out? Does it describe the motivation or justification for violence? Does it do so accurately? Should recurring wars count as one or many? Does the list properly categorize events? This list labels U.S. forces violating Mexican territory to battle Native Americans and bandits as repeated “invasions of Mexico.” If Mexican forces did the same to the U.S., some of us would call it an invasion; others might rephrase. And couldn’t these incursions into a single nation be lumped together into a single conflict? Conversely, the list lumps scores of U.S. invasions and occupations of nearly all Central and South American nations into a single conflict, the Banana Wars — something I take huge issue with. The solution to issues like these is to either create a superior list from scratch or bring other lists into the analysis.

Let’s look at “Instances of Use of United States Armed Forces Abroad,” a report by the Congressional Research Service (CRS). It is a bit different. First, it includes not just major conflicts but small, brief incidents as well, and it’s smarter about lumping conflicts together (no Banana Wars, no Anti-Piracy Wars, though the U.S. incursions into Mexico to fight Native Americans and bandits are listed as one conflict). Thus, 411 events are documented. Second, even this total is an undercount, as the list begins at 1798 rather than 1776. Third, it does not include wars with Native Americans, unlike the first list. This list is highly helpful because the CRS is an agency of the Library of Congress, conducting research and policy analysis for the House and Senate, and thus its justifications for military action closely reflect official government opinion.

We will apply the same standards to this list as to the last. We’ll include the nine conflicts we studied above if the timeframe allows, as well as any events that have to do with civilians, piracy, and counter-terrorism. We will thus sift the 411 events in this way:

– 38 incidents/wars that involved “U.S. citizens,” “U.S. civilians,” “U.S. nationals,” “American nationals,” “American citizens,” etc.

– 9 incidents/wars related to “pirates” and “piracy” (does not include the rescue of U.S. citizen Jessica Buchanan, already counted above, nor Commodore Porter’s vicious 1824 revenge attack on the civilians of Fajardo, Puerto Rico, who were accused of harboring pirates)

– 6 official conflicts: the Quasi-War (“Undeclared Naval War with France”), two Barbary Wars, the War of 1812, and two World Wars (the Revolution does not appear on this list due to its timeframe; the Anti-Piracy Wars are included above, the War on Terror below)

+ 1 Civil War (it must be added, as it is not included on this list because it did not involve a foreign enemy)

– 27 incidents/wars related to combating “terrorism” or “terrorists”

That gives us 81 events that match Qualifications 1 and 2. 81 out of 412 is 19.7% — thus about one-fifth of military action since 1798 in some way relates to protecting Constitutional freedoms here at home or the right to life and safety for U.S. civilians around the globe. Of course, were we to only look at Qualification 1, we would have but three events — the War of 1812, the Civil War, and World War II — that preserved or expanded lawful rights, or 0.7% of our wars since 1798.
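The tally behind those figures can be verified with a few lines of arithmetic, again using only the counts broken out in the list above (comments are my own glosses):

```python
# Tally of CRS-list events matching the two qualifications,
# using the counts broken out in the list above.

matching = 38 + 9 + 6 + 27  # citizen, piracy, official-conflict, and terrorism events
matching += 1               # the Civil War, added by hand (no foreign enemy, so absent from the list)

total_events = 411 + 1      # the CRS's 411 documented events plus the Civil War

print(matching)                                 # 81
print(round(matching / total_events * 100, 1))  # 19.7
print(round(3 / total_events * 100, 1))         # 0.7 (Qualification 1 alone)
```

Note that the denominator is 412, not 411, because the Civil War is added to both the matching events and the total.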

The CRS list does not break down some incidents into units shorter than years, and documenting by days, weeks, or months those that are is an enormous chore for a later day. Thus the estimation for time spent defending freedom will have to come from the CRG list: 25% of the time the military is active, it is involved in at least one conflict that is protecting freedom. Also, just for some added information, there are 20 years on the CRS list, dating to 1798, without a new or ongoing incident. This is almost identical to the 21 years of peace since 1776 in the CRG analysis. So of the 219 years since 1798, we’ve spent 91% of our time at war, the same as the CRG list since 1776 (or trimmed to 1798).

(A list created by a professor at Evergreen State College goes from 1890-2017 and has five years of peace. We’ve been at war 96% of the time since 1890. It lists 150 conflicts, with only 3 having to do with rescues or evacuations of Americans [2%], 11 having to do with the War on Terror in Arabia and Africa in 1998 and after 9/11 [7.3%], plus World War I [0.6%]. That’s 9.9% for Qualification 2. Throw in another 0.6% for World War II, and thus Qualification 1, and we have 10.5% of conflicts since 1890 protecting freedom. Because this list begins so late, however, we will not use it in our averaging. Doing so would require us to trim the other lists to 1890, cutting out the piracy era, the Revolution, the Civil War, etc.)

Averaging the percentages from the two lists relating to total conflicts gives us 2.3% for Qualification 1 and 15% for Qualification 2. 17.3% all together. Trimming the CRG list to begin at 1798 yields about the same result.

In sum, it could be reasonably asserted that the U.S. military protects our freedoms and lives in 17.3% of conflicts. (If we take out the War on Terror for its deadly counter-productivity, which I would prefer, that number drops to 10.8%, with 17% of war years spent defending American freedom.)

Even Better Than ‘Angels in the Outfield’

Remember the movie Angels in the Outfield? It’s the classic story of Roger, a foster kid who prays for God to help the Angels win the pennant so that his dad will come back. (Sounds like one truly twisted deal, but Roger’s dad wasn’t at all serious. If we’re being honest, Roger seems old enough to have known about figurative language.)

If your memory is as decrepit as the cheap VHS tape of this movie in the box in your basement, this image may help:


Jesus, Roger looks uncomfortable in this picture. I don’t remember him being on the verge of tears in this scene. This looks like the beginning of an episode of Law and Order: SVU. CHUNG-CHUNG.

This is the scene in which Roger and his best buddy J. P. meet the indelibly cheerful Angels manager George Knox, who grows from skeptic to believer about the whole angels-playing-baseball thing (Roger is the only one that can see them). When Roger does see one, he’s like:


That’s where that hand motion comes from if you ever see people (me) doing this during a baseball game. The Royals once used the theme music to the movie when someone hit a home run, and I could never understand why I was the only one at Kauffman Stadium doing this while it played.

Also: That moment you realize Roger was played by Joseph Gordon-Levitt of 500 Days of Summer, Inception, and The Dark Knight Rises.


Angels in the Outfield is truly the greatest baseball movie of all time (bite me, Kevin Costner), therefore I in no way compare the Kansas City Royals to it casually. But without question, in every arena the Royals’ story rivals and surpasses Roger’s. This is such big news, I’m surprised more media attention hasn’t been paid to it.




George Knox hates to lose. Can any clip better represent the boiling rage lurking beneath the skin of every Royals fan, just waiting to detonate, through all the miserable seasons of the past years, when Kansas City was the laughingstock of Major League Baseball?

A clip of a nuke wouldn’t suffice. It has to be George Knox marching through a locker room of two dozen half-naked losers and absolutely destroying their fruit and meat platters. That is the pain Royals fans felt after every season–no, every game–before the Royals’ meteoric rise.

And this is Knox after becoming manager rather recently. Multiply this rage by 29 years, and you’ll understand Kansas City’s agony. There’s no comparison.

Even this bloody movie made us look like total twits. Why does this guy not slide? What is he doing?




Roger’s story is fictional, with fictional managers, ballplayers, and angels. At least, I hope angels don’t look like this:


Honestly, this angel looks like either the uncle you pray to God won’t sit next to you at Thanksgiving or the aunt that’s visibly ready to call your favorite music the work of Satan before you even tell her what it is. Not really sure which one at this point.

But the Royals’ story?

This isn’t a movie. And no players appear to defy physics as an angel lifts them into the air. It’s simply incredible baseball. It’s real life. That’s an important reason the Royals’ story is better.


Consider last year: Riding Jeremy Guthrie’s 7-inning shutout to beat the White Sox 3-1 on September 26, clinching their first playoff berth in 29 years. Four days later, staging a roaring comeback against the Oakland A’s in the do-or-die American League wild card game, down 3-7 but leveling the game in the 9th inning, eventually winning 9-8 in the 12th, after nearly 5 hours of play.

Sweeping both the American League division and championship series, earning the most consecutive wins in MLB postseason history. Making it to Game 7 of the World Series against the San Francisco Giants, but experiencing the most painful of defeats.

And this year: Winning their first American League Central title since 1985 on September 24 against the Mariners. On the brink of elimination in Game 4 of the AL division series against the Astros, down 4 runs in the 7th, then smashing in 5 runs in the 8th inning and piling on more in the 9th to win the game 9-6. They won the series in the next game.

Winning Game 6 of the AL championship series versus the Blue Jays with Lorenzo Cain scoring from first base on Eric Hosmer’s single, and closer Wade Davis shutting down the Blue Jays’ comeback threat with runners on first and third.

And last night, Game 1 of the 2015 World Series, versus the New York Mets. Alcides Escobar’s inside-the-park homer, the first in the World Series since 1929, the year the Great Depression began. Winning 5-4 after 14 innings, the longest game in World Series history.

Could all this possibly be topped by the story of guys who only made it to the postseason with divine intervention in sparkling pajamas?


No. They’re cheaters.

Also, that’s Matthew McConaughey being picked up there. Swear to God. As he later said from the driver’s seat of a Lincoln, “Sometimes you’ve got to go back…”

Adrien Brody is also a ballplayer in this movie. McConaughey, Gordon-Levitt, Brody, Danny Glover, Tony Danza, Christopher Lloyd…seriously, is there anyone this film doesn’t have?



It has actor and concept art model for Mr. Incredible Jay O. Sanders. He plays Ranch Wilder.

Roger and George Knox had to deal with Ranch Wilder, the “voice of the Angels,” who makes it clear throughout the film that he very much wants the Angels to lose. He hates George Knox, and is constantly being a Debbie Downer about the Angels’ postseason prospects.


Royals fans get Joe Buck.


Buck took a lot of heat during the 2014 World Series for what Royals fans perceived to be bias, in support of the Giants…and one pitcher in particular.

Ranch Wilder got fired. Buck is still going strong, back to call this 2015 World Series.

This just makes a better story. No one really seemed to mind Ranch Wilder’s Angel-bashing in the film. He was only fired because he left his mic on when he really went berserk.

But Kansas City’s story has more conflict, more passion and intrigue. Buck is back, and a lot of KC fans are enraged, enough to start petitions and even call the games themselves.




Remember this guy? He’s that one fan in the crowd the movie focuses on, and likely the only human who has ever needed to professionally wax the sides of his neck.

He thinks Roger is crazy for seeing angels; he accidentally sits on Christopher Lloyd’s angel character, takes a baseball in the mouth, and at one point screams, “Hemmerling for Mitchell?! Go back to Cincinattiiiiiiii!” Classic quote.

Why is he always on screen? Why does he get so much attention? Why is that so obnoxious? In a way, he’s kind of the movie’s version of…of…


Marlins Man.

This mysterious and no doubt totally loaded figure has been spotted behind home plate throughout this postseason and the one in 2014, and works his way to other sports championships as well.

Always on screen, he is the one fan that gets any attention. He gets national attention! Yes, he donates a ton of money to charities, but what of the other 37,000 people in the stands? What about their stories? He leaves them in the dust.

It’s all an intentional thing. He picks his seat so he can be on camera. He loves to rep his completely irrelevant team, which has hopefully fired its graphic design staff by now.

Because he’s desperate as a toddler for attention, I think he successfully one-ups the blowhard from Angels in the Outfield. And anyone who disagrees with me is, to quote J. P., a “Nacho Butt.”



As mentioned, Roger is a foster kid. About two-thirds of the way into the movie, his deadbeat dad–the same one who said if the Angels won the pennant he and Roger could “be a family again”–abandons Roger for good.

“Sorry, boy,” Dad of the Year says as Roger rushes up to him, excited to tell him about how well the Angels are doing. Dad pats Roger on the cheek and walks away, leaving Roger to try to croak out “Where are you going?” before he begins to weep.


If you’re a kid from a stable home watching this movie, it truly affects you, seeing someone your own age abandoned by his father. Not to mention Roger’s mother died, as did J. P.’s dad. Their stories are fictional, yet you know in the back of your mind while watching that millions of children experience abandonment, foster care, or homelessness, or have parents who are deceased or in jail. The movie, unlike the vast majority of children’s films, makes you think about the suffering of others and how to persevere through pain.

And if a fictional story about this is powerful, how much more so is real life?

Sadly, three Royals lost a parent this season.

Mike Moustakas lost his mother Connie on August 9, while Chris Young lost his father Charles on September 26. As reported by The Kansas City Star, Young pitched the next day to honor his dad, and went 5 innings without giving up a hit.

Edinson Volquez pitched last night, in Game 1 of the World Series. His father Daniel died just before the game, and Volquez’s family requested that Royals manager Ned Yost not tell Volquez until after he pitched.


In other words, the world knew of Volquez’s father’s death before Volquez.

Through all this, the Royals have persevered. Moustakas said after the game, “For all the stuff that’s happened this year, to all of our parents…it has to bring you closer together.”

Eric Hosmer said, “It’s just another angel above, just watching us and behind us through this whole run.”



The Angels in the movie won the pennant (we’re kind of left to wonder about the World Series). Roger and his best friend J. P. get adopted by George Knox and live happily ever after.

I don’t know if Ned Yost will adopt any players, nor if the Royals will finally, after 3 decades, win it all. But there is one thing I know to be true, that applies to touching movies and real life alike:

“It could happen.”
