• The Moral Landscape Challenge

    If I were a praying man, I’d be interceding with the gods on behalf of Russell Blackford, wishing him boundless reserves of energy and patience beyond the usual lot of mortal men. As you may recall, he has agreed to serve as judge in Sam Harris’s essay contest dubbed The Moral Landscape Challenge, and the completed essays should be rolling in around now. The goal was to come up with an essay that would persuade Harris to change his mind on the core thesis of his book:

    Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

    While I’m greatly looking forward to reading the winning essay, it feels to me like the dice are fairly heavily loaded from the get-go. Assuming that no single essay could persuade someone like Harris (or Blackford, or me!) to turn to a supernatural view of the universe, and assuming that everything in a natural universe falls within the purview of science (at least in principle), I’m not seeing a load of wiggle room here. Possibly, one could try to show that there can be no generally right and wrong answers on certain moral questions, perhaps because the idea of personal well-being and the radius of moral concern will vary with the perspective and values of each culture or individual moral agent.

    Now that we find ourselves in the temporal gap between the submission deadline and the publication of the winning essay, I’d be quite interested in hearing how you guys would have a go at Harris’s concept of naturalized ethics. For a start, what sort of (naturalistic) propositions would need to be true in order to make Harris’s central thesis false?

    Category: Ethics, Philosophy

    Article by: Damion Reinhardt

    Former fundie finds freethought fairly fab.
    • jjramsey

      assuming that everything in a natural universe falls within the purview of science (at least in principle)

      Unless you are using “science” to mean something that it usually doesn’t, that’s a terrible assumption. Typically, “science” means the use of rigorous empirical study to find out how the world works, generally with some attempt to sidestep or compensate for human biases. While it works in conjunction with math, logic, and sometimes philosophy, it is distinct from them.

      Anyway, here’s what has been the criticism of Harris in a nutshell. Science deals with what is. Morality deals with what ought to be. Since at least David Hume’s day, it’s been recognized that getting an “ought” from an “is” is a difficult and possibly intractable problem. There is nothing about this issue that requires the supernatural or the religious.

      (Indeed, one can argue that much of religious morality runs afoul of the is-ought divide. For example, suppose one were to grant that there is a God who wants humans to abide by certain rules. Why should that be enough to grant that it follows that we ought to follow those rules?)

      • I don’t suppose you have access to the 2011 edition? In the afterword, he addresses this criticism starting on page 200, calling it “The Value Problem.”

        This link probably won’t work but it’s worth a shot — http://is.gd/KrVn5G

      • jjramsey

        There’s a reason that I chose the phrasing, “what has been the criticism of Harris in a nutshell.” I was summarizing others’ takes rather than giving my own, since I haven’t read the book itself (though I have read or watched much of what Harris has put online).

        That said, what you showed me from the Google preview doesn’t make Harris look good. When Blackford says that Harris has been no more successful than anyone else at getting an “ought” from an “is,” Harris’ response seems to be, “Well, I meant to do that.” It looks like Harris has conceded that he is not really offering an explanation of how science can determine moral values (contrary to his own subtitle), but rather a discussion of how science can help us achieve well-being once we’ve agreed to value it.

      • I would say that the subtitle is definitely misleading. The book is not about how to determine fundamental values, but rather how to determine secondary values given that we agree upon the idea of maximizing well-being for conscious creatures.

    • Doug Mann

      It’s important to realize that Harris doesn’t just want to naturalize ethics; he’s trying to do so in service of textbook normative utilitarianism, and he starts to follow this utopian agenda to its logical conclusions in TML, with some creepy social engineering implications. Consider this quote from a 2006 book by philosopher Richard Joyce, published four years before TML:

      “The utilitarian—a prime example of a moral naturalist—holds that moral value just is happiness (or whatever kind of utility one might want to plug in instead of happiness), and that facts about what we are morally obligated (ought) to do just are facts about what promotes the maximum amount of happiness. Since facts about what produces happiness are causal and psychological facts—the kind of thing that can be investigated using scientific methods—then so too, according to this view, are moral values and obligations.”

      Substitute “the well-being of conscious creatures” for “happiness” in this paragraph and you have a concise summary of TML’s framework. However, Harris’ philosophical argument for “maximizing the well-being of conscious creatures” as the only and obligatory goal of morality is poorly organized and unpersuasive, with large gaps between the steps, which I had to reconstruct in order to analyze and critique them in my TML Challenge essay. While it’s true that moral decisions affect the well-being of people, this truism doesn’t get Harris very far in service of his ambitious utilitarian agenda.

      • Substitute “the well-being of conscious creatures” for “happiness” in this paragraph and you have a concise summary of TML’s framework.

        Agreed.

        However, Harris’ philosophical argument for “maximizing the well-being of conscious creatures” as the only and obligatory goal of morality is poorly organized and unpersuasive…

        I’m not sure that “obligatory” even plays a part in Harris’s framework, but then I haven’t read the book in a couple of years.

        As to whether there are other moral concerns beyond maximizing the well-being of conscious creatures, if you could show that even one or two of these exist, that alone would utterly demolish the fundamental premise of the book.

      • Beaker

        Two values that I think might qualify:
        Self-determination: the right to decide what you do with your life, even if that might not lead to the optimal decisions for well-being, or perhaps even if that might actively diminish your well-being. Should we in these cases always prevent such choices? To what extent should we influence or even force people toward certain behaviors if the end result is higher well-being? Take, for example, the claim that slaves were actually happier as slaves. If this claim were true, should we then have given up on the abolition of slavery? That is an extreme claim, but similar ones crop up regularly in discussions of health care. Should we impose a “fat tax” to dissuade people from buying unhealthy foods that lead to obesity, and from obesity to decreased well-being?

        Valuing “truth” over well-being: It is not necessarily the case that greater knowledge will also lead to greater well-being. Ignorance might indeed be bliss. Should we sacrifice knowledge if an increase in knowledge results in a decrease in well-being? For example, one of the claims often directed at atheists by theists is that theism makes people feel better, and is therefore good. One of the responses generally given by atheists is that whether theism makes people happy or not is not an argument for its truth. And I personally would argue that even if the theist is correct in his claim that theism improves his well-being, it would still be better if he or she accepted the evidence for what it is, namely that God doesn’t exist.

        Note that in both examples, I don’t think the claims actually hold up. But it might theoretically have been the case that they did. I would not be entirely surprised if researchers on well-being were to find circumstances where they hold true.

        Another issue I would have, though it might be covered in the book, is how you would define optimizing well-being. In the extreme, I could think of cases where the well-being of a large number of people could be increased by sacrificing the well-being of a smaller number of people, leading to a net increase in well-being. People’s needs can often be competing. Would you then sacrifice those people? Or is there a level of sacrificed well-being that is deemed unacceptable and a level that is still deemed acceptable? It would seem to me that you cannot scientifically determine where to draw that line.

        Let me take two examples from health care for a minute here. We generally think that experimentation on humans is only acceptable if we adhere to strong ethical guidelines, which leads to some sub-optimal research designs. Still, we allow some potentially harmful, even potentially fatal, research on humans, provided the risks are low and the people agreeing to the experiment are fully informed about the treatment they will receive and the risks inherent in it. But how great those risks may be is an arbitrary decision, not based in any science. Similarly for the health risks of carcinogenic substances. Since there is no threshold for these substances below which we expect no effect, whether the dose of a given carcinogen is acceptable for use in the general population is determined by whether the risk falls below a one-in-a-million chance of a new cancer case; for industrial use, the permitted risks are higher. Why one in a million and not one in a billion or one in a thousand? There is no good scientific reason; that number is (at least somewhat) arbitrary. I would think something similar applies to well-being. I haven’t read Harris’s book, but I would be interested in how he handles this.

      • It’s going to be tricky to decouple self-determination from well-being because (1) people do seem more content and fulfilled when freed from coercive constraints and (2) people tend to produce more prosperity in free markets. On both points we can look to North and South Korea, or East and West Germany, or similar historical examples of a single culture divided by ideology.

      • As to the non-instrumental value of truth, I think that is a more promising line of reasoning. What we need is a clear-cut example (or at least a hypothetical one) where we would choose more truth over more well-being.

        One of the responses generally given by atheists is that whether theism makes people happy or not is not an argument for its truth.

        I would not try to persuade someone that the universe is a vast, cold, uncaring place if I honestly believed that theism was the only thing keeping them from killing themselves or others. In a clear trade-off between well-being and truth, I would choose well-being.

      • Beaker

        I would definitely agree that self-determination will correlate with well-being. I think the same will hold true for knowledge, by the way. It would seem to me that as a general rule, a more realistic picture of the universe will lead to more well-being. However, general rules are not universal truths.

        I would definitely agree that in a clear trade-off of extreme proportions, we would value well-being over truth. But what about the less clear trade-offs? Should a physician tell the patient that he has only three months left to live, even if that means the well-being of the patient will diminish because of that? Medical ethics prescribes that the physician should. If it were scientifically shown that a liberal, theistic society with a secular government has greater overall well-being than an atheistic society with a secular government, should we stop arguing in favor of atheism? I don’t think we should. Should the government ban smoking entirely (including the sale of smoking products)? It might well increase general well-being; cancer is not a fun experience. But how far should we edge into people’s personal choices in order to promote their well-being?

        I think that if we look at the extreme cases, the answers to these are pretty clear. I also think that in the extreme cases, well-being will often correlate with free choice and knowledge. However, it seems to me that in the more mundane cases this is not always true. I think looking at many of the debates on health care and health prevention, this becomes apparent. And that determining how far we want to go in striking the balance is a matter of personal taste, not one of science.

        Which makes me think of another value that is often debated in the public sphere. This one concerns the handling of animals and environmental protection. That is how far we should go in promoting the well-being of conscious organisms (i.e. humans) when sacrificing the well-being of non-conscious organisms (i.e. every other living creature on the planet) to do so. For example, animal rights activists would hold that we should not perform any animal research, even if that means sacrificing potential treatments for serious diseases. I think animal rights activists that think this are nuts. But I would hold that there is no specific scientific reason to think we should value the well-being of humans more, compared to the well-being of the test animals we routinely torture to arrive at a treatment. On environmental issues, how far should we go to protect the habitat of the European hamster from extinction, even if relatively few people ever heard of the damn rodent and few would miss it if it is gone? I think the continued existence of this creature needs to be protected, even if it would not lead to a maximization of our own well-being. I don’t have a good scientific reason for thinking this.

      • Should a physician tell the patient that he has only three months left to live, even if that means the well-being of the patient will diminish because of that? Medical ethics prescribes that the physician should.

        Medical ethics are generally rule-based guidelines rather than narrowly situational. The strong preference for disclosure is not something we can apply to only one patient with three months to live; it applies almost entirely across the board, because it is believed that mentally unimpaired adult patients will make better decisions if they know exactly what they are facing.

        There is also the fact that the patient is not generally the only conscious creature of interest here, they will want to “get their affairs in order” so as to minimize the stress on their loved ones when they are gone. Probate proceedings, in the absence of a clear will and testament, can wreak havoc on the family of the decedent.

      • If it were scientifically shown that a liberal, theistic society with a secular government has greater overall well-being than an atheistic society with a secular government, should we stop arguing in favor of atheism?

        Good God, yes!

        Contemporary history tells us that societies with high levels of stability and prosperity and modernity tend towards higher levels of organic unbelief, which does not lead to anarchic chaos as the fundamentalists believe.

        Hypothetically, though, if high levels of uncoerced unbelief inevitably led to something like Maoism or Stalinism or Kimism, then yes, I would hold my tongue. Even if it could be shown only that theism is generally inversely correlated with violent crime rates, that alone would give me pause.

      • Beaker

        So how far would you go in stopping atheists from arguing their beliefs in that case? Would the persecution of a small number of vocal atheists be justified to prevent a possible Maoism from arising? Would that justify the inevitable rise of a “knowledgeable” class that acts as a gateway to knowledge for the rest of the population? How much well-being of individuals would you sacrifice for the good of the overall well-being of the whole society (to tie this back to my earlier question on how Harris wants to handle this possible trade-off of individual well-being versus the well-being of society as a whole scientifically)?

        And what if, hypothetically, high levels of uncoerced unbelief inevitably led to small decreases in well-being? Say surveys showed a 20% increase in the number of people who reported having depressed feelings during the past year, and the incidence of depression rising by 50%, from a lifetime prevalence of 17% to 24%. Would that justify no longer arguing for atheism? To me, it would not.

        Again, as I stated in my previous reply, I think most of the extreme cases like the ones you mention in your reply are pretty clear cut (although those cases inevitably lead you to have to scientifically justify various trade-offs of well-being). I would argue that there is a balancing act between competing values, as well as a balancing act between the effects on individuals versus society, that you cannot solve scientifically.

      • So how far would you go in stopping atheists from arguing their beliefs in that case?

        The first question was whether we should stop making an argument for something true if we know that said truth will diminish overall well-being. The second question was whether we should use force to stop others from making the argument.