• Philosophical Zombies Are Coming To Eat Ur Brainz!!!

    There are two kinds of experiences human beings have: we experience facts about the world that we can relate to other people and that they can verify for themselves (there is a supermarket down the street, American Idol comes on TV tonight at seven o’clock, the sky is blue), and we experience facts about what it is like to experience these things firsthand: what it is like to experience the color blue, what strawberries taste like, etc. Have you ever wondered whether there might be a person who doesn’t have any inner experiences of what things are like, but acts just like a normal person? This hypothetical person is what philosophers call a “philosophical zombie.” If you hold his hand to a fire, he might yell and quickly retract his hand, but he won’t have that private sensation of pain that always accompanies such an event when it happens to you. Likewise, if you ask a zombie what the color of the sky is, he’ll tell you it is blue, but when he looks at the sky he won’t have that unique and ineffable sensation of seeing blue like you have.

    Is the philosophical zombie suggestion both possible and meaningful? Dan Dennett doesn’t think so:

    Do you know what a zagnet is? It is something that behaves exactly like a magnet, is chemically and physically indistinguishable from a magnet, but is not really a magnet! (Magnets have a hidden essence, I guess, that zagnets lack.) Did you know that physical scientists adopt methods that do not permit them to distinguish magnets from zagnets? Are you shocked? Do you know what a zombie is? A zombie is somebody (or better, something) that behaves exactly like a normal conscious human being, and is neuroscientifically indistinguishable from a human being, but is not conscious. I don’t know anyone who thinks zagnets are even “possible in principle” and I submit that cognitive science ought to be conducted on the assumption that zombies are just as empty a fiction.

    In Swiss Family Robinson, the shipwrecked family finds a new home on a deserted island. The mother, distraught at the prospect of the family spending the rest of their lives there, is saddened even more at the prospect of her sons never marrying. Later on, the family discovers a girl who had also been stranded on the island. “See!” the father says to his wife, “The island will produce anything we need, even a girl!” This has caused me to think about how we know whether a thing like an island is conscious or not. Suppose that we were to consider the hypothesis that the island was like a conscious person and cared about the family. In such a case, we might expect the family to receive things like a girl, food, etc. We might even find the hypothesis believable if we were to observe a consistent pattern of requests being made by the family and later being fulfilled by the island. The hypothesis would gather strength if the fulfilled requests were for things a deserted island would be highly unlikely to have. Though one or two such fulfilled requests could be chalked up to chance, a very large number of them couldn’t be, and at that point we might come to believe the island was conscious. However, is it still conceivable, or imaginable, that the island wasn’t really conscious and that all the fulfilled requests happened by a long string of coincidences? The “behaviors” of the island (the fulfillment of wishes that took place) are signs of, are evidence for, its consciousness. The behaviors are not identical to its consciousness.
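    The intuition that one or two fulfilled requests could be chance, but a long run of them couldn’t be, can be made concrete with a little probability. Here is a minimal sketch (my own illustration, not from the post, using a made-up 10% chance per request and assuming the requests are independent):

```python
# Toy model: if each request has probability p of being fulfilled by
# sheer coincidence, and the requests are independent, then the chance
# that ALL n requests are fulfilled by coincidence is p ** n, which
# shrinks exponentially as n grows.

def chance_of_pure_coincidence(p_single, n_requests):
    """Probability that n independent requests are all fulfilled by luck."""
    return p_single ** n_requests

# With a 10% per-request chance, coincidence quickly becomes untenable:
for n in [1, 2, 10, 50]:
    print(f"{n:3d} requests -> {chance_of_pure_coincidence(0.1, n):.1e}")
```

    The exact numbers don’t matter; the point is that the “long string of coincidences” hypothesis loses credibility exponentially fast, which is why a large number of fulfilled requests counts as strong evidence.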

    Are philosophical zombies possible then? Let’s try another thought experiment. Could you possibly become a partial philosophical zombie? That is, could your brain process visual information without you experiencing sight? As one author put it:

    “You wake up one morning, open your eyes, and what do you notice first? That the sun is streaming in the window, that your alarm clock says 7:30, and that your partner is already getting dressed on the other side of the room — or that, despite registering all this in a moment, you can’t actually see anything?”

    Here we have a creeping suspicion that the brain’s information processing would entail subjective experience. At first glance, philosophical zombies seemed possible, so we gave them the benefit of the doubt, but now the shoe is on the other foot; it seems like they aren’t possible after all.

    What are the characteristics of your subjective experience? Your subjective experience often, if not always, consists of interpreting and reacting to information that you have taken in from the world around you via your sense organs. Example: Your experience of tasting chocolate allows you to distinguish it from vanilla, strawberry, and all other flavors. Example: Objects that reflect light of a certain frequency look “red,” and objects that reflect a different light frequency appear to be another color.

    New thought experiment: suppose that someone in the future builds a super-complicated artificial intelligence, capable of functioning much like a human, which has a number of sensors to detect light, vibrations, etc. that give its artificial mind senses of sight, touch, hearing, and so on. The ability to drink up information from the outside world, process it, interpret it, react to it, and make distinctions between different things out there in the world sounds like just what happens when we are conscious (see above). The more you think about a robot like this, the harder it becomes to imagine it not being conscious. Think about the robot’s internal renderings of the outside world. Its internal renderings would be as impossible for the robot to communicate to others as our immediate experiences are (you can’t really tell someone else what a red thing is like if they’ve never seen red). The robot also might perceive such internal renderings as not being material. As the robot began to take in external information and represent it, the chips on its circuit board might not represent the fact that the whole process of information uptake/interpretation/response is itself purely material in nature, and so the robot would not perceive what is happening as being material, even though it is.

    That last point is important, because many people have described their consciousness as being “immaterial” and have perceived it as impossible to explain in materialistic terms. The confusion, I suspect, comes from the fact that when someone experiences the color red, they aren’t aware of the material processes involved in such an experience, which causes them to label it immaterial. If you imagine a brain processing information about the color red, and you compare it to your own first-person experience of the color red, there is a deep and startling disconnect between the two pictures you have in your head. I think this is what has caused a number of philosophers to believe there is a real distinction, and that the mind is not the same thing as the brain. How could the brain processing such information cause, or be the same thing as, the first-person subjective experience you know so well? The brain does not record information about its own workings (you know what the color red looks like, but you cannot, at least not through introspection, know what neurons are firing or what they are doing when you look at the color red). Therefore, it does not perceive its own perceptions and workings as being mechanistic processes. When you’re having the first-person experience of subjective consciousness, you’re not aware of what a third party would observe in looking at your brain, and vice versa: when you’re watching scans of someone else’s brain, you aren’t having the perceptions or information processing that brain is currently having, so the two never seem like the same thing. Hence the conceptual disconnect that led to mind-body dualism.

    Hopefully all of this hasn’t been a mess of incoherent rambling nonsense. The point that I wanted to get to is this: although it is possible to have behaviors that are associated with and caused by conscious awareness without the consciousness itself (as with the island example), it is not possible to have a functioning brain (or any equivalent information-processing device) without having consciousness.

     


    Article by: Nicholas Covington

    I used to blog at Answers in Genesis BUSTED! I took the creationist organization Answers in Genesis to pieces. I am the author of Atheism and Naturalism and Extraordinary Claims, Extraordinary Evidence, and the Resurrection of Jesus. I am an armchair philosopher with interests in Ethics, Epistemology (that's philosophy of knowledge), Philosophy of Religion, and Skepticism in general.

    6 comments

    1. “it is not possible to have a functioning brain…without having consciousness”

      What about the common fruit fly? Its brain is really small, and mostly devoted to vision. Is it conscious?

      I think there’s something more to it. We know functioning brains can be unconscious, since we do it almost every night. But things like pulse and breathing continue on, and there’s some awareness, to trigger waking. But not much.

      I think consciousness is a self-awareness. A hypothetical deaf, blind, numb, locked-in person could still be conscious a lot of the time. So I don’t know if we can tie consciousness solely to awareness of the external world.

    2. There is, of course, a Chombie.

      A Chombie looks and acts like a philosopher, gives lectures, writes papers, walks the walk, talks the talk. But unlike a real philosopher the Chombie invents a nonsensical entity like the philosophical Zombie under the guise of a thought experiment that actually fails to show what the Chombie claims it shows. The Chombie fails to see that defining out of the Zombie the attribute of subjective experience already presumes that subjective experience is distinct from and separable from the physical objective process of experiencing.

      Example: David Chalmers, after whom I coin the term Chombie.

    3. “That is, could your brain process visual information without you experiencing sight?”

      Yes. Blind sight. But this has nothing to do with being a Zombie; it is an example of physical changes that alter subjective experience. More evidence that our subjective experience is an outcome of the physical. There is nothing in blind sight that implies there is something other than the physical that is altered.

      Great explanation, though I disagree with “it is not possible to have a functioning brain (or any equivalent information-processing device) without having consciousness”. This is only the case if you define (insist) that a functioning brain must be conscious. Many animals that we might not think of as being conscious (specifically non-self-aware) have functioning brains.

      The way I look at subjective experience, which of course is a subjective view, is as follows.

      Any animal brain senses its environment, and also senses its own body, and acts as a control system that allows the animal to survive in its environment. Part of that process is prediction – the automatic prediction of the path a prey animal is taking as it is pursued, for example. This involves feedback systems that continuously monitor and react to the environment, and the animal’s own body.

      From there a ‘more advanced’ brain can also start to include itself as part of the environment – it can monitor its own processes, to a limited extent. This is rudimentary self-awareness. It aids survival because it allows the animal to not only improve its basic motor responses to the environment but also helps it to improve its own mental processing of those responses.

      Below this conscious level there is still subjective experience of the brain body system doing its basic stuff – still a subjective perspective, but not conscious, or not self-aware, not the ‘higher level’ conscious subjective experience that this post addresses. The autonomic nervous system, or any non-biological feedback system, could be said to have this low level subjective experience by virtue of the feedback mechanism – the sensing of one’s own state.

      When an animal acquires a conscious self-aware subjective experience, in addition to the non-aware non-conscious subjective experience, then ‘this’ is what it feels like (i.e. how we feel about our subjective experience).

      So, when wondering what it feels like to be a bat we can only guess that a bat might or might not have sufficient capability to actually feel what it feels like to be a bat. What we can say is that our high level subjective experience is what it feels like to have higher level subjective experiences. Neither we nor bats can say what it feels like to have lower level subjective experiences, because there is no mechanism available to report those experiences to the self, to the higher level, in our case, and possibly no self to which such experiences could be reported if there were such a mechanism for a bat. Some lower level subjective experiences become higher level subjective experiences by simply delivering messages into the parts of the brain that are aware – e.g. for pain.

      It could be argued that this is a ‘just so’ story. But I would respond that at least it’s based on stuff we know about the brain, and does not rely on fictions for which there is no evidence at all, as is the case for the ‘just so’ stories from philosophers like Chalmers.

    4. @randy
      “What about the common fruit fly? Its brain is really small, and mostly devoted to vision. Is it conscious?”

      Maybe not. When I said what I said I had in mind cognitively normal humans. Some animals’ brains may not be conscious, or may only have some highly rudimentary form of it.

      @Ron
      “‘That is, could your brain process visual information without you experiencing sight?’

      “Yes. Blind sight.”

      I’m not so sure. I think it might be the case that some part of the brain is rendering the visual information, and that a sort of summary of that information (a conclusion, if you will) has leaked into the official conscious narrative.

      1. That’s a good one. I’ve been reading Luke for a long time (he used to blog at commonsenseatheism.com). Personally, I don’t think it’s wrong to use thought experiments and such, but we have to be aware that our intuitions are very fallible, especially when it comes to neuroscience and other areas of research far removed from our daily experience.
