• Philosophical Zombies Are Coming To Eat Ur Brainz!!!

    There are two kinds of experiences human beings have: we experience facts about the world that we can relate to other people and that they can verify for themselves (there is a supermarket down the street, American Idol comes on TV tonight at seven o'clock, the sky is blue), and we experience facts about what it is like to experience these things firsthand: what it is like to see the color blue, what strawberries taste like, etc. Have you ever wondered whether there might be a person who doesn't have any inner experiences of what things are like, but acts just like a normal person? This hypothetical person is what philosophers call a "philosophical zombie." If you hold his hand to a fire, he might yell and quickly retract his hand, but he won't have that private sensation of pain that always accompanies such an event when it happens to you. Likewise, if you ask a zombie what the color of the sky is, he'll tell you it is blue, but when he looks at the sky he won't have that unique and ineffable sensation of seeing blue like you have.

    Is the philosophical zombie suggestion both possible and meaningful? Dan Dennett doesn’t think so:

    Do you know what a zagnet is? It is something that behaves exactly like a magnet, is chemically and physically indistinguishable from a magnet, but is not really a magnet! (Magnets have a hidden essence, I guess, that zagnets lack.) Did you know that physical scientists adopt methods that do not permit them to distinguish magnets from zagnets? Are you shocked? Do you know what a zombie is? A zombie is somebody (or better, something) that behaves exactly like a normal conscious human being, and is neuroscientifically indistinguishable from a human being, but is not conscious. I don’t know anyone who thinks zagnets are even “possible in principle” and I submit that cognitive science ought to be conducted on the assumption that zombies are just as empty a fiction.

    In Swiss Family Robinson, the shipwrecked family finds a new home on a deserted island. The mother, distraught at the prospect of the family spending the rest of their lives there, is saddened even more at the prospect of her sons never marrying. Later on, the family discovers a girl who had also been stranded on the island. "See!" the father says to his wife, "The island will produce anything we need, even a girl!" This has caused me to think about how we could know whether a thing like an island is conscious or not. Suppose that we were to consider the hypothesis that the island was like a conscious person and cared about the family. In that case, we might expect the family to receive things like a girl, food, etc. We might even find the hypothesis believable if we were to observe a consistent pattern of requests being made by the family and later being fulfilled on the island. The hypothesis would gather strength if the fulfilled requests were for things a deserted island would be highly unlikely to have. Though one or two such fulfilled requests could be chalked up to chance, a very large number of them couldn't be, and at that point we might come to believe the island was conscious. However, is it still conceivable, or imaginable, that the island wasn't really conscious and that all the fulfilled requests happened by a long string of coincidences? The "behaviors" of the island (the fulfillment of wishes that took place) are signs of, are evidence for, its consciousness. The behaviors are not identical to its consciousness.
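
    To make the "too many coincidences" intuition a little more concrete, here is a minimal sketch in Python, assuming (purely for illustration) that each request has some fixed, hypothetical chance of being fulfilled by accident and that the requests are independent. The specific numbers mean nothing; the point is only how quickly the chance explanation becomes implausible as fulfilled requests pile up.

```python
# Toy illustration of the island argument: if each fulfilled request could
# happen by accident with probability p, and the requests are independent,
# the probability that ALL of them were accidents shrinks exponentially.
# The value of p below is a made-up number, used only for illustration.

def chance_all_coincidences(p: float, n: int) -> float:
    """Probability that n independent requests were all fulfilled by luck."""
    return p ** n

if __name__ == "__main__":
    p = 0.1  # hypothetical chance that any single request is met by accident
    for n in (1, 2, 5, 10, 20):
        print(f"{n:2d} fulfilled requests -> chance explanation: {chance_all_coincidences(p, n):.1e}")
```

    At one or two fulfilled requests the chance explanation is still live; by twenty it is, as the paragraph says, no longer believable, even though it remains conceivable.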

    Are philosophical zombies possible then? Let’s try another thought experiment. Could you possibly become a partial philosophical zombie? That is, could your brain process visual information without you experiencing sight? As one author put it:

    “You wake up one morning, open your eyes, and what do you notice first? That the sun is streaming in the window, that your alarm clock says 7:30, and that your partner is already getting dressed on the other side of the room — or that, despite registering all this in a moment, you can’t actually see anything?”

    Here we have a creeping suspicion that the brain’s information processing would entail subjective experience. At first glance, philosophical zombies seemed possible, so we gave them the benefit of the doubt, but now the shoe is on the other foot; it seems like they aren’t possible after all.

    What are the characteristics of your subjective experience? Your subjective experience often, if not always, consists of interpreting and reacting to information that you have taken in from the world around you via your sense organs. Example: Your experience of tasting chocolate allows you to distinguish it from vanilla, strawberry, and all other flavors. Example: Objects that reflect light of a certain frequency look "red," while objects that reflect light of a different frequency look like another color.
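
    As a rough, non-authoritative illustration of that discrimination point, here is a small Python sketch that classifies a light wavelength into a color label, the way the example above ties reflected light of a certain frequency to "red." The band boundaries are approximate textbook ranges for visible light, and the function name is invented for this post.

```python
# Minimal sketch of stimulus discrimination: mapping a wavelength (in
# nanometers) onto a coarse color label. Band boundaries are approximate.

def classify_wavelength(nm: float) -> str:
    """Return a rough color name for a visible-light wavelength."""
    if 620 <= nm <= 750:
        return "red"
    if 590 <= nm < 620:
        return "orange"
    if 570 <= nm < 590:
        return "yellow"
    if 495 <= nm < 570:
        return "green"
    if 450 <= nm < 495:
        return "blue"
    if 380 <= nm < 450:
        return "violet"
    return "outside the visible range"

if __name__ == "__main__":
    for nm in (650, 530, 470, 900):
        print(f"{nm} nm -> {classify_wavelength(nm)}")
```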

    New thought experiment: suppose that someone in the future builds a super-complicated artificial intelligence, capable of functioning much like a human, with a number of sensors to detect light, vibrations, etc. that give its artificial mind senses of sight, touch, hearing, and so on. The ability to drink up information from the outside world, process it, interpret it, react to it, and make distinctions between different things out there in the world sounds like just what happens when we are conscious (see above). The more you think about a robot like this, the harder it becomes to imagine it not being conscious. Think about the robot's internal renderings of the outside world. Its internal renderings would be as impossible for the robot to communicate to others as our immediate experiences are (you can't really tell someone else what a red thing is like if they've never seen red). The robot also might perceive such internal renderings as not being material. As the robot took in external information and represented it, the chips on its circuit board might not represent the fact that the whole process of information uptake, interpretation, and response is itself purely material in nature, and so the robot would not perceive what is happening as being material, even though it is.
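
    To pin the thought experiment down a little, here is a minimal Python sketch of the loop described above: take in a sensor reading, build an internal rendering, and react to it. The class and field names are invented for illustration, not drawn from any real robotics system.

```python
# Toy sketch of the hypothetical robot's uptake -> interpretation -> reaction
# loop. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rendering:
    """The robot's internal representation of a single stimulus."""
    label: str        # what the robot takes the stimulus to be
    intensity: float  # how strong the stimulus is, from 0.0 to 1.0

class ToyRobot:
    def perceive(self, wavelength_nm: float, intensity: float) -> Rendering:
        # Interpretation: raw sensor numbers become a labeled internal rendering.
        label = "red" if 620 <= wavelength_nm <= 750 else "not red"
        return Rendering(label=label, intensity=intensity)

    def react(self, rendering: Rendering) -> str:
        # Reaction: behavior is driven by the rendering, not the raw signal.
        if rendering.label == "red" and rendering.intensity > 0.8:
            return "stop"
        return "keep going"

if __name__ == "__main__":
    robot = ToyRobot()
    rendering = robot.perceive(wavelength_nm=650, intensity=0.9)
    print(rendering, "->", robot.react(rendering))
```

    Notice that the rendering holds only the content ("red", 0.9); it says nothing about transistors or program code, just as the paragraph suggests the robot's renderings would say nothing about its circuit board.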

    That last point is important, because many people have described their consciousness as being "immaterial" and have perceived it as impossible to explain in materialistic terms. The confusion, I suspect, comes from the fact that when someone experiences the color red, they aren't aware of the material processes involved in that experience, which leads them to label it immaterial. If you imagine a brain processing information about the color red, and you compare that to your own first-person experience of the color red, there is a deep and startling disconnect between the two pictures you have in your head. I think this is what has caused a number of philosophers to believe there is a real distinction, and that the mind is not the same thing as the brain. How could the brain processing such information cause, or be the same thing as, the first-person subjective experience you know so well? The brain does not record information about its own workings (you know what the color red looks like, but you cannot, at least not through introspection, know which neurons are firing or what they are doing when you look at the color red). Therefore, it does not perceive its own perceptions and workings as mechanistic processes. When you're having the first-person experience of subjective consciousness, you're not aware of what a third party would observe in looking at your brain, and vice versa: when you're watching scans of someone else's brain, it might not occur to you that what you are observing is nothing like the perceptions or information processing that brain is currently carrying out. Hence the conceptual disconnect that led to mind-body dualism.
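
    The "two pictures" disconnect can be played with directly, under the same toy assumptions as above: the output of a process carries no record of the machinery that produced it, and an outside inspection of the machinery shows no trace of the content. Python's standard dis module is used here only as a stand-in for the third-person view, and the function name is invented for this post.

```python
# The content-level view versus the mechanism-level view of one and the
# same process.

import dis

def report_color(wavelength_nm: float) -> str:
    """Produce the content-level description of a stimulus."""
    return "red" if 620 <= wavelength_nm <= 750 else "not red"

if __name__ == "__main__":
    # "First-person"-style view: just the content, no hint of the mechanism.
    print("content:", report_color(650))

    # "Third-person"-style view: the mechanism laid bare, with no "redness" in it.
    dis.dis(report_color)
```

    Neither printout contains the other, even though both describe the same run of the same function; that, roughly, is the disconnect described above.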

    Hopefully all of this hasn't been a mess of incoherent rambling nonsense. The point I wanted to get to is this: although it is possible to have behaviors of the kind normally associated with and caused by conscious awareness without consciousness actually being present (as with the island example), it is not possible to have a functioning brain (or any equivalent information-processing device) without having consciousness.

     

    Article by: Nicholas Covington

    I am an armchair philosopher with interests in Ethics, Epistemology (that's philosophy of knowledge), Philosophy of Religion, Politics and what I call "Optimal Lifestyle Habits."