• The correlation of belief in free will and paranormal beliefs

    This is really interesting, and while it doesn’t prove anything in and of itself, it does hint at a connection between more ‘out there’ irrational beliefs and belief in free will, which, in my opinion, is equally irrational.

    Here is the abstract, from the journal Frontiers in Psychology:

    Free will is one of the fundamental aspects of human cognition. In the context of cognitive neuroscience, various experiments on time perception, sensorimotor coordination, and agency suggest the possibility that it is a robust illusion (a feeling independent of actual causal relationship with actions) constructed by neural mechanisms. Humans are known to suffer from various cognitive biases and failures, and the sense of free will might be one of them. Here I report a positive correlation between the belief in free will and paranormal beliefs (UFO, reincarnation, astrology, and psi). Web questionnaires involving 2076 subjects (978 males, 1087 females, and 11 other genders) were conducted, which revealed significant positive correlations between belief in free will (theory and practice) and paranormal beliefs. There was no significant correlation between belief in free will and knowledge in paranormal phenomena. Paranormal belief scores for females were significantly higher than those for males, with corresponding significant (albeit weaker) difference in belief in free will. These results are consistent with the view that free will is an illusion which shares common cognitive elements with paranormal beliefs.

    Mogi K (2014) Free will and paranormal beliefs. Front. Psychol. 5:281. doi: 10.3389/fpsyg.2014.00281

    Received: 15 January 2014; Accepted: 17 March 2014;
    Published online: 02 April 2014.
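    The abstract above reports significant positive correlations between belief-in-free-will scores and paranormal-belief scores. For readers curious about the statistic behind that claim, here is a minimal sketch of the Pearson correlation coefficient, the standard measure for this kind of relationship. The questionnaire scores below are invented for illustration only and are not the study’s data.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance and variances, computed around the sample means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 1–5 questionnaire scores (NOT the study's data):
free_will = [3, 5, 2, 4, 5, 1, 4, 3]
paranormal = [2, 4, 1, 3, 5, 1, 4, 2]
print(round(pearson_r(free_will, paranormal), 2))  # → 0.94
```

A value near +1 means the two belief scores rise and fall together; the paper’s claim is that something like this (with appropriate significance testing across 2076 respondents) holds for free-will and paranormal beliefs.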

     


    Category: Free Will and Determinism, Philosophy, Psychology


    Article by: Jonathan MS Pearce


    • Luke Breuer

      If some sort of weak interactionist form of dualism is true, with paranormal beliefs having to do with those very interactions, and assuming my small ∆v model of LFW, then we can understand how disbelief in LFW and disbelief in paranormal beliefs would go together: simply stay away from Lagrangian points and the like, such that the ‘weak interaction’ is never allowed to cause discernible effects (think of the inability to ever meaningfully change course on the Interplanetary Transport Network). You never have meaningful free will, and perhaps this would lead to never observing paranormal effects. You become a slave to determinism, forever divorced from the spiritual realm. Bahaha, this is so fun.

      To stir the pot, I shall add the Huffington Post’s ‘Presentiment’ Study Suggests People’s Bodies Can ‘Predict Events,’ But Scientists Skeptical, published 11/05/2012.

      • kraut2

        “If some sort of weak interactionist form of dualism is true, with
        paranormal beliefs having to do with those very interactions, and
        assuming my small ∆v model of LFW, then we can understand how
        disbelief in LFW and disbelief in paranormal beliefs would go together:
        simply stay away from Lagrangian points and the like”

        After all that – can you please state in clear English what you are trying to say? Or is it piffle cloaked in a mantle of deepities?

        • Luke Breuer

          Nope, if this is how you treat what I’ve said, I have no hope that we’ll reach common understanding. I do not wish to try, unless you’re going to approach my ideas with more respect.

          • Void Walker

            Somehow I knew you’d be the first to comment on this post… :-p

          • kraut2

            You earn respect – I do not respond well to demands. If you use clear and unambiguous language, then I will treat your ideas with the respect they deserve. Your post seems to have one goal: to obfuscate, or to follow the tradition of jargon-laden philosophical dispute that tries to keep the great unwashed from understanding what is meant.

            • Luke Breuer

              Why would I desire to earn your respect? You’re good at tearing down people’s ideas, showing how you think they’re ugly and false and terrible. But can you enhance ideas that aren’t perfect, and make them better? My memory of you isn’t fantastic, but I cannot recall you displaying this ability, especially when you talk to people vastly different than you. There are many, many people who can tear down; not so many can build up and enhance.

            • Void Walker

              Honestly, Kraut has been sour (haha….) to me, as well.

      • Andy_Schueler

        – We could easily grant you dualism for the sake of the argument; it doesn’t solve any of the conceptual problems you have wrt LFW in any way, shape or form.
        – I’ll lean out of the window and assure you that the alleged relevance of Lagrangian points for this issue is a mystery for literally anyone but you.
        – It is not either libertarianism or determinism. And you know that.
        – “Never have meaningful(!) free will”? Well, first of all, it is the height of arrogance to assert that what is not meaningful to you cannot be meaningful to anyone else, and second, “meaningful” implies that you actually know what it means, but you don’t. We asked you countless times how your conception of free will is any different from a human will given indeterminism, and you cannot explain how your conception is any different from indeterminism. That is not “meaningful(!) free will”…
        – Whether such a “weak interaction” which causes “discernible(!) effects” exists or not is an empirical question, and it has already been answered. All particles and forces that are of relevance for “everyday physics”, i.e. the conditions under which the human brain operates, are known. See for example Sean Carroll’s explanations here [1] [2] [3]. I already know your likely reaction to positions like the one Carroll espouses in those links – a form of radical skepticism that is completely ad hoc and that you’ll simply forget as soon as you talk about a different issue.

      • Andy_Schueler

        And one more thing regarding “meaningful free will”.
        Please read this:
        http://www.informationphilosopher.com/freedom/problem/
        http://www.informationphilosopher.com/freedom/adequate_determinism.html
        http://www.informationphilosopher.com/freedom/requirements.html
        http://www.informationphilosopher.com/freedom/responsibility.html
        – and explain what you find “not meaningful” or problematic about it, and how your idea of what free will means is any different from it.

        • Void Walker

          Hmm…nice website. Bookmarked.

        • Luke Breuer

          To others, I might. But to you? Why would I try? Your sole purpose in talking to me is to point out everything you think I have wrong. At least, that models you really, really well.

          • Andy_Schueler

            Good. Your sole purpose is to spam your canned responses on as many threads as possible. At least that models you really, really well.

          • Void Walker

            Luke, how can you still hold onto a belief that humans possess free will, and are first cause agents, after all the incoherencies that have been pointed out? Do you honestly feel that the evidence for LFW is THAT strong, or are you merely defending the concept because it is a supporting structure for your faith? Let’s face it, robbed of LFW much of your belief system crumbles.

            • Luke Breuer

              This response took some thinking.

              I have many reasons to consider something LFWish—something ¬CFW ∧ ¬DW—to be a better model of reality than either CFW or DW. Furthermore, the mere act of defending LFW (mostly on this blog) has helped me understand these issues more clearly.

              1. One reason we haven’t talked about is based on Augustine’s Si […] fallor, sum. I err, therefore I am. But what is ‘erring’ for a being who has no control over his/her actions? What is ‘erring’ for a being who has no telos, no final goal to thwart? There is only ‘like’ and ‘dislike’, which animals also have.

              I do worry that the above is dangerously similar to Plantinga’s EEAN. Still though, consider the statement “I chose to be correct.” What is ‘I’ (Jonathan doesn’t believe in “a continuous I”, for example), and what is ‘choose’? I worry that without something LFWish, those terms cannot even get off the ground in a fully coherent manner. We want to think that we’re right because we worked hard for it, not because it just happened that we’re right.

              2. Another reason is that subjectively we do have free will. I’m reminded of Noam Chomsky’s The Case Against B.F. Skinner, which might be summarized as Skinner thinking he has free will and nobody else does, and therefore he can use technology to manipulate them. There is a fundamental asymmetry if I act as if I have free will, but treat others as if they don’t. I don’t know quite what to do with “subjectively we have free will, objectively we don’t”—a claim a DWer would need to assert.

              3. Yet another reason is that I don’t know a good way to separate the personal from the impersonal without something LFWish. A CFW/DW deity would be Spinoza’s God, or at least Einstein’s God. (From Baruch Spinoza: “Spinoza equated God (infinite substance) with Nature, consistent with Einstein’s belief in an impersonal deity.”) If we are all just impersonal beings, then why is it bad to turn off some machines and not other machines? Human civilization has thrived at times where some organisms with human DNA could be turned off whenever some of the other organisms desired. That’s still true in perhaps the majority of the world, today, albeit in some places with only strong desire.

            • Andy_Schueler

              But what is ‘erring’ for a being who has no control over his/her actions?

              You have control over your actions given any of the options of LFW, CFW, DW. The negation of the claim that you “have control over your actions” would necessarily mean that you’d either a) behave 100% erratically or that you b) are controlled by someone else.

              We want to think that we’re right because we worked hard for it, not because it just happened that we’re right.

              Yeah… So?

              Another reason is that subjectively we do have free will.

              Not exactly. Subjectively, we do make choices. This is true in any case (assuming that you do not behave 100% erratically and are not controlled by someone else). Making choices is not the same thing as having free will in a libertarian sense.

              Yet another reason is that I don’t know a good way to separate the personal from the impersonal without something LFWish.

              Then you do not know how to separate the personal from the impersonal, period. If you rely on LFW to make this distinction, then you would need to give a coherent account of what LFW actually means. But this you cannot do. So, if you say that you cannot separate the impersonal from the personal without it, then you cannot make this separation at all – even if we assume that human beings have libertarian free will.

            • Luke Breuer

              You have control over your actions given any of the options of LFW, CFW, DW. The negation of the claim that you “have control over your actions” would necessarily mean that you’d either a) behave 100% erratically or that you b) are controlled by someone else.

              Suppose your point holds: does it affect my 1.? What is ‘erring’? And let’s look at ‘control’: what does it mean to control things when you made no choices? And if you claim that you did make choices, what is a ‘choice’, under your model of the will?

              Yeah… So?

              In the sense I used it, “just happened that we’re right” ⇒ being right was not the result of what we saw as reliable truth-directed choices. Given this “not the result of”, the result of said truth-directed choices could equally as likely have been falsehood. And then how would we know whether we’re right or wrong? In some cases we can (e.g. solutions to NP-complete problems), but not always, and perhaps not mostly.

              Not exactly. Subjectively, we do make choices.

              Do we make choices, or do we observe choices being made? This goes back to my need for a definition of ‘I’ and ‘choose’. It’s not at all clear that my own definitions cohere under an assault by CFW or DW. If ‘I’ am merely a fantastically complex computer program, then I am not responsible for any of the input, nor the actual program, and therefore I don’t ‘choose’ the output. The only ‘I’ which results in a meaningful definition of ‘choose’ is an ‘I’ associated with ‘noise’ fed into the computer program. And even then, ‘choose’ is only valid when the ‘noise’ determines which of the viable choices was chosen. Otherwise the ‘noise’ was irrelevant, and ‘I’ did not ‘choose’.

              Incidentally, I’ve long been thinking of SELO in place of ‘noise’ above, and when I mention Lagrangian points, I mean those points where the noise actually causes a different course to be taken than would have otherwise been taken. Or: a small ∆v which made a meaningful difference.

              Then you do not know how to seperate the personal from the impersonal, period. If you rely on LFW to make this distinction, then you would need to give a coherent account of what LFW actually means.

              Did you attempt to describe what ¬CFW ∧ ¬DW is? That would be my best guess at what I’m actually using when I’m using LFW. As to your ‘know’, you might be right: I might not be able to articulate it to you. But that doesn’t mean it’s not something sensible in my own mind; people cannot always properly articulate, especially to people who believe quite differently.

            • Andy_Schueler

              Suppose your point holds: does it affect my 1.? What is ‘erring’?

              To make a decision that turns out to be either factually incorrect or a decision that leads to something different from what you anticipated. And that seems to me to be something completely independent of any notion of “will” that people came up with so far.

              And let’s look at ‘control’: what does it mean to control things when you made no choices?

              Dunno, maybe nothing. But who denies that people make choices?

              And if you claim that you did make choices, what is a ‘choice’, under your model of the will?

              Actualizing one of two or more potential alternatives. What else could it be?

              In the sense I used it, “just happened that we’re right”⇒ being right was not the result of what we saw as reliable truth-directed choices. Given this “not the result of”, the result of said truth-directed choices could equally as likely have been falsehood.

              I honestly have no idea what you are talking about. I am on the third floor right now, and if I want to go outside and figure that using the stairs is a better option than jumping out of the window, what does it mean to say “it just happened that I was right”? “Just happened” as opposed to what? And it would have been equally likely that jumping out of the window would have been the right choice?? That cannot be what you mean. So I guess you have to come up with a concrete real-world example here; in the abstract, I have no clue what you are even talking about.

              Do we make choices, or do we observe choices being made?

              Both, I’d say. What is the alternative?

              Incidentally, I’ve long been thinking of SELO in place of ‘noise’ above, and when I mention Lagrangian points, I mean those points where the noise actually causes a different course to be taken than would have otherwise been taken.

              I’d say it is precisely the other way around. The noise doesn’t push you in any direction, it merely opens up potential alternatives and your will has the final say in determining which of those alternatives will be actualized.

              Did you attempt to describe what ¬CFW ∧ ¬DW is?

              Let me answer that with a question:
              Do you, or do you not, believe that you can freely choose out of the blue for no reason at all to believe that Jesus actually didn’t set a good moral example, but rather an average or even a bad one, while simultaneously being able to claim ownership for that free choice that you made out of the blue for no reason at all?
              Yes or no? If yes – well, that is what libertarianism would mean, and it doesn’t survive critical scrutiny because it is a transparently self-refuting concept. If no – then we do not disagree here.

            • Luke Breuer

              To make a decision that turns out to be either factually incorrect or a decision that leads to something different from what you anticipated.

              What is “factually incorrect”? F = ma is “factually incorrect”, except in a certain realm (which we cannot even fully define, except as a subset of our experiences). The best way I can formulate a conception of ‘to err’ based on what you’ve said is “to fail to have efficacy of the will.” To not get what you want. One of the things we want is to properly anticipate the future.

              Intersubjective agreement doesn’t really help you out here with “factually incorrect”, by the way. One person can be deluded and a society can be deluded. Once again I quote Planck: “Science advances one funeral at a time.” I don’t see how to escape the idea that all you’re talking about here is efficacy of the will. And yet, you didn’t decide what your will would be. According to you, you did not will what you will. And thus, “factually correct” is merely a function of obtaining that which you happen to want.

              Actualizing one of two or more potential alternatives. What else could it be?

              We don’t say that a radioactive atom chose to decay or not decay. So all you’re doing here, is pushing the definition back from ‘choose’ to ‘actualize’. Do we say that silicon computers ‘choose’ anything? That seems very odd to me. A silicon computer is not an ‘I’, and neither is a radioactive atom. So why are humans ‘I’s?

              I honestly have no idea what you are talking about.

              Then I suggest we drop this tangent. There is a theory of justification of belief that says we are justified in believing something iff the way we believed it was valid. I undercut “the way we believed it was valid” with “just happened that we’re right”; you didn’t see this as relevant. But perhaps you’re not interested in talking about this topic.

              Both I´d say. What is the alternative?

              Observing without choosing, of course. Surely you’ve encountered in fiction the idea of being possessed such that you see everything your body is doing, but aren’t able to cause your body to do (or not do) anything? Instead of being possessed by another being, under DW we are possessed by randomness conditioned by laws. It’s hard for me to see room for both ‘I’ and ‘choose’ under DW.

              I´d say it is precisely the other way around. The noise doesn´t push you in any direction, it merely opens up potential alternatives and your will has the final say in determining which of those alternatives will be actualized.

              And yet, your will isn’t “saying” anything, it’s merely deterministically consuming input and producing output. It is a slave to randomness when near Lagrangian points. When not near Lagrangian points, it is a slave to prior states of affairs. The will is just a computer program according to you, isn’t it? It only ‘chooses’ in the sense that a computer ‘chooses’.

              Do you, or do you not, believe that you can freely choose out of the blue for no reason at all to believe that Jesus actually didn´t set a good moral example, but rather an average or even a bad one, while simultaneously being able to claim ownership for that free choice that you made out of the blue for no reason at all?

              I honestly don’t know. I can say with some confidence that I am truly responsible for some of my actions, in the sense that I could have acted differently. Perhaps what I’m saying here is that I am an instance of SELO. Or at least, part of me is SELO. You want to associate ‘I’ with the computer program; I want to associate ‘I’ with the ‘noise’ fed into the computer program. Now, over time, the ‘noise’ will condition the computer program (the computer program can be rewritten as a function of noise), so I really would consider part of the computer program to be ‘I’. I think this disagreement (I think it’s disagreement?) is worth exploring.

            • Andy_Schueler

              What is “factually incorrect”? F = ma is “factually incorrect”, except in a certain realm (which we cannot even fully define, except as a subset of our experiences). The best way I can formulate a conception of ‘to err’ based on what you’ve said is “to fail to have efficacy of the will.” To not get what you want. One of the things we want is to properly anticipate the future.

              Again, I have no idea what you are talking about. If I figure that a good way to go outside would be to jump out of the window instead of using the stairs, I’m reasonably certain, to put it mildly, that my “error” would quickly become apparent. Similarly, if I am an engineer and figure that I don’t need to correct for relativistic effects when working on a global positioning system, my “error” would also quickly become apparent. I can only guess that your point here is, that it is impossible to know whether final and absolute truth claims are just that – final and absolute. How this is supposed to be relevant for anything here is a complete mystery to me though.

              We don’t say that a radioactive atom chose to decay or not decay

              Yes, because an atom is not sentient, doesn’t have any preferences about whether it wants to decay in a given situation or not, and has no will that could determine decay or no decay even if it were sentient and had any preferences in this respect. That the concept of “choice” is meaningless for an atom seems to be a remarkably trivial insight – “desire” also loses all meaning when you ask what atoms would “desire”; how you would draw any conclusions from that to human desires is a mystery to me.

              Do we say that silicon computers ‘choose’ anything? That seems very odd to me. A silicon computer is not an ‘I’, and neither is a radioactive atom. So why are humans ‘I’s?

              If you could have a conversation with your computer about this very subject here, a conversation that is no less meaningful than a comparable one that you had with your wife about the subject, would you still think that this question you pose here would be meaningful?

              Then I suggest we drop this tangent. There is a theory of justification of belief that says we are justified in believing something iff the way we believed it was valid. I undercut “the way we believed it was valid” with “just happened that we’re right”; you didn’t see this as relevant. But perhaps you’re not interested in talking about this topic.

              It’s not a lack of interest. It is simply completely unclear to me what you are talking about. That’s why I asked you to come up with a concrete real-world example. When I translate what you say here into a situation in the real world, e.g. my example re going outside via walking down the stairs or jumping out of the window, I end up with nonsense, so I’m reasonably certain that I do not understand what you are trying to convey here. Could you please translate what you are saying from the abstract to the concrete with a real-world example?

              Observing without choosing, of course. Surely you’ve encountered in fiction the idea of being possessed such that you see everything your body is doing, but aren’t able to cause your body to do (or not do) anything? Instead of being possessed by another being, under DW we are possessed by randomness conditioned by laws.

              There is no true randomness in determinism. And how is your SELO any different from randomness + laws?

              And yet, your will isn’t “saying” anything, it’s merely deterministically consuming input and producing output. It is a slave to randomness

              Quite the other way around: it is randomness that gives my will the freedom to choose between true alternatives. I didn’t choose my will, true – but how you would define “personhood” if your will is something that you can freely choose is a mystery to me. So assuming that you could freely choose to make your will more similar to that of Jesus or that of Hitler, why would you choose one over the other? Note that your will and anything related to it is obviously no longer a candidate for an answer here, since this is exactly what you are trying to freely choose. Similarly, the choice cannot be anything from the outside, since it is YOU that is freely choosing. It also cannot be a choice that happens for no reason at all, because this would simply mean that some people freely choose to be more like Hitler for no reason whatsoever while others choose to be more like Jesus for no reason whatsoever – immediately killing any notion of responsibility and individuality / personhood. So, what is left as an answer for why you freely choose to make your will more similar to, say, Jesus?

              The will is just a computer program according to you, isn’t it?

              Of course not. The concept of a will becomes meaningless without sentience.

              I honestly don’t know. I can say with some confidence that I am truly responsible for some of my actions, in the sense that I could have acted differently.

              So can I.

            • Void Walker

              Dammit, Andy stole my thunder.

            • Luke Breuer

              I’d still be interested in your response. There are some ways in which you two view this issue differently.

            • Void Walker

              1.

              It seems to me that one reason you subscribe to LFW is to maintain culpability. Relating to your faith, can you imagine how much damage would be done to your conception of Yahweh, and the fall, if LFW was eviscerated? Would this not render Yahweh guilty of all the evils in the world?

              2.

              While humans are pretty convinced of their free will, and it just intuitively makes sense, you must remember that many, many times we feel that we can do certain things, or feel that reality is a certain way, with zero supporting evidence behind said feelings. For all our triumphs as a species, we also must bear the burden of countless inequities and mistakes.

              3.

              I feel you here, Luke. I appreciate your honesty. But it seems to me more an appeal to emotion than a reasoned response (no offense meant, trust me), or evidence of LFW. It strikes me as more “The world *ought* to be A, instead of B”, as opposed to “Here are some very good reasons that the world is A, rather than B”.

            • Luke Breuer

              1. Culpability is moral; I’m talking truth, here. But no, this wouldn’t render Yahweh guilty; it would eviscerate the very idea of guilt (or force a sociological definition of it). If we logically cannot have LFW, neither can Yahweh.

              2. If you kill too much common sense, you undermine the very grounds by which you killed common sense. You dig out the foundation from underneath you. You end up with a really long chain of reasoning whereby the conclusion contradicts a premise.

              3. Tell me why it is ok to turn off some machines and not other machines. This ain’t emotional, it’s practical. It questions the very foundation of morality and ethics.

            • Void Walker

              1. Why are we equated to Yahweh? Can you not imagine a world in which we are fully determined, but Yahweh is not? A puppeteer of sorts? I question your imagination ;-)

              2. Still, this doesn’t fundamentally address my contention. Grounding a belief in LFW because it just *feels* right is not a sufficient argument in favor of its existence.

              3. Why should I do this? All that I pointed out was that an appeal to emotion is hardly a cogent response to a question built by, and requiring, evidence to ground it.

              Actually, we can examine mass extinctions and pose this very question to God. “God, why did you ‘turn off’ so many of your creations? Why did you use evolution to create life, when the history OF life shows that evolution necessarily entails death/extinction?”

            • Luke Breuer

              1. Sure, Yahweh could have LFW, we could not, and thus he’d be guilty. But you’re saying LFW is inherently contradictory. In which case Yahweh cannot have it. Then he’s not guilty.

              2. But I don’t ground LFW “because it just *feels* right”. For you to claim that I do (instead of just offering up an “if”) would be for you to disregard much of what I’ve said! You’re welcome to do that, but it’ll be a tangent-ending action.

              3. Without emotion, do we even have purpose? I get that appealing to just emotion is bad. But perhaps we’re screwing up by not differentiating between short-term emotion and long-term sentiments. As to the “which machines can you turn off” question, it is vitally important, and I don’t see how it has an objective answer without something LFWish.

            • Void Walker

              1. Yes, I do believe that LFW is contradictory and incoherent. But, as I’ve said before, I’m operating from the assumption that LFW is *not* contradictory when I make such claims addressed towards you. I do so because I want to see what your solution is. Does this make sense?

              2. I never said that the *only* reason for your belief in LFW is that it just feels right. It seems to be one of the reasons, however. Have I misunderstood your position?

              3. I honestly don’t believe in the same kind of “purpose” that you do. I do agree that, robbed of emotion, any sense of purpose would quickly crumble. When you say purpose, what, exactly, do you mean? I believe that we find our own meaning in life. One man’s purpose can be the very antithesis of another man’s purpose. We define our own “destinies”, in my view. What is your view, specifically? We’ve never discussed this :-)

            • Andy_Schueler

              Sorry ;-)

            • Void Walker

              :-D It’s okay, I’m still gonna jump in with you. The ocean of Luke is best traversed with a buddy :-p

        • Void Walker

          Hey, Andy. I’m stalking Luke >:-D Wanna know his current blogabouts? I really miss watching you shred him. A lot. It was cathartic for me (atheios knows why).

    • daasdsad

      I read once that not many philosophers believe in LFW; so are most philosophers compatibilists? Or determinists or indeterminists?

      • The devil is in the definition. Most deny LFW. Adherents to LFW broadly correlate with belief in God. Most are compatibilists, who can be defined as soft determinists, which means they believe in determinism, but that it can be compatible with free will. Only that free will needs to be redefined so as not to mean what most people understand it as. I set this out in my book on free will, available over there on the sidebar.

        I would class myself a compatibilist if I agreed on the definition of free will; I just think that the definition used by most compatibilists should be called volition or something.

    • Pingback: Correlación entre creencia en el libre albedrío y creencias paranormales | Teocidas.com()