
Posted by on Jan 8, 2014 in God's Characteristics, Interviews, Mathematics | 7 comments

New Books in Secularism Interview

 

As I am sure you are by now aware, I edited and published James A. Lindsay’s Dot, Dot, Dot: Infinity Plus God Equals Folly on my Onus Books skeptical imprint. It has received some great reviews and features a foreword by Victor Stenger.

New Books in Secularism is part of the New Books Network, which interviews authors about their books. Here is their piece on the book:

In the depths of the internet there is many an article discussing the infinity of God. Its authors argue that God is infinite and endless and knows no bounds (what the difference is among those attributes is not usually explained). Imputing infinity to God is nothing new – one rarely (if ever) hears of a god that is deemed finite. In his new book, Dot, Dot, Dot: Infinity Plus God Equals Folly (Onus Books, 2013), James Lindsay argues that declaring God to be infinite is no help to the arguments of believers. Infinity is a concept that almost everyone except mathematicians misunderstands, which doesn’t stop apologists from using the adjective to label their god. Arguing against Platonism, Lindsay explains that infinity is an abstraction, and that abstractions are not equal to reality. He has no objection to the notion of God as an abstraction, but decries the point of view that this necessarily implies existence. Words and numbers are abstractions which we use every day, but no one would argue that they are real the way that a table is real. Human beings, Lindsay argues, invented these abstractions in order to make sense of the universe, and they are limited to the human mind. Apologists who use the concept of infinity as a way to argue for their god, as the author puts it, “confuse the map for the terrain.”

Please check out the podcast – listen away!

  • LukeBreuer

    Arguing against Platonism, Lindsay explains that infinity is an abstraction, and that abstractions are not equal to reality.

    This is fascinating. All that science does is come up with useful abstractions that, while not real, increasingly approximate reality. They are never ‘the terrain’—they’re only a map—but I’m not convinced we don’t treat them as ‘more real’ than other abstractions. It seems like we do. Furthermore, it seems like we’re ‘approaching’ something in science. Our abstractions which don’t exist are becoming ever-more-like something which does exist: reality.

    The command to not confuse a picture of a thing with the thing shows up in the Ten Words, of all places:

    “You shall not make for yourself a carved image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth. You shall not bow down to them or serve them, for I the LORD your God am a jealous God, visiting the iniquity of the fathers on the children to the third and the fourth generation of those who hate me, but showing steadfast love to thousands of those who love me and keep my commandments.” (Ex 20:4-6)

    Don’t make carved images and call them ‘God’. Don’t come up with likenesses (mental representations) and call them ‘God’. Don’t call a model of a thing the thing. Ceci n’est pas une pipe.

    In a sense, human experience deals with nothing but approximations. When you scan your visual field, only an approximation exists in your brain. Scientists rely on the ability to ignore irrelevant bits and find patterns in the relevant bits. We see through a glass dimly.

    If all we ever actually do is deal with approximations and abstractions, what do we mean by ‘real’? It seems odd to call something “just an abstraction”. What exactly is the function of that word, ‘just’? Does it merely mean there is more to understand? Ahh, but that is one definition of ‘unfathomable’; God is described as unfathomable.

    Contrary to the statement that reality is just particles and fields, I would say that any attempt to say it is just X is an abstraction which deprives it of its full glory. It’s an attempt to put it in a box and say that I understand it.

    • http://www.skepticink.com/tippling/ Jonathan MS Pearce

      I’m really interested in this field of philosophy, though it can get really complex and dry when followed right to the hardcore end (Quine, trope theory etc).

      But I think there are an awful lot of things which people take to be real, ontologically speaking. I wager they are nothing more than concepts which attempt to approximate and understand reality, and often they are anthropomorphic human ascriptions.

      I tend to deny the ontological reality of abstracta, preferring to see them as individual conceptions which might just happen to be agreed upon across a section of people.

      Morality is a good example. The more I think about it, the more I favour moral nihilism, insofar as morality can only possibly be a conception. I believe in universal subjective morality, but do not think this qualifies as objective. Objective morality, being mind-independent, would require some kind of Platonism, which is, imho, absurdly unlikely and implausible.

      • LukeBreuer

        What interests me is what difference various ontologies make to people’s actions. Which one is ‘correct’ seems definable only relative to some purpose. If the goal is to discover as much lawfulness of reality as possible—scientific, moral, whatever—then I suspect different ontologies will have differential success. Whether there is a total ordering will be fascinating to discover.

        Your note about moral nihilism I find fascinating. I’m partway through Moral Vision and it strikes me that different moralities or ways of setting values (are these different?) allow for different outcomes. Imagine that we can distinguish particles and fields instead of particles really being bound states. Imagine minds representing the particles, and morality representing the fields. Now, in the spirit of the fine tuning argument, imagine futzing with the fields and how that would result in different possible and impossible (or likely and unlikely) particle arrangements. In this way, I see morality as very much teleologically dependent, just like how ‘science’ is defined by purpose.

        Certain moralities do not even allow the practice of modern science. If people do not follow certain rules in interacting with each other, science just wouldn’t work. Alasdair MacIntyre talks about this in After Virtue, in his discussion of what is required for traditions to be sustained, but it’s also obvious to anyone who knows a few things about how science operates.

        A social scientist is currently studying my wife’s lab, and has noted a potential slowdown in the pursuit of science. It has to do with the increased need for more specialties to collaborate to accomplish a given task. For example, my wife needed to develop an alignment algorithm for doing FRET microscopy. Her brother, who is getting his PhD in computer science and applied math, could develop said algorithm in an hour or two. Even with his help, it has taken up more than eighty hours of her time. What if she hadn’t had him as a brother? There is so much wasted time, due to the cost of finding someone who will help you with ‘minor’ things like this. If enough trust and other supporting systems cannot be set up to facilitate this kind of ‘expertise exchange’, the progress of science and engineering will slow down, as systems complexity increases.

        There is a famous web page called How To Ask Questions The Smart Way, which resulted from experienced programmers (or ‘hackers’) getting frustrated with inane questions by newbies. On the one hand, the smart, experienced hackers wanted to help people. On the other hand, it really frustrated them when newbies would be lazy, making the hackers do a lot more leg work to help the newbies. The lazier the newbies are, the less total help a hacker can provide in his lifetime. I think one can go as far as to say that there is ‘morality’ to asking intelligent questions that give respect and dignity to the person being asked. And, it is beyond a doubt that how questions are asked will determine what futures can be created and how quickly.

        So, unless you can say there is no structure, no order, no lawfulness to human desires (or the desires of minds), there is a structure to morality which must mirror those desires. For desires result in purposes, and purposes require sufficiently good moralities. And so I think there is order and lawfulness where many claim there to be 100% subjectivity. It’s as if people conclude that because not everyone sees everything the same way, there is no lawfulness. Imagine if someone said that about different people’s physiologies responding differently to a given medicine. That person would be laughed at if he said our physiologies are ‘100% subjective’. And yet people aren’t laughed at when they claim this in the mental realm. Color me intrigued. I guess if you don’t think there is order/lawfulness somewhere—past a certain amount—you often cannot see it.

        • James Lindsay

          Not quite on the same level of elaboration as you fellows, but I often wonder about the following moral question with regard to scientific research. It’s easiest to conceptualize in medical science, so I’ll use that to frame it, but it applies more widely by analogy.

          I think most of us agree that we have a moral imperative to do whatever we can to reduce the suffering of others as well as we might. Further, I think we have a salient notion that likely soon-to-be treatments that would reduce suffering, say by curing certain cancers or regenerating lost or damaged limbs, tissues, or organs, carry with them a sense of being within our scope of practical methods by which to do so. This, then, creates a moral imperative to do (medical) research as quickly as we may. (For example, if Sue would die of cancer in a week, but in two weeks she would have a high likelihood of successful treatment, there’s a strong sense of obligation to try to cut into the time to treatability as rapidly as possible–and there will always be Sues of this kind.)

          The problem is that it takes effort and energy to invest in research, and as you note, Luke, as the problems get harder and more complex, in many respects the necessary input to keep up appears to increase over time. But we know that overwork or, more generally, overreaching our available resources, is net detrimental–it causes suffering. Indeed, in medicine, this is much easier to see. If we all have to work harder to fund and execute the research that brings about cures we’ll benefit from (soon enough to do so), then we all incur injury of a kind to do that work.

          In short, there may be a balancing point, perhaps a discoverable one, at which the input needed to keep pushing research forward at an increasing rate becomes too detrimental to justify more broadly. That is, we might be making people unwell by pushing them so hard to be able to make people more well.

          I think it’s a fascinating subject to think about–does such a critical point exist, have we already passed it, when will we pass it, how can we know when we are approaching it or passing it?

          Anyway, not really related, but just something to toss out for the fun of it.

          • LukeBreuer

            I think it’s a fascinating subject to think about–does such a critical point exist, have we already passed it, when will we pass it, how can we know when we are approaching it or passing it?

            I would say that the problem of evil gives a fascinating answer from the Christian position: we’re not doing enough as long as suffering is not decreasing ‘appropriately’ (see: all the problems of utilitarianism). There’s a parable in Luke 12:41-48 which ends this way: “to whom much was given, of him much will be required”. So I’m defining the critical point as what is required to actually solve the problem, not as the point at which people become run-down. This might seem insane.

            One of the things that’s just utterly fascinating about the human mind is how much it is capable of. What’s scary is that these capabilities seem especially unlocked when the human mind is exposed to terrible pain and suffering, and responds with good, instead of just re-projecting all the evil over and over again. One of my friends, who has become a fantastic scientist, witnessed his mother shooting herself in the head when he was five years old. There is reason to believe she was a victim of Project MKUltra, the United States’ best-known human experimentation program. Somehow, this and other evil experiences provoked in him an extremely strong willingness to make the world a better place. It’s as if, in this case, evil produced the solution to itself.

            My wife tells me that while she was taking a computer science course, she pulled one all-nighter a week to solve its problem sets. She didn’t wait till the last day to do it. She was brute-forcing the problems; if you know anything about programming, a big part is to work smart and not just hard (adverbs be abused). I’m a pretty decent software engineer, and therefore I have extensive experience of people who waste lots of time and energy (especially what I call ‘frustration energy’, which is probably a kind of emotional energy) doing things the hard way when there is a known, easier way.

            So, how do we know that people are doing the best they can (Christianese: “work as unto the Lord”), and not being lazy? How do we know that we’re at that ‘critical point’? My answer is when everyone is in a position to point out inefficiencies and better ways to do things, and be listened to. I think that God created the world such that if we really, deeply respect each human being’s ability to be creative and actively contribute to the total good of the world in a non-replaceable-cog fashion, that ‘critical point’ will be at a place such that we set ourselves on a trajectory to reduce suffering and pain to zero or near-zero (people may still stub their toes). In such a world, the motivation to discover reality and ‘make things better’ would come not from pain, but from curiosity that is allowed to flourish by mutual respect for one’s fellow human being.

            My solution to this ‘critical point’ problem is in direct opposition to any sort of planned economy or society (e.g. BF Skinner’s Walden Two), where the masses are treated as ignoramuses compared to the elite. Indeed, there would be no elite; people would merely have different talents. Furthermore, I think pursuit of my solution will result in a series of moralities which seem to ‘approach’ something, just like the math in scientific equations seems to ‘approach’ something. To the extent that a single human being is deemed to be of less value than another, I don’t think you have a ‘morality’ anyway. :-p

        • http://www.skepticink.com/tippling/ Jonathan MS Pearce

          But you kind of prove my point:

          “Imagine if someone said that with respect to different people’s physiologies responding differently to a given medicine.”

          In other words, there is no ontological, mind-independent version of a perfect physiology. That would be Platonism, positing an ideal human form, which is ridiculous in light of many things, notably evolution.

          But things cannot be ‘perfect’ intrinsically, which you seem to imply. There is no such thing as perfect, only perfect for. Things are valued against goals. This is what a moral ought is. You have no intrinsic ought, only an ought with respect to a goal (protasis AND apodosis).

          I ought to change the oil on my car.

          This hides the protasis, “If I want my car to run well, then…”.

          Of course, if I were a scientist investigating the effect of not putting oil in a car, then I ought not put oil in my car.

          Thus ought statements depend on goals.

          This is morality. Indeed, morality itself depends on axioms. The goal of moral theories, then, is to coherently explain and argue for those axioms. This is why non-derivative values are important: they act as axioms (e.g. happiness) which underwrite such accounts.

          • LukeBreuer

            In other words, there is no ontological, mind-independent version of a perfect physiology. That would be Platonism, positing an ideal human form, which is ridiculous in light of many things, notably evolution.

            That wasn’t what I meant to communicate at all. I meant that the fact that different physiologies react differently to a given medicine doesn’t indicate that physiology is 100% subjective. No, we believe that there are scientific reasons for the different reactions. And yet, when it comes to the realm of minds and what they desire, the in vogue thing to do is to say that these desires are 100% subjective. Because he likes chocolate ice cream and I like strawberry.

            Ironically, your belief in CFW would seem to do some damage to what the word ‘subjective’ can mean. And what I wonder is whether there is truly ‘structure’ or ‘order’ to people’s desires, such that there is ‘structure’ or ‘order’ to the morality which would allow those people’s desires to be satisfied. This would make morality dependent on the human psyche, or perhaps some more general ‘laws of the mind’, if we assume that one rational+emotional mind is to another as one Turing machine is to another. Different language and different computing architecture, but the same computation.

            I don’t know what to think about the problem of universals; I don’t think I understand it well enough, yet. Is the electron a universal? Is every electron truly the same as every other? Our physics currently assumes that, but perhaps we’ll find subtle differences at some point. A fundamental aspect of human thought seems to be the ability to approximate things with universals—to come up with categories. But I really do not know my way around this topic. I find it mega-confusing.

            But things cannot be ‘perfect’ intrinsically, which you seem to imply. There is no such thing as perfect, only perfect for. Things are valued against goals. This is what a moral ought is. You have no intrinsic ought, only an ought with respect to a goal (protasis AND apodosis).

            I have no problem with this, because I think purposes exist: I think mind is a fundamental building block of reality and not just an emergent property. :-p

            Now, different religions seem to view perfection differently. Some see unification with ‘God’ and a shedding of one’s illusory personhood as the ultimate good. Christianity, on the other hand, maintains both a unity and a diversity. Unlike electrons, humans are not indistinguishable; there is that stone with a name written on it that only its owner knows. 1 Cor 12 and Romans 14 make it clear that we ought not expect all people to be identical, and that to the extent we think some people are unnecessary [e.g. for the running of society], those people are actually critical (IMO, this is because they have a function we are ignorant of; if they were to disappear, that function would go undone and bad things would eventually happen). There has been a lot of ‘us vs. them’ in Christianity, but I believe that the ultimate goal is the minimal amount of unity that keeps people from wanting to do each other wrong, combined with the maximal amount of diversity, but not so much that it shatters the whole group and results in warring factions.

            If I think the above is the ideal state of human-human interaction, do I think there exist universals? I have no clue!