• DNA is Not Like a Computer

    A pro-ID poster (I assume he’s pro-ID; he’s never actually said anything that supports ID) has made the following claim (here):

    Void, since computer science was one of my undergraduate degrees, I’m gonna do you a solid and answer your question. Genetic information is stored, retrieved, processed and translated by the cell just like a computer would treat digital information.

    Consider the following digital code:
    01001000011000010010000001001000
    01100001001000000100110101100001
    01100100011001010010000001111001
    01101111011101010010000001101100
    01101111011011110110101100100001

    If you take the time to translate the above binary code, you will have used the same intelligently designed information processing that cells have used for billions of years to turn a sequence of A,C,T & G into a protein. Here are the parallels and you’ll see that they are certainly not dissimilar.

    hard drive = DNA
    bits = nucleotides
    bytes = codons
    ASCII table = codon table
    letters = amino acids
    words = proteins

    One of these information processing systems was intelligently designed and the other one we use every day. Which one is which? It should be easy to see why some infer ID from biochemistry.
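
    For the record, that binary does decode exactly the way he implies: split it into 8-bit bytes and run each byte through the standard ASCII table. Here’s a minimal Python sketch of that “information processing” (the punchline is his, not mine):

    binary = (
        "01001000011000010010000001001000"
        "01100001001000000100110101100001"
        "01100100011001010010000001111001"
        "01101111011101010010000001101100"
        "01101111011011110110101100100001"
    )

    # Split into 8-bit chunks (bytes) and map each byte to its ASCII character.
    decoded = "".join(
        chr(int(binary[i:i + 8], 2))
        for i in range(0, len(binary), 8)
    )
    print(decoded)  # -> "Ha Ha Made you look!"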

    This post talks about the old ID canard that DNA is like a computer or DNA is like computer code or DNA is like data on a hard drive. Whatever. Let’s dispense with this right now.

    First of all, argument by analogy always fails. Analogies are a teaching tool. They are for describing a difficult concept to someone who has no experience with that concept. By relating it to something they already understand, they can begin to see how the concept works.

    To a 5th grade student, I would make the analogy that DNA is like a blueprint. It tells the cells how to make proteins. I would never use that “DNA is like a blueprint” analogy in a discussion with anyone who had the least idea about what DNA actually is. That’s like (pun intended) going up to an automotive engineer and saying that cars are like horses and buggies, and that’s why your latest model is crap.

    It’s arguable that DNA is digital information. True, DNA is made of nucleotides (which are the important bit) attached to a common sugar-phosphate backbone. Those four nucleotides could be described digitally using two bits[1]. Because of binary notation (like decimal notation, except with two possible values instead of ten), two bits can represent four things. In this case, the four DNA nucleotides: A, T, C, and G.
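
    Here’s a sketch of that naive two-bits-per-base scheme in Python (the particular bit assignments are my own arbitrary choice):

    # Naive 2-bit encoding of DNA: four symbols, two bits each.
    ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    DECODE = {v: k for k, v in ENCODE.items()}

    def pack(seq):
        """Pack a DNA string into an integer, two bits per nucleotide."""
        bits = 0
        for base in seq:
            bits = (bits << 2) | ENCODE[base]
        return bits

    def unpack(bits, length):
        """Recover the DNA string from its packed form."""
        bases = []
        for _ in range(length):
            bases.append(DECODE[bits & 0b11])
            bits >>= 2
        return "".join(reversed(bases))

    seq = "GATTACA"
    packed = pack(seq)
    print(f"{packed:0{2 * len(seq)}b}")  # -> 10001111000100
    print(unpack(packed, len(seq)))      # -> GATTACA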

    So, if you see someone using this argument and claiming that two bits per nucleotide captures DNA, then you are free to call them out on having zero knowledge of DNA or biochemistry. Here’s why.

    Yes, two bits cover the four nucleotides of DNA. But you need another bit to include RNA, which, as we all know, can act as an enzyme to change DNA. So, that’s pretty important.

    But DNA is a hella lot more detailed than that. We need another bit to cover methylation. Basically, the nucleotide gets changed by having a methyl group attached to it. That can have a variety of effects, including stopping other things from happening. So we have to consider that. In fact, there are over 100 known chemical changes that can occur to various nucleotides. Each one is either present or not, so we need at least 100 more bits to deal with those.
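
    To make the bookkeeping concrete, here’s a toy per-nucleotide record that follows the accounting above: two bits of identity, one bit for DNA-vs-RNA, and one present/absent flag per known modification. (The 100 figure is the minimum from above; this record is illustrative, not a real catalogue of modifications.)

    from dataclasses import dataclass, field

    N_KNOWN_MODIFICATIONS = 100  # lower bound on known chemical changes

    @dataclass
    class NucleotideState:
        """Toy record: base identity plus per-modification flags."""
        base: str                # "A", "C", "G", "T", or "U"
        is_rna: bool = False     # 1 bit: DNA or RNA
        modifications: list = field(
            default_factory=lambda: [False] * N_KNOWN_MODIFICATIONS
        )

        def bit_cost(self):
            # 2 bits identity + 1 bit DNA/RNA + 1 bit per modification
            return 2 + 1 + len(self.modifications)

    n = NucleotideState(base="C")
    n.modifications[0] = True  # e.g. flag this cytosine as methylated
    print(n.bit_cost())        # -> 103 bits for one nucleotide, before context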

    The real trick in dealing with DNA as digital information is that DNA pieces aren’t taken in isolation. The entire gargantuan molecule that is a DNA chromatid interacts with itself and with other DNA strands. It folds around histone molecules (most of the time), and certain portions of DNA are more likely to sit at certain locations on those histones. That can have an effect on how the DNA is copied, transcribed, and mutated. So, our digital model has to account for that. Oh, and there are multiple types of histone too.

    Then even the histones can have molecular attachments (at least ten or so kinds), and those have an effect on the DNA as well. For example, acetylation (which is like methylation, but with an acetyl group) of certain histones can change transcriptional competence. So, we have to consider that.

    Then there are all kinds of other effects that can’t be taken in isolation. There are alleles that cause mutations in other alleles (and I just learned that today; how freaking awesome!). So, taken by itself, the nucleotides of an allele may be relatively simple to model digitally, but when combined, two stretches of DNA can have marked effects on each other.

    Then, of course, we need to talk about the relative rates of various kinds of mutation across pieces of DNA. Some areas of DNA are much more prone to mutation than others. This is affected by everything I’ve mentioned above and much, much more.

    Any model of DNA that attempts to talk about the whole of DNA, but doesn’t include even a couple of these effects, is just not going to work. Honestly, I wish that ID proponents would step up. Information technology is a growing field in biochemistry, and it’s being used by several scientists to explore DNA. Not by ID proponents, oddly, but by other scientists.

    At this point, we’re into hundreds of bits just to describe a single nucleotide and possibly trillions or way more to describe their interactions. Honestly, I’m not even sure how to approach it. I’m not a programmer. I don’t even know if it could be hard-coded like that. So much of the interaction depends on so many other things. A cell is not an isolated thing; even a single-celled organism has inputs from and outputs to its environment.
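
    Here’s the back-of-the-envelope version of that blow-up, assuming a diploid human genome of roughly 6 billion base pairs (a standard ballpark) and the ~103-bit per-nucleotide record sketched above. Even counting only pairwise interactions, the numbers get silly:

    genome_bp = 6_000_000_000     # rough diploid human genome size
    bits_per_nucleotide = 103     # from the toy record above

    state_bits = genome_bp * bits_per_nucleotide
    pairwise = genome_bp * (genome_bp - 1) // 2   # n-choose-2 pairings

    print(f"{state_bits:.2e} bits just for per-nucleotide state")  # ~6.18e+11
    print(f"{pairwise:.2e} possible pairwise interactions")        # ~1.80e+19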

    As far as I know, we can’t even model protein folding very well, and DNA is orders of magnitude more complex.

    But the reason I said DNA is only arguably digital is that it responds to analog inputs. The amount of a hormone in the bloodstream determines the DNA response. This is what causes everything from limbs to mouths to form. We have a head end and a butt end not because the head end is a digital 1 and the butt end is a digital 0, but because there is a gradient of hormone levels which is highest at the head end and slowly drops off until we get to the butt end.
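
    This is essentially the textbook “French flag” picture of a morphogen gradient: an analog concentration read off against thresholds. A toy sketch (the decay curve and the thresholds are invented numbers, purely to show the shape of the idea):

    import math

    def morphogen(position):
        """Analog gradient decaying from head (0.0) to tail (1.0)."""
        return math.exp(-3.0 * position)   # invented decay constant

    def fate(concentration):
        # Cells read the analog level against thresholds (invented values).
        if concentration > 0.6:
            return "head structures"
        if concentration > 0.2:
            return "trunk structures"
        return "tail structures"

    for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
        c = morphogen(pos)
        print(f"position {pos:.2f}: level {c:.2f} -> {fate(c)}")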

    I’m going to borrow liberally from Doc Bill’s response on this subject too (most of what follows is an edited version of it).

    Genetic information is in no way stored, retrieved, processed, or translated by the cell like a computer would treat digital information. There’s no CPU, for example. The CPU contains the basic operations that define what it can do, logically speaking. Things like AND, XOR, and other logic and calculation functions are inherent in the CPU. Not so much with the cell, which must create the things that work on the things to make the things. Similarly, there is no “data” vs. “instructions” in the cell. It can all be both.
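
    For contrast, this is the kind of fixed, built-in operation a CPU supplies; the hardware implements these directly as logic gates, and nothing in a cell comes with primitives like this already wired in:

    a, b = 0b1100, 0b1010

    print(bin(a & b))  # AND -> 0b1000
    print(bin(a ^ b))  # XOR -> 0b110
    print(bin(a | b))  # OR  -> 0b1110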

    Cells and DNA don’t work in discrete steps (further removing the digital aspect). At any one time in a cell, hundreds (if not thousands) of alleles may be in the middle of being read, copied, repaired, or changed. Dozens of mRNA strands are produced and read simultaneously. Not the pseudo-simultaneity of a computer, which works so fast that we can’t perceive the steps, but actually at the same time (and also so fast that we can’t perceive the steps). At two base pairs per second, it would take you 95 years to copy your DNA. Every cell in your body can do it in 8 hours.
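
    The arithmetic behind those two numbers, assuming the usual ~6 billion base pair diploid genome (note that the cell’s 8-hour figure implies a huge amount of genuinely parallel copying):

    genome_bp = 6_000_000_000
    seconds_per_year = 365 * 24 * 3600

    you = genome_bp / 2 / seconds_per_year     # reading 2 bp per second
    print(f"{you:.0f} years for you")          # -> ~95 years

    cell = genome_bp / (8 * 3600)              # implied aggregate cell rate
    print(f"~{cell:,.0f} bp/s across all replication sites at once")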

    It gets worse. Computers run off of code. If someone argues that DNA is that computer code, then they don’t understand how DNA works.

    Make a change in computer code and the whole thing likely crashes. Make a change in a DNA code and you might make it run better. More likely, the change won’t have any effect at all (what with non-coding regions and the resilient nature of our protein construction system). DNA can repair its code (sometimes). DNA can have code from completely different systems (viruses) inserted and be perfectly fine; even if the virus eventually kills the organism, the DNA will keep working until the rest of the cell runs out of fuel.
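
    Part of that resilience is the redundancy of the genetic code itself: many single-base changes don’t change the protein at all. A sketch using a small, real fragment of the standard codon table:

    # All four GCx codons encode alanine; both GA[U/C] codons encode aspartate.
    CODON_TABLE = {
        "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
        "GAU": "Asp", "GAC": "Asp",
    }

    original = "GCU"
    mutated = original[:2] + "A"   # point mutation in the third position

    # The mutation is "silent": the protein comes out exactly the same.
    print(CODON_TABLE[original], "->", CODON_TABLE[mutated])  # Ala -> Ala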

    DNA can be massively rearranged with no effect on the system. Chromosomes can combine (as they did in our ancestors after the chimpanzee line split off) with no ill effects. Genes can be moved to different places on other chromosomes with no ill effects (as long as the whole gene gets moved).

    Thanks Doc.

    I’ve been trying (via several iterations) to describe what a computer that acted like a cell would be like and I just can’t. It makes no sense in computer language. This is my best attempt.

    There is a hard drive. There’s no CPU. There are no files. No images, no apps, no data, no executables. There are just bits on the drive. Now, some of those bits can cause the computer to do something, but they only react to certain inputs. If the computer gets too hot, then some bits of the drive will be read by other bits of the drive and produce bits that do other things to other bits of the drive. This isn’t random, because every bit is affected (or not) by its position on the drive, what other bits are around it, and how the drive is built (occasionally it changes shape). Sometimes, entire chunks of bits are moved around for no apparent reason. Sometimes, the drive copies itself into another drive (and it builds the new drive itself).
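
    Pushing the thought experiment into actual code, this toy is about as close as I can get; every rule in it is invented, and it’s a sketch of the weirdness, not a model of a cell:

    import random

    # A "drive" with no CPU: just bits that react to inputs, position-dependently.
    drive = [random.randint(0, 1) for _ in range(64)]

    def react_to_heat(drive, temperature):
        """If the input is hot, one region of bits rewrites a distant region."""
        if temperature > 40 and sum(drive[0:8]) >= 4:     # position-dependent trigger
            drive[32:40] = [b ^ 1 for b in drive[32:40]]  # flip a far region

    def shuffle_chunk(drive):
        """Whole chunks of bits move around for no apparent reason."""
        i = random.randrange(len(drive) - 8)
        chunk = drive[i:i + 8]
        del drive[i:i + 8]
        j = random.randrange(len(drive))
        drive[j:j] = chunk                  # splice the chunk back in elsewhere

    react_to_heat(drive, temperature=45)
    shuffle_chunk(drive)
    print("".join(map(str, drive)))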

    See how weird this is?!!

    So, while information technology tools can be useful in dealing with DNA… DNA is not like a computer.

    _____________________________________
    [1] A bit is the basic unit of information in computing and digital communications. A bit can have only one of two values, usually written as “0” and “1” (or “off” and “on”).

    Article by: Smilodon's Retreat