• Software Smarter than its Creators

    A recent story at The Register seems to be playing up the fear aspects of modern programming.

    Some computer programs are “smarter” than the people who built them.  The article describes how Google’s deep learning software can recognize cats.  That doesn’t sound like much of a big deal; any 2-year-old can recognize a cat.  But how do they do it?  How do humans distinguish between a cat and a dog?

    Go ahead, take a moment and try.  Pretend that you are talking with an alien (who can inexplicably speak your language perfectly) and try to describe cats such that the alien never mistakes a cat for any other species on the planet.

    It’s really freaking hard.  When I’m faced with this, I usually have to fall back on phylogenetic trees.  But 2-year-olds don’t know about genetics or any of that.  They just know it’s a cat.

    Google’s software is doing the same thing.  It’s just that no one knows how it’s doing it.  Modern software is well beyond the stage that one person can remember everything about it.  But this goes beyond that.  This is a piece of learning software.

    I don’t know the details… it could be evolutionary, it could rewrite the network that drives it, or it could be just code.  But it’s not code that was written into the system.  And that’s the important bit.  This software learned how to recognize cats in videos.
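    To make “learned, not written into the system” concrete, here is a toy sketch.  It is nothing like the scale of Google’s system (which was a large multi-layer network trained on video frames), and every number and name in it is invented for illustration: a single artificial neuron whose weights start at zero and end up separating two clusters of points purely through exposure to examples.  The final decision rule is never typed in by a programmer.

```python
import random

# Toy illustration only: a single artificial neuron "learns" to separate
# two invented clusters of 2-D points. The point of the demo is that the
# decision rule is never written by hand -- the weights begin at zero and
# are shaped entirely by the examples.

random.seed(42)

# Two made-up clusters: label 1 near (2, 2), label 0 near (-2, -2)
data = ([((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)]
        + [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)])

w = [0.0, 0.0]  # no built-in knowledge: all weights start at zero
b = 0.0
lr = 0.1        # learning rate

def predict(point):
    # Fire (output 1) if the weighted sum crosses the threshold
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Perceptron rule: after each mistake, nudge the weights toward the example
for _ in range(20):
    for point, label in data:
        err = label - predict(point)
        w[0] += lr * err * point[0]
        w[1] += lr * err * point[1]
        b += lr * err

accuracy = sum(predict(point) == label for point, label in data) / len(data)
print(accuracy)
```

    After training, the weights encode a rule that classifies the clusters correctly, yet nowhere in the source is that rule spelled out.  Scale the same idea up by many orders of magnitude and you get systems whose learned internals no one can read off the code.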

    In fact, the software is so good that it has a better success rate at recognizing some things than humans do.

    What stunned Quoc V. Le is that the software has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.

    Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.”

    Many of Quoc’s pals had trouble identifying paper shredders when he showed them pictures of the machines, he said. The computer system has a greater success rate, and he isn’t quite sure how he could write a program to do this.

    Things like this are the first step toward developing a true AI.  An intelligence that exists because we made it, not because it evolved.

    This is interesting in a few ways.  The first is: at what point can we call some evolved, thinking computer program intelligent?  We seem to have trouble defining intelligence in a meaningful way, and we are intelligent (or so we think).

    The second is that as this technology evolves and is combined with other pieces of software (voice recognition, text identification, and the like) we can start to imagine that the computers really will begin to take a lot of the load off of us.  Instead of building a little Excel script to analyze a bunch of numbers and then make some specific changes, we can just describe what we need to do to our computer.

    Third, it shows something very interesting that intelligent design proponents will hate.

    “AH HA!” you think, “these computer systems were intelligently designed.”  And that’s true, they WERE.  Past tense.  Let’s look at that quote from the Google software engineer again.

    What stunned Quoc V. Le is that the software has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.

    Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.”

    How did that software do what Mr. Le couldn’t figure out how to do?  It learned… it probably evolved.  It changed over time in a way that we can’t even understand.

    This is a critical blow to the idea of intelligent design.  The ID advocates keep using humans as their example.  Humans design complex things, therefore an intelligence is needed to design complex things.

    But, and this is critical, humans didn’t design this software.  It, in effect, designed itself.  How did it learn?  It evolved, it changed over time.  It became, by itself, without any human input, more complex than we can understand.
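    A minimal sketch of that variation-and-selection idea, in the spirit of Dawkins’ “weasel” program (and deliberately trivial; it is not the actual mechanism inside Google’s software): random mutations to a string, with fitter mutants kept, arrive at a target phrase.  The caveat to be honest about is that the fitness target here is supplied by the programmer, but the path to it is not designed — no step of the code spells out which changes to make, only variation and selection.

```python
import random

# Toy cumulative selection (Dawkins-style "weasel"): a candidate string
# mutates at random, and mutants at least as fit as the parent survive.
# The programmer supplies only a fitness measure, not the sequence of
# changes -- the answer accumulates through variation and selection.

random.seed(0)

TARGET = "recognize cats"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Count positions that match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Replace one randomly chosen character with a random one
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Start from a completely random string
current = "".join(random.choice(ALPHABET) for _ in TARGET)

for _ in range(100_000):
    child = mutate(current)
    if fitness(child) >= fitness(current):
        current = child  # selection: keep the mutant if it is no worse
    if current == TARGET:
        break

print(current)
```

    Random single-step search would take astronomically long; cumulative selection gets there in roughly a thousand generations, because improvements are retained.  That retention of small changes over time, not foresight, is what “it evolved” means here.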

    Category: Science, Technology


    Article by: Smilodon's Retreat