
Posted on Sep 20, 2013 in Evolutionary Psychology | 7 comments

Sci-fi gets computing wrong, evolution gets it right

You have to hand it to many science fiction creators for their predictions. Jules Verne foresaw rocket ships, solar sails, and trips to the moon; Aldous Huxley predicted genetic engineering and in vitro fertilization; Star Trek predicted cell phones, laptops, and even iPads. But almost everyone got the basic architecture of “future” information systems wrong, many continuing to do so long after those systems actually came into existence.

The plot of the original Star Wars films got it so wrong as to make little sense at all: courier droids and smugglers are required just to deliver information. Apparently wide-scale communications are not readily available even in a civilization with droids, faster-than-light spacecraft, and cities the size of planets. Both the original Star Trek and The Next Generation jarringly feature mainframe-type computer systems. Kirk or Picard sometimes has to go to a special terminal or room to access a computing function or database because no Enterprise has wi-fi or Wikipedia.


“I need data on 5 different tablets because we only have 24th century technology.”

Distributed, ubiquitous access to an enormous portion of human knowledge was not predicted by the writers. [Granted, we built and sometimes still use mainframe + thin-client systems, but these are limited in use and are now dwarfed by distributed systems in power, utility, and ubiquity.] Even once it became obvious that connectivity would proliferate across the globe and penetrate dozens of tools and products that are not computers per se, sci-fi got it wrong in the opposite direction. Films like Hackers, The Net, Terminator (1-3), Die Hard 4, and Swordfish feature plots and plot elements in which any digital device can apparently access any other digital device in existence.


“I’m hacking into their toaster. That toast will be so burned!”

While a boon to hack writers the world over, this idea is just as wrong. Modern information systems tend to connect only the entities needed to fulfill their function, and only in circumscribed ways. My microwave oven has a small computer that doesn’t communicate with any other digital device. My smartphone can use GPS signals, but only as a passive receiver; it can’t send any signal back to the satellites, let alone control them in any imaginable fashion.
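To make that concrete, here is a minimal sketch in Python (all names are hypothetical, not any real device’s API) of how function-limited design looks in code: the restriction lives in the interface itself, which simply offers nothing beyond the device’s job.

    # Sketch of function-limited interfaces: each device exposes only
    # the narrow capability its job requires. All names are invented.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GpsFix:
        latitude: float
        longitude: float

    class GpsReceiver:
        """Passive receiver: it can decode satellite broadcasts but has
        no method for transmitting anything back, by design."""
        def current_fix(self) -> GpsFix:
            return GpsFix(latitude=34.05, longitude=-118.24)  # stub value

    class MicrowaveController:
        """Embedded controller with no networking surface at all."""
        def start(self, seconds: int) -> None:
            print(f"Heating for {seconds}s")

    # There is no method to call for "transmit to satellite" or "put the
    # microwave online"; the limits are architectural, not guard clauses.
    print(GpsReceiver().current_fix())
    MicrowaveController().start(90)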

In short, modern information systems offer ubiquitous access to remote systems in principle, but in practice are restricted to communicating only with the systems that serve a particular function, and only in the mode conducive to that function. Until very recently (and oddly, often even after), sci-fi creators didn’t guess at this arrangement. But they might have, if they had known about the computational theory of mind and the modular organization of the mind.

The comparison between technological and neuronal information systems should never be construed too literally, but there are striking architectural similarities. Functionally coherent bits of the mind “know” things produced by other bits, and their communication may be uni- or bi-directional. Your visual cortex “tells” you that squares A and B are different colors, but your knowledge that they are actually the same color can’t be “told” back to your visual cortex: you continue to see them as different. That particular channel is uni-directional.

[Image: the checker shadow optical illusion, in which squares A and B are the same shade of grey.]
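The same one-way channel can be sketched in code (module names invented for illustration): the visual module publishes a percept that other modules may read, but it exposes no input through which your beliefs could rewrite it.

    # Sketch of uni-directional communication between modules.
    class VisualModule:
        def __init__(self) -> None:
            self._percept = "squares A and B look different"

        @property
        def percept(self) -> str:  # read-only: no setter is defined
            return self._percept

    class BeliefModule:
        def __init__(self) -> None:
            self.knowledge = "squares A and B are the same color"

    vision = VisualModule()
    beliefs = BeliefModule()
    print(vision.percept)                 # beliefs can read the percept...
    # vision.percept = beliefs.knowledge  # ...but writing back raises
    #                                     # AttributeError: no setter exists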

You can’t directly access information about your heart rate, blood pressure, blood glucose, temperature, and other data even though parts of your brain monitor them every second. Connections are plentiful between “systems” when function depends on it: you can construct imaginative counterfactuals using information from your senses, short- and long-term memory, auditory and language processors, and emotional state simultaneously. The newly conjectured ideas can then be written to memory and impact many systems in turn.

Artificial and neuronal information systems share many of the same features because those features are so powerful: cheap, high-bandwidth connections and well-“designed” modular systems, each built around a small number of functions and frequently organized hierarchically.
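A rough sketch of that shared pattern, with all names hypothetical: small modules with fixed roles, composed hierarchically, where each layer sees only the narrow output of the layer below, much as you get an alarm feeling without access to raw heart-rate numbers.

    # Sketch of hierarchical modular composition: a supervisor integrates
    # subordinate monitors but never sees their raw sensor values.
    from typing import Callable

    def make_sensor(reading: float) -> Callable[[], float]:
        return lambda: reading               # leaf module: produces data

    def make_monitor(sensor: Callable[[], float],
                     threshold: float) -> Callable[[], bool]:
        return lambda: sensor() > threshold  # mid level: one narrow output

    def make_supervisor(monitors: list[Callable[[], bool]]) -> Callable[[], str]:
        return lambda: "alert" if any(m() for m in monitors) else "ok"

    heart = make_monitor(make_sensor(72.0), threshold=100.0)
    glucose = make_monitor(make_sensor(140.0), threshold=120.0)
    print(make_supervisor([heart, glucose])())  # -> "alert"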

The “all-powerful” mainframe and hyper-connectionist computing models have also been postulated as metaphors for how the mind is organized. The mainframe is essentially the religious soul concept (or, put more secularly, the homunculus). The hyper-connectionist model is akin to old-school behaviorism and is more generally called (naive) connectionism. These models are as obsolete in psychology as they are in technology.

Further reading
Jerry Fodor and the modularity of mind, wiki
Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind by Robert Kurzban
Jules Verne, wiki

  • Sennacherib

    Since you brought in computing technology to bolster your claims about how the mind works, I thought I’d mention this:

    http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001109

    A large part of EP’s emphasis on massive modularity drew from artificial intelligence (AI) research. While the great lesson from AI research of the 1970s was that domain specificity was critical to intelligent behaviour, the lesson of the new millennium is that intelligent agents (such as driverless robotic cars) require integration and decision-making across domains, regularly utilize general process tools such as Bayesian analysis, stochastic modelling, and optimization, and are responsive to a variety of environmental cues [73].

    Modern artificial intelligence and data mining are largely driven by domain-general learning methods given lots of data (cf. Peter Norvig’s talk “The Unreasonable Effectiveness of Data”), and I doubt they have benefited much from positing the existence of scads of mental modules that are not compatible with the rather coarse developmental mechanisms and high degree of anatomical uniformity of the human neocortex.

    Your posts about evolutionary psychology are usually bad but this is exceptional.

    • http://www.skepticink.com/incredulous Edward Clint

      I am not impressed by the PLOS paper and its rather naive interpretation of what modularity entails. Decision-making “across domains” in artificial systems is not a criticism of modularity, unless you believe that modules are walled-off units that can never talk to each other or be subordinated to a larger function.

      The Norvig video is interesting, but it only underlines the points I have made in this post. Systems that have to be hand-held and trained with millions or billions of samples (often with attendant feedback) are far from what we’d call intelligent. The best of Norvig’s examples involve applications like the language translator. Where does the information come from? A central database? Well, no, it comes from human minds that have already done the heavy cognitive lifting. Specifically, the data comes from the web: a vast mass of servers and medium-size information clients (not mainframes or anemic thin clients) used by humans, thanks to dynamic and function-specific applications and appliances for communication, collaboration, content creation, and much, much more.

      I don’t really fault application developers like Google’s data miners for not employing top-level domain-specific designs, because we don’t have hardware adequate to the task, or a deep understanding of how the mind accomplishes it. A typical neuron has 7,000 synapses, and neurons have at least dozens of kinds of communication (neurotransmitters). That’s an amazing piece of hardware packed into a few microns. That said, virtually all modern human-produced software (to my knowledge) is object-oriented and modular in design. Anything “intelligent” that Google does is made possible by armies of function-specific modules with tightly defined inputs, outputs, and operations.

      “Your posts about evolutionary psychology are usually bad but this is exceptional.”

      You are free to criticize, or to forgo reading any such posts.

  • Shatterface

    Star Wars was science fantasy – it no more got computing ‘wrong’ than Lord of the Rings was ‘wrong’ about dragons.

    You also seem to think that sf proper should be judged by its fidelity to the actual future of the author rather than as an expression of the author’s present: I don’t think any editor since Campbell has held that opinion.

    Imagine if an sf writer in the Seventies had correctly predicted that people in the 21st century would be arrested for making jokes about terrorism on Twitter – it wou

    • Shatterface

      I also think it would have been irrelevant if an sf author had correctly predicted you wouldn’t be able to edit a message on Skeptic Ink on a bloody iPhone

    • http://www.skepticink.com/incredulous Edward Clint

      They get it “wrong” to the degree they are trying to predict or concoct “advanced” human technology. As I mentioned, they predicted real technologies because this is what they aimed to do.

      I don’t fault any sci-fi franchise for its topical content, and I haven’t. They do reflect some notions of their time about where computing was “heading” that turned out to be wrong. That’s what makes it interesting to talk about. Strictly speaking, I am not saying this makes them bad as creative works. I generally love sci-fi.

  • Shadeburst

    I don’t see any breach of the laws of physics in imagining a smart phone that can send a signal back to a GPS satellite. Your microwave contains an EPROM. I don’t see any breach of the laws of physics in imagining a device that can reprogram the EPROM remotely. The job of an SF writer is to imagine the impossible made possible, and provide a plausible mechanism for it. There’s no requirement that the mechanism comply with presently-known technology.

    • http://www.skepticink.com/incredulous Edward Clint

      I was not saying such things are not possible; you misread me. I was saying those devices are designed around their functions, which are very limited (compared to the number of functions theoretically possible). I would go further: it’s not possible to create a useful technology ecosystem in which devices are inter-operable with all other devices in existence.

      “The job of an SF writer is to imagine the impossible made possible, and provide a plausible mechanism for it. There’s no requirement that the mechanism comply with presently-known technology.”

      I agree. I never said any different.