
Posted on May 6, 2013 in Culture, Government, Life, Research, Society, Technology

Complicated Things Make Complicated Problems

My granddad, in the late '80s, bought a 1967 Mustang convertible.  He drove it until he died in 2004.  When asked why he didn't want a car with decent air conditioning*, power brakes, or some real power (it had the 289 engine), he just said, "More stuff to go wrong."

In a way, it was oddly prophetic.  The more stuff we have, the more often it goes wrong.  Consider the lowly door.  It really is a stunning piece of technology when you think about it.  But it's also amazingly simple: a piece of wood or metal, two or three hinges, and a knob, usually with a spring and a lock.  And these things last a long time.  I don't know for sure, but I'd hazard that more people replace their doors for aesthetic reasons than because the door failed.

A recent perspective article in Nature highlights this issue of complexity and how we are creating disasters ourselves.  The article is “Globally networked risks and how to respond” by Dirk Helbing.

Let me give you some examples of highly complex systems and how we think we control them, but we really don’t.

Have you ever been traveling on the highway when, suddenly, there's a traffic jam?  Three lanes of cars, barely moving.  Once you start to speed back up, you realize that you didn't see an accident or anything else that would cause the jam.  Maybe you even think, "they must have cleared the accident up."

You think that there must be a cause for the traffic jam, right?  But there probably wasn't.  These 'phantom jams' are commonplace.  Researchers at MIT have developed a model that describes how these jams form.  Even just one driver slamming on the brakes can cause a massive traffic jam.
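I won't try to reproduce the MIT model here, but the classic Nagel-Schreckenberg cellular automaton (a standard textbook traffic model, not the MIT one) shows the same effect in a few lines: every driver follows the same simple rules, yet occasional random braking is enough to condense free-flowing traffic into stop-and-go jams with no accident anywhere.

```python
import random

random.seed(0)
L = 100                                            # circular road of 100 cells

def step(road, v_max=5, p_brake=0.1):
    """One parallel update of the Nagel-Schreckenberg model.
    road maps cell position -> current speed of the car there."""
    new_road = {}
    positions = sorted(road)
    for i, pos in enumerate(positions):
        v = road[pos]
        v = min(v + 1, v_max)                      # 1. accelerate toward the limit
        ahead = positions[(i + 1) % len(positions)]
        gap = (ahead - pos - 1) % L                # empty cells to the car ahead
        v = min(v, gap)                            # 2. brake to avoid it
        if v > 0 and random.random() < p_brake:    # 3. random slowdown --
            v -= 1                                 #    the one 'twitchy driver'
        new_road[(pos + v) % L] = v
    return new_road

road = {i * 3: 0 for i in range(30)}               # 30 cars, evenly spaced
for _ in range(200):
    road = step(road)

stopped = sum(1 for v in road.values() if v == 0)
print("cars at a dead stop:", stopped)
```

At this density the random slowdowns don't average out; they pile up into waves of fully stopped cars that drift backward along the road, which is exactly what a phantom jam looks like from the driver's seat.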

It seems counter-intuitive that just one person or event can completely jam up a modern highway.  But it happens… a lot.  That's one reason I'm all for networked driverless cars, though those will bring their own issues.

Another example is the modern financial crisis.  Everyone was doing the job they were supposed to do.  All the models say that economic systems tend toward equilibrium.  Everything is fine.  We're in complete control.  But the models were, and are, wrong.  Things are not fine.  And no one is in control.  One bank makes a call on loans that another bank can't pay.  All of a sudden, there is a chain reaction of events that can devastate the system, whether that system is a small town or the entire planet.
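The article doesn't give a specific model for this, but a toy version of such a default cascade (all bank names and numbers invented for illustration) fits in a few lines: each bank holds claims on another, and a bank fails when losses on those claims exhaust its capital buffer.

```python
# Toy interbank contagion model. Each lender holds claims on borrowers; if a
# lender's losses from failed borrowers exceed its capital, it fails too.
exposures = {            # lender -> {borrower: amount the borrower owes}
    "A": {"B": 40},
    "B": {"C": 40},
    "C": {"D": 40},
    "D": {},
}
capital = {"A": 30, "B": 30, "C": 30, "D": 30}

def cascade(first_failure):
    """Return the set of banks that fail after first_failure defaults."""
    failed = {first_failure}
    frontier = {first_failure}
    while frontier:
        nxt = set()
        for lender, claims in exposures.items():
            if lender in failed:
                continue
            loss = sum(amt for b, amt in claims.items() if b in failed)
            if loss > capital[lender]:   # losses wipe out the capital buffer
                nxt.add(lender)
        frontier = nxt - failed
        failed |= frontier
    return failed

print(sorted(cascade("D")))   # ['A', 'B', 'C', 'D'] -- one default takes down all four
```

No single bank here is reckless; each one is fine as long as its borrower pays.  The fragility lives in the chain, not in any node, which is the point of the article.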

In our networked world, we think we understand all the issues.  But we don’t.  We can’t even imagine all the possible causes and the possible effects.  Look at the shitstorm that some cartoons caused.  Even 15 years ago, few people would have heard about it.  Now, everyone gets tweets and blog updates on their phones.  “Going viral” isn’t just a meme, it’s a system behavior of the interconnected society that we’ve become.

The author claims, and I’m not sure I can disagree, that there isn’t a division between the cyber-world and our own.  Not like the Matrix.  But the effects of things on the internet have real-world results.  Internet images and videos have caused real-world riots and deaths.  Internet sharing has come to the aid of people all over the world.  And caused untold misery to others.

These kinds of things are behaviors of the system.  And we can’t control them.  We can’t predict them.  We barely realize that they are happening.

These chain reactions occur in all kinds of systems other than the ones already mentioned: from the behavior of crowds (not the people, but the crowd itself as a system) to the surge in electricity demand during football halftimes.  These are exceedingly complex systems, and we are continually surprised when they act differently than we expect, or fail.

How many of you have ever been driving late at night and come to a stoplight?  There's not another car for two miles in any direction, but you are sitting at that stoplight.  Stoplights, in general, are still controlled from the top down.  Some engineer somewhere decides how long a particular light stays green.  They'll probably make adjustments for time of day and traffic flows.

More and more traffic lights are moving to local control, though.  There are all kinds of sensors, with all kinds of backups, plus some imposed coordination so that (hopefully) the most traffic hits a long string of green lights.  [Wikipedia article on traffic light coordination and control.]

Then we have to start talking about social aspects of systems.  The people who are involved in the system may not have the same ideas about how to do things.  They may have mutually exclusive goals.

You may get a critical mass of one group and end up with a peaceful demonstration (We are the 99%) or a revolution (Arab Spring).  You can end up with theocracies, democracies, and dictatorships.  It can be (and has been) argued that the assassination of Archduke Ferdinand started World War I.

We know that some systems are just not well behaved.

Real-life systems, in contrast, are characterized by heterogeneous components, irregular interaction networks, nonlinear interactions, probabilistic behaviours, interdependent decisions, and networks of networks. These differences can change the resulting system behaviour fundamentally and dramatically and in unpredictable ways.

The rules of a system might also change.  Organizing the Arab Spring uprisings with cell phones and social networking (tracking members, coordinating events, sharing videos, and the like) was a huge change in how 'revolutions' are organized.

Finally, there is uncertainty in highly complex systems.  The internet is a network of networks.  We can never know when a server or a communication line will go down.  In other systems, we may have a fundamental lack of knowledge about how the system works (for example, the global financial system).  We just can't know everything about a complex system, especially if some parts of it are acting in a hidden or contrary manner.

So how do we deal with this complexity?

One way is to use the system's structure as a helper instead of fighting it.  The introduction of a universal 'smart grid' for electrical distribution and control, and the aforementioned local control of traffic lights, can help.  This reduces micro-management, which (even when it works, which it generally doesn't) is inefficient and just can't handle the size of these massively interconnected systems.

If the traffic signals could talk to each other ("Hey, I just had 40 cars go through, can you turn green so they can all keep going?"), it may be possible to let the system monitor and control itself rather than trying to impose control on fundamentally chaotic systems.  I should insert a joke about 'going with the flow' here, but I got nothing.
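Here's a cartoon of that idea (intersection names, queue directions, everything invented): each intersection watches only its own queues, gives green to the longer one, and announces the released platoon to its downstream neighbor, with no central engineer in the loop.

```python
class Intersection:
    """A signal that serves its longest local queue and warns its neighbor."""
    def __init__(self, name):
        self.name = name
        self.queues = {"north-south": 0, "east-west": 0}
        self.downstream = None            # neighbor that receives our platoon

    def arrive(self, direction, n):
        self.queues[direction] += n

    def tick(self):
        # Purely local decision: green for the direction with the most cars.
        direction = max(self.queues, key=self.queues.get)
        released = self.queues[direction]
        self.queues[direction] = 0
        # "Hey, I just sent you 40 cars -- get ready to turn green."
        if self.downstream is not None and released:
            self.downstream.arrive(direction, released)
        return direction, released

a, b = Intersection("1st & Main"), Intersection("2nd & Main")
a.downstream = b
a.arrive("east-west", 40)
print(a.tick())   # ('east-west', 40) -- the platoon moves to 2nd & Main
print(b.tick())   # ('east-west', 40) -- b serves the platoon a announced
```

The coordination emerges from local rules plus a one-hop message; nobody holds a schedule for the whole grid, which is the 'work with the structure' idea in miniature.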

Another approach is to design networked systems to be resilient, preventing the chain reactions that can destabilize the entire system.  The article describes a backup system that operates in a different way from the primary, so that the same issue can't disable both.

A related idea is to incorporate some kind of fail-safe that monitors the system in real time.  The point would be to (somehow) shut down small portions of the system in order to prevent cascading damage to the whole.
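Software engineers have a version of this 'sacrifice a small piece to save the whole' idea called a circuit breaker: after repeated failures, a component is disconnected so the trouble can't propagate.  A minimal sketch (the names and threshold are illustrative, not from the article):

```python
class CircuitBreaker:
    """Isolate a component after repeated failures, before they cascade."""
    def __init__(self, threshold=3):
        self.failures = 0
        self.threshold = threshold
        self.open = False                # open = component disconnected

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("breaker open: component isolated")
        try:
            result = fn(*args)
            self.failures = 0            # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True         # trip: cut this piece off the system
            raise

def flaky_line():
    raise IOError("transmission line down")

breaker = CircuitBreaker(threshold=2)
for _ in range(2):
    try:
        breaker.call(flaky_line)
    except IOError:
        pass
print("isolated:", breaker.open)         # isolated: True
```

After the second failure the breaker trips, and every later call is refused locally instead of hammering the broken component, the software analogue of islanding a failing section of the grid.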

These two approaches could have helped prevent the 2003 Northeast blackout.  From Wikipedia:

The blackout’s primary cause was a software bug in the alarm system at a control room of the FirstEnergy Corporation in Ohio. Operators were unaware of the need to re-distribute power after overloaded transmission lines hit unpruned foliage. What would have been a manageable local blackout cascaded into widespread distress on the electric grid.

The last point the author makes is that the unknown lies ahead.  There are a large number of systems that we just don't understand.  What the recent financial crisis has shown us is that much of our theoretical knowledge really is wrong.  We really don't understand how a lot of these systems work.

We don't know where the points of vulnerability are.  We don't know if these systems are robust enough to survive an actual attack (instead of just a malfunction).

In theory, there’s no difference between theory and practice.  In practice, there is.





*Yes, the Mustang had AC.  It rarely worked and it sucked when it did.