This semester I’ve mostly treated simple, complicated, complex, and chaotic systems as clear, demarcated categories. But what happens at their boundaries, and what does it take to absorb new elements into existing systems?
This past week, Dr Schultz shared with us material based on her presentation at the World Futures Studies Federation meeting in Paris last year. The conference theme was liminality, the importance of the thin boundaries that separate concepts and spaces, and her ideas weave that into futures and systems concepts in a couple of ways that I want to dig into.
Riding the Whirlwind
One direct application of liminality is suggested by the Cynefin diagram. The boundaries between adjacent types of systems function like liminal spaces. The most promising is probably the boundary between complex and chaotic systems. Systems at many levels and of many types use brief dips into chaos either to solve problems the system is struggling with or to release pressures building up within it. For example, optimization algorithms are often given a random element to keep them from converging too soon and getting stuck on solutions that are better than their immediate neighbors but worse than solutions farther away. As a real-world example, ants searching for food follow a basic pattern for covering ground without crossing their own paths, but add a random element that increases their search efficiency. At a societal level, many cultures have festivals or special events like Carnival that inject chaos into the social order to release the tension created by inequity.
Even if a complex system isn’t intentionally creating controlled bits of chaos, it’s likely encountering turbulence regularly from the environment. Living systems have a few different options for responding/adapting to this turbulence: they can try to shore up the stability of the system in its existing state, they can try to incrementally change to become more resilient, or they can transform. In some ways this is like seeing the chaos as a nuisance, a fact of life, or an opportunity. In the case of transformation, the chaos is internalized into the system temporarily as it reforms (like the way a caterpillar’s body dissolves in the chrysalis1 as it turns into a butterfly).
Manufacturing “Normal”
I’ve written in the past about Postnormal Times in the context of the menagerie of surprise, the Three Tomorrows of ignorance. In discussing Station Eleven, I also once alluded to the postnormal idea of a “manufactured normalcy field” as an explanation of what makes pandemics so dangerous - they represent a change to the social fabric that occurs faster than we can adapt. A Manufactured Normalcy Field is such a rich idea, and my understanding of it has grown so much in the past week, that I want to share some of the different layers of the concept. If you want to go even deeper, I’d encourage you to read Venkatesh Rao’s 2012 article “Welcome to the Future Nauseous”, which serves as the origin of much of the concept.
First, there’s a psychological level at which we are all subject to our human brains constantly metabolizing novelty. This makes it seem like “the future” is something that is always coming up but has never really come. Generative AI is going through this process right now, with capabilities that seemed impossible/magical just two years ago now being taken for granted and even seen as boring.
At the level of human decision-making, individuals try to modify their environment to maintain comfort and homeostasis. As an analogy, most people in wealthy countries spend most of their lives traveling from one climate-controlled box to another in a smaller climate-controlled box, carrying normalcy with them from place to place. A few souls intentionally expose themselves to discomfort with cold plunges and the like, often as an identity marker. Rao dismisses out of hand2 the idea that these people taking “future plunges” are living outside the field of normalcy - they just live in a region that’s slightly more stimulating, shaped by exposure to and consumption of popular fictional images of the future. Elon Musk is a great example of this, as he has spent most of his career making grand gestures toward the image of the future popularized in The Jetsons3: space travel and colonization4, humanoid robots, and flying cars. It’s possible that none of these projects will make significant progress this century, but they provide the cultural cachet to draw investment toward his real businesses: batteries5, servicing the ISS, etc.
At a social level, the manufactured normalcy field is a marketing technique. Rao centers his argument on the example of air travel: much of the commercial flight experience is designed to feel barely distinguishable from traveling on a faster train. Because of the psychological discomfort associated with novelty, selling a new product is often more successful if it’s presented as a new device that you interact with in an old way. For example, this Atlantic article about BlackPlanet and the early days of social media refers to the web of 1999 as a new kind of library, which makes much more sense than what it actually was at the time6.
The implication of all this for futures, if accepted, is stark: the future will almost certainly look like nothing we see on the fringes of today, because by the time it hits the mainstream it will be thoroughly digested and normalized. This suggests that scanning might not be a useful guide to the future. It’s also possible we hit a quantity of change that completely destroys the manufactured normalcy field after 500 years of stretching, and we’re thrust into a new Dark Age where nothing makes sense and we don’t have the metaphors to orient to our society as it changes. I’m holding a space in my brain for all this, but I’m not committed yet - file it under “BIG IF TRUE”. Let me know in the comments what evidence you have for or against this way of seeing the world.
Again, a liminal space where neither the old nor the new rules apply, very similar to the second horizon in Curry and Hodgson.
A little more forcefully than his argument supports, I think.
It’s true that some of these, like plugging computers into brains, are more an attempt to will Neuromancer into existence.
In this case, it’s an oddly self-referential image of the future.
I think this is probably Musk’s most significant contribution to building an actual future - making it possible to economically/practically decouple generation and use of electricity.
Even today, we use and stretch the metaphor of the “town square” to describe and argue about social media sites, rather than really confront what an algorithmic attention-maximizer really means.