Last week was the beginning of the second half of Systems Thinking. No more causal loop diagrams, and this course won’t cover quantitative topics like simulation, so it’s all concepts and descriptions from here. So far this semester, I’ve given examples of systems describing pretty simple stories: a few moving parts, clear goals, defined interactions, etc. Last week I talked about how life is significantly more complex than non-living nature, in ways that quickly violate this simplicity. The language of Complex Adaptive Systems gives a way to describe this intuition and the common elements of these living systems.
Living and cultural systems are made up of huge numbers of interconnected balancing and reinforcing loops that lead to their emergent behavior and their ability both to be stable and to change. As the amount of complexity in a system increases, the system goes from stable/inert/dead (think a box of rocks) to periodic (like the Earth orbiting the sun or the emissions of pulsars) to chaotic (weather systems, fluid dynamics, TikTok trend emergence). As summarized in Stephen Lansing’s great introduction to the topic, there’s a narrow band between periodic and chaotic systems where complexity can emerge. Christopher Langton called it the “edge of chaos”1, and he provided the following conceptual diagram showing the relationship between these regimes based on their propensity to move to active states (represented by the parameter λ):

That is to say, if you feel like your life is precariously teetering on a tiny sliver of ground between absolute tedium on one side and a descent into cosmic chaos on the other: congratulations, you’re doing it right!2
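That inert → periodic → chaotic progression shows up even in systems with a single moving part. The logistic map isn’t from the course material; it’s just a standard minimal illustration, sketched here in Python, with one knob (r) playing a role loosely analogous to Langton’s λ:

```python
# The logistic map x -> r*x*(1-x): a single parameter r moves the system
# from inert to periodic to chaotic behavior.
def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    """Iterate the map `skip` times to settle in, then record `keep` values."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.5))  # settles to a single fixed point: "inert"
print(logistic_orbit(3.2))  # alternates between two values: "periodic"
print(logistic_orbit(3.9))  # never settles into a repeating cycle: "chaotic"
```

The narrow band of r values just below the onset of chaos, where the periodic cycles pile up faster and faster, is this little system’s own version of the edge of chaos.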
Characteristics of Complex Adaptive Systems
There’s a consensus view in the literature about the key attributes of Complex Adaptive Systems:
They are self-stabilizing, finding ways to maintain their physical/chemical/social/emotional/financial state. People who are cold will automatically start shivering, but they might also start moving around, turn up the thermostat, or put on a sweater.
They are purposeful, pursuing explicit and implicit goals, and they use information as feedback to modify their behavior in that pursuit. This is basically a description of intelligence, and it can operate at multiple levels: the system as a whole might be directed by an authority with explicit goals, but system goals can also emerge as byproducts of individual agents pursuing their own. For example, in a sports league where every team is working to win every game, adaptive strategies develop at the team level, but the league also gains a system-level goal of more exciting games, which it can support via salary caps, trading regimes, etc.
They modify their environment to adapt it to their purpose. Societies looking for places to live have drained wetlands, states looking to make travel more reliable have graded and paved roads, and companies trying to extract coal have blown up mountains. Ants digging an anthill and beavers building a dam show that this isn’t just an attribute of human systems.
In addition, they are capable of replicating, maintaining, repairing, and reorganizing themselves. Sexual and asexual reproduction are the most basic forms of replication, but expansion teams and kids moving out and starting their own families also qualify. Companies conduct annual training to maintain quality, and rehire positions when people leave. People read books and go to therapy to learn new ideas that can lead to better outcomes. Cities that experience flooding can build new infrastructure and assist people in rebuilding or relocating.
Lastly, these systems exhibit emergent behavior, where individual actions and the structure of interactions lead to novel and unexpected outcomes in the aggregate. A simple example is the way that a school of fish, each following pretty simple rules of movement, form what appears to be a single superorganism. The way that the rules of the US electoral system, built by people trying to avoid dividing the country up into factions, led almost immediately to a remarkably stable two-party system, is a more subtle example.
Artificial Complexity
Complex Adaptive Systems in the real world tend to be big and complicated: understanding the moving parts of even a single-celled organism is a specialized endeavor. In computer simulation, however, real-world obstacles like changing temperatures can be removed and the minimal requirements for complexity explored. The most famous of these attempts is probably Conway’s Game of Life. All that’s needed is a grid of squares that can be “on” or “off”, and two rules: on the next turn of the simulation, “on” cells turn off unless they have exactly two or three neighboring “on” cells, and “off” cells turn on if they have exactly three “on” neighbors. These rules can create stable cyclic patterns, but they can also pass information or continuously create new components. Playing with them is a good hands-on way to see how little it takes for complexity to emerge. The adaptation of living systems, however, requires many connected layers of complexity, in the same way that LLMs can produce surprising and novel content, but none of the experiments with AI agents using just LLMs as the thinking/planning layer have gone very far3.
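Those two rules are short enough to implement in a few lines. Here’s a minimal Python sketch (the names are mine, not from any particular library) that stores only the live cells, so the grid is effectively unbounded:

```python
# Minimal Conway's Game of Life over a set of live (x, y) coordinates.
from collections import Counter

def step(live):
    """Apply one generation of Conway's two rules to a set of live cells."""
    # Tally how many live neighbors each grid position has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is "on" next turn if it has exactly three live neighbors,
    # or two live neighbors and it was already on.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The "blinker" is the simplest periodic pattern: a bar that flips
# between horizontal and vertical every generation.
blinker = {(0, 1), (1, 1), (2, 1)}
vertical = step(blinker)        # the bar turns vertical
restored = step(vertical)       # and back to the original
```

Seeding the set with random cells instead of the blinker is the quickest way to watch stable blocks, oscillators, and gliders emerge on their own.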
1. It looks like Norman Packard may be the one responsible for actually coining the term in 1988.
2. From a quick Google search, it looks like nobody has written this self-help book yet, but I think it could be a hit. Suggested title: Complex: How the Science of Everything Can Transform Your Life and Relationships. If you beat me to it, please just acknowledge me for launching your writing career with a throwaway joke.
3. I’m aware that some of the best minds of this generation are currently working on the agent problem. I don’t doubt that it will be solved, but I don’t think it will be solved by just scaling up the number of parameters, which is my main point: additional layers of controls will be needed to bring order where things are too chaotic, and more creativity where they get stuck.