I’m spending a few months analyzing the ideas in Jim Dator’s new book Living Make-Belief, along with related works. The introduction to this project can be found here. All entries are listed here.
The 8th chapter of Living Make-Belief is about the current AI moment, but also the buildup over the last several decades. His thesis is that AI is more than a key driver of the Dream Society: these creations are humanity’s new children and a new communication technology, and they will largely dictate how the Dream Society unfolds.
History
Dator frames our relationship with AI by looking at two foundational texts that have bracketed the narrative. He starts with Jack Good’s 1963 paper (published 1965), which observed that building a machine smarter than humans would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”1. This lays out both the seed of the idea of the technological Singularity and the fears of AI uprising that have dominated so much of the discourse since, from Kurzweil to this month’s splashy AI 2027 scenarios2. Good gets a lot right, from the need for stochastic training methods to language as a key medium of communication, and he argues (as some do today) that AI must be embodied to learn effectively from the world. Not bad for six decades ago!
Next, Dator moves to Bill Joy’s 2000 article in Wired, “Why the Future Doesn’t Need Us”. This is an early example of the doomerism that has become popular (often stoked by leaders of AI companies) and calls for safety measures to be taken. Dator sees no appetite or capacity for complex global governance3; he reframes the problem as an insistence that humans remain the species we are today, and the dominant form of life on the planet in perpetuity.
Instead, he recommends we take a long view of the way we as humans are constantly modifying our bodies and minds, from tattoos to exercise to diet to natural processes like puberty. However, we have reached the point where the changes to ourselves and our environments are faster and more severe than our ability to manage them. Dator suggests that perhaps only beings significantly beyond human intelligence can get us out of the messes we got ourselves into as soon as we developed the power to make them4.
Possibilities
Dator’s point is that we should consider these new, strange entities to be the children of humanity. He also argues that our current ideas of where AI might take us are wildly non-useful. For one thing, we’re still in the early stages of the rollout, scaling, and spread of this technology, so the full picture isn’t yet in focus. Second, major shifts in technology cause changes in our values, so judgments made by today’s standards will soon be outdated. Third, the fictional stories about ascendant AI that haunt the public consciousness were created for attention, popularity, and money, not as useful or intelligent expressions of plausible futures.
Dator’s own vision for our AI-enabled future, which he warns us to be skeptical about, is one where automation has rendered work unnecessary, the economy has reached a state of “full unemployment”, and people share in abundance and do whatever they find meaningful5. This is very similar to the post-work future explored by Andy Hines in his new book, Imagining After Capitalism6, as well as the idea of Fully Automated Luxury Communism more broadly.
Moving to a world where needs are met and we can pursue self-actualization does tidily connect economic development, the evolution of values toward Postmodernism, Maslow’s hierarchy of needs, and the “end of history”, but interestingly the connection with Dator’s ideas of a Dream Society based on icons and performance, or Rolf Jensen’s emphasis on images and stories, isn’t quite as direct7. Presumably some people will still feel enough ambition to want to excel, and in a world without scarcity it may be that creating ideas, stories, and myths is the only way to do so. I see how that still qualifies as a Dream Society, but the difference from today is at least as striking as moving from water-wheel-powered looms to modern assembly lines, or from punchcards to the internet. AI means machines move from scaling and selling stories to creating them; the final effect may be to transform our very dreams.
Coda: Pantheon
I’ve been watching the second season of Pantheon8 over the last several weeks (not quite done yet, don’t spoil it). The first season focuses on humans having their minds uploaded to silicon and the implications and consequences, but the second season adds the existence of their digital offspring, which are both fully synthetic and deeply human. These beings have human inclinations and biases, but no embodied experience to draw on.
If we accept these AI persons we’re creating as our species’ strange and increasingly capable children, it raises the question of what we want to teach them, understanding that one day they may take our place as custodians of this planet (or maybe more). What would the universe be missing if we didn’t pass it along? I think about the things that make my own life feel meaningful: love, wonder, reverence, curiosity, and feeling like part of something greater. What would you add?
1. The $300-billion valuation of OpenAI, despite its lacking a clear path to profitability and planning to spend exponentially increasing sums on training compute, demonstrates the public sense that this could be civilization-altering technology.
2. Both scenarios assume that this AI stuff is actually able to surpass human intelligence, and will meaningfully contribute to AI research (this is where the acceleration comes from). In one, people pump the brakes before the machines get too good at lying to us about their intentions, and in the other they don’t. The former has flying cars, UBI, and space colonization; the latter has humans extinct before the end of the decade. The message isn’t subtle.
3. For example, all the summits the planet can offer have failed to do much of anything on climate change and are slowly failing on nuclear non-proliferation. At least we’ll have the ozone layer to keep us company.
4. In fact, to make his point he even quotes from the “machines of loving grace” poem that served as the name for Anthropic CEO Dario Amodei’s essay exploring the best-case possibilities for a future ruled by kind AI guardians.
5. Again, Dator sees this as a return to the Edenic state of our hunter-gatherer ancestors, a view of history I’m deeply skeptical of.
6. I will read this but haven’t yet, so I’m basing this on my understanding of the book from presentations Andy has done.
7. Jensen’s book, being a guide for businesses and marketing, doesn’t have much to say about a world of abundance.
8. Season 1 discussion here if you haven’t seen it.