AI is not going to replace humans, but humans with AI are going to replace humans without AI.
Karim Lakhani
The last 18 months have been transformational for me. A big part of that is the UH Foresight Masters program, which has taught me new skills and ideas, connected me with an incredible group of colleagues, and led me to start this newsletter. That work, however, overlaps and interweaves with a much more widespread phenomenon: the explosion of capability of, and interest in, generative AI. This wave of interest1 has propelled me in my “day job” doing data strategy for healthcare: I’d been working for years with others to lay the groundwork for machine learning algorithms, ethical use of data, and so on, and had even pointed out the potential of GenAI in the GPT-2 era, but all those efforts ramped up dramatically in organizational attention and support after the release of ChatGPT. I’m now working on carving out a small niche in the futures field centered on how to use AI effectively to do Foresight work2.
The Book
If you only follow one voice to understand what’s happening to work and society with AI, it should be Ethan Mollick. Ethan writes excellent content at One Useful Thing, but has also just published a book: Co-Intelligence: Living and Working with AI. The book is a quick 212 pages, and reading it helped me grasp some ideas that hadn’t quite connected from his newsletter or from my own experience with AI.
In essence, he argues that the latest generation of generative AI tools represents a new general-purpose technology, like electricity or the telephone, but, because its implementation happens at the level of the individual rather than requiring a massive utility, it has the potential to be adopted much faster than previous examples. For centuries, “AI” has often been a scam covering up the work of humans, from the Mechanical Turk to Amazon’s Just Walk Out (or is it?3), but this time definitely feels different. The trick is to calibrate the hype correctly: the first AI-run business never really went anywhere, but people (me included) are getting real value from these tools, and a bigger transformation seems inevitable. A lot has been written about how the abilities of these tools are “emergent”, in that they competently perform tasks they weren’t specifically trained for: writing poetry, programming, passing the bar exam, etc. I don’t see most of these as good examples of emergence: since they are tasks completed by producing language, and similar tasks are well represented in the training data, it seems natural that training a machine to match what people “would” say in a given situation would create these kinds of abilities. None of this, however, diminishes the level of awe appropriate for this development: we have essentially created intelligence on tap, one that’s better than maybe 80% of humans across an incredibly wide array of tasks, contexts, and situations.
The book lays out four heuristics for how we should think about and use these tools. Briefly: try using AI to assist in pretty much everything you do; stay alert and stay accountable for the output; treat these tools “as though” they are people, realizing you can specify exactly what role they should take in a given interaction; and recognize that even if progress stopped today, there are big implications for society and individuals that haven’t yet unfolded. These are wildly practical, and much better advice for almost everyone than spending lots of time learning the latest expert prompt-engineering tricks.
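To make the third heuristic concrete: assigning a role is often nothing more than a different system prompt. Here is a minimal sketch using the OpenAI Python client - the model name, personas, and question are my own placeholders, not Mollick’s, and any chat-style API works the same way:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(persona: str, question: str) -> str:
    """Send one question to the model under a given persona (system prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: substitute whatever model you use
        messages=[
            {"role": "system", "content": persona},  # the role it should take
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Should I quit my job to start a company?"
# The same model, two very different advisors:
print(ask("You are a cautious financial advisor.", question))
print(ask("You are an enthusiastic startup coach.", question))
```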
Five chapters explore both the capabilities and the practical considerations of the different roles AI can play in our lives. First, because it’s trained on the kinds of things people write, the things it writes are very human-like; with a little shifting in how you start the conversation, the persona it takes on can change dramatically, which is how a single model can power an AI acting as a business advisor, a life coach, or a virtual girlfriend. AI is more creative than almost any person, in terms of both the number of ideas generated per unit time and the quality of the best ideas, and people are using it everywhere to avoid starting from a blank page; humans still seem unmatched at recognizing the “spark” of a brilliant idea, so don’t turn your brain off. AI can take on or enhance many of the non-physical tasks that workers do every day: for example, having a team of different personas edit and critique a piece of writing can significantly polish a draft before a human editor ever sees it. A few jobs, like call center agents, may be totally replaced, but most of the time you want a human exercising judgment; paradoxically, in some situations a worse AI leads to better outcomes, because people are less likely to blindly accept what it outputs. And AI is infinitely patient, so it can teach or coach tirelessly, assessing where the learner is and giving the next important breadcrumb along a learning or development journey (I feel like this is the use case that is furthest behind at present).
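As a tiny illustration of that “team of personas” editing pass (my own sketch, not from the book), you can reuse the ask() helper above and simply loop over several critic roles:

```python
# Reuses the ask() helper from the previous sketch.
personas = [
    "You are a ruthless line editor who cuts filler and tightens prose.",
    "You are a skeptical fact-checker who flags claims needing support.",
    "You are a first-time reader who points out anything confusing.",
]

with open("draft.txt") as f:  # assumption: your draft lives in draft.txt
    draft = f.read()

# Each persona produces an independent critique of the same draft.
for persona in personas:
    critique = ask(persona, f"Critique this draft and suggest edits:\n\n{draft}")
    print(f"--- {persona}\n{critique}\n")
```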
Mollick even does some futures work in the book, thinking through the potential implications of four different scenarios4! Depending on the speed of AI improvement, we could be left simply exploring the full implications of what’s already been created5, continue to adjust and retrain ourselves as things keep changing, shift to a more cyberpunk world where corporate and hacker AIs battle to dominate society, or enter a period of existential risk. The huge variation in potential outcomes leads to a wide spread of proposals and responses from experts, from legally protecting the companies doing the training to preparing to bomb their datacenters.
The book has a few rough edges. For example, Mollick isn’t an expert on the underlying math/computer science and makes some errors, like assuming that transformers created the attention mechanism rather than just doubling down on it, and I would argue that Large Language Models are examples of self-supervised rather than unsupervised learning. I also don’t feel he makes a strong case either way about whether offering intelligence as a utility is fundamentally different from offering power or communication as utilities, even though this question is at the heart of the most extreme predictions about the course of the future. Last, I really disliked the game he played of revealing after the fact that something in the book was written by AI - I understand that he wants to shock people into realizing they can’t detect the difference6, but it strikes me as a lot like the gimmicky 2023 trend of presenting poems and quotes at the beginning of presentations and then slyly telling everyone they were AI-generated. But I’m only pointing these out so you know I have enough discernment not to like everything, and you can trust me when I say this book is the best thing you can read to orient yourself to this changing reality.
Application
I want to make some additional connections between this book and futures. First, I talked last week about how the current state of GenAI, which would have seemed fantastical or magical just two years ago, now seems routine as our conception of “normal” has stretched to accommodate it. This shift represents the entry of a long-anticipated piece of the future into our present reality, and the “nausea” of the current disorientation will be an ongoing challenge that futurists can help the public navigate.
In addition, as Mollick makes clear, we are at the very early stages of unpacking what’s possible with the current technology, and how it can, should, or might transform schools, counseling, healthcare, office work, advertising, parenting, entertainment, and scientific discovery. That is, there are still several generations of really significant implications left to imagine. For example: if AI can dramatically shrink the time it takes to learn new skills and create output, then the productivity/pay/prestige gap between those who use it and those who don’t will widen quickly; this suggests that, for reasons of equity if nothing else, schools should focus heavily on giving students meaningful practice with these tools. However, if education shifts toward using AI to make up for our lack of expertise, it may become harder in the future to find people with deep enough expertise to judge the output and create the content for future training. It might also mean a big increase in students pursuing trades like plumbing or kinetic arts like dance and acting, which in turn could create more pressure for public funding of theater and the like.
Last, foresight work frequently involves quickly getting up to speed on a subject, building artifacts across a wide range of mediums, and so on. I used AI last semester to design, develop, create art for, and build out a game far more rapidly than would have been reasonable in the past. The same tools can be used to quickly churn out reasonable drafts of scenarios, spin up future artifacts that build a convincing view of a possible future, plan an immersive experience, and more. It’s much easier to be a generalist when you have a general-purpose tool that can quickly get you to a B+ level at almost anything, so this is quickly becoming indispensable in the profession. Share your best use cases / successes in the comments.
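To seed that discussion, here is one hedged sketch of the scenario-drafting use case, again reusing the ask() helper from above. I’m using Dator’s four generic futures as the archetype set and a made-up topic as the domain; swap in whatever framework and subject you actually work with:

```python
# Reuses the ask() helper from the first sketch.
archetypes = ["Continued Growth", "Collapse", "Discipline", "Transformation"]
topic = "primary care in 2035"  # placeholder domain

# One quick B+ draft per archetype, ready for human refinement.
for archetype in archetypes:
    draft = ask(
        "You are a professional futurist who writes plausible, internally "
        "consistent scenarios.",
        f"Draft a 300-word '{archetype}' scenario about {topic}, "
        "including two signals we might watch for today.",
    )
    print(f"## {archetype}\n{draft}\n")
```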
1. Sure, some of it is hype, no argument. But I don’t always get caught up in hype - blockchain and the Humane AI pin are two examples where I was pretty skeptical from the beginning.
2. For example, I presented on my work using AI to accelerate futures game design/development at Houston’s Foresight Spring Gathering last week (more to come on this, I promise).
3. I think Amazon is probably telling the truth here - the media frequently conflates the asynchronous work people do to improve algorithms over time with the actual functioning of those algorithms.
4. Unfortunately, the scheme for these is just four different speeds of AI development, rather than something spicier like a 2x2 or the Houston archetypes.
5. And these are big: trust collapse, AI romance, and the transformation of knowledge work, including the collapse of writing as a costly signal / “proof-of-work”.
6. Thankfully, this is rare in the book and disappears in the second half.