When I went to see the new Mission: Impossible movie in theaters with my teenage son[1], I didn’t expect it would have anything to do with my futures practice. I didn’t know anything going in - but what’s to know? Stuff will blow up and Tom Cruise will save cinema from COVID and Disney[2]. I was wrong! As is revealed before the opening credits, the newest threat to global stability[3] is a rogue AI manipulating digital systems. The film rubbed up against two futures issues I want to discuss: one an emerging issue I’ve been paying attention to (which I think the film handles pretty well), and one a common misperception about predicting the future (where the film stumbles).
The main ability of this AI is to infiltrate data networks and corrupt them by fabricating data on the fly (including audio, video, etc.). In some ways, the rise of generative AI means this is a problem we’re already dealing with. Elsewhere, I’ve referred to this as “trust collapse”, because it makes it impossible to use media to establish truth without knowing its provenance - this will soon complicate prosecutions, and it’s already messing with news media. The idea of this happening autonomously is only slightly more frightening than the fact that it’s already starting to be done by thousands of people with no purpose other than to spread chaos for fun.
The issue I had was with the AI’s uncanny ability, as a consequence of its analytical prowess, to predict the course of future events - if X happens, Y object will be on Z train the following day, and so on. The assumption that a powerful enough computer can brute-force usable probabilities for the future is reasonable for well-understood, well-measured systems with enormous amounts of data[4], but inappropriate for anything else, especially once individual human agency enters the picture. It is, however, a common trope in popular media, with the droids in Star Wars being the most obvious example; it also crept into Silo[5], where I found it an annoying distraction from the otherwise excellent worldbuilding.
It’s a bit of a dangerous idea because it suggests that if we just think or measure hard enough, the course of the future will be obvious - which stands in stark contrast to the Foresight practice of helping people think through a wide array of possible futures. It’s unfortunate as well, because the same effect is possible without resorting to this trope: an AI system with access to data and resources across the world, and with no emotional attachment to prior plans, could be endlessly improvisational, constantly reconfiguring its plans to drive toward its objective as obstacles emerge. That would feel superhuman, but it’s grounded in a familiar experience: a navigation system re-routing the moment you miss a turn.
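That re-routing behavior requires no prediction at all, just replanning. Here’s a minimal sketch of my own (nothing from the film, and every name in it is hypothetical): a pathfinder on a grid that only discovers an obstacle when it’s about to step onto it, then updates its map and recomputes a fresh route from wherever it currently stands, with no attachment to the original plan.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a 4-connected grid (0 = open, 1 = blocked).
    Returns a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def navigate(grid, start, goal, surprise_obstacles):
    """Walk toward goal, replanning whenever the next planned step
    turns out to be blocked (like a GPS re-routing after a missed turn).
    Mutates grid as the walker learns about obstacles."""
    position, route = start, shortest_path(grid, start, goal)
    while route and position != goal:
        next_cell = route[route.index(position) + 1]
        if next_cell in surprise_obstacles:       # obstacle emerges mid-journey
            grid[next_cell[0]][next_cell[1]] = 1  # update the world model...
            route = shortest_path(grid, position, goal)  # ...and replan from here
        else:
            position = next_cell
    return position
```

The point of the sketch is that the system never forecasts where the obstacle will be; it just recomputes the best route from its current state whenever reality disagrees with the plan. Obstacles the route never touches are never even noticed.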
I’m confident that nothing I say will persuade anyone to see or skip the film, but I was genuinely surprised that it touched so closely on two issues that have been on my mind lately, and I wanted to share that with my tiny corner of the internet.
[1] We got in the mood by racing from the car to the theater, Tom Cruise-style - something I don’t recommend doing in boots.
[2] Plot twist on the second count: cinema was actually saved the following week by the oddest of odd couples.
[3] For those keeping track, the threats in the series so far have been: exposure of private information; an engineered virus (and slow-motion doves); an unspecified biohazard; disinformation and nuclear war; government employees without democratic oversight; and nuclear terrorism.
[4] For example, despite constant complaining, weather forecasting is really quite good.
[5] In episode 3, the head of IT can tell who would make the best sheriff because he “ran the numbers”.