I recently finished my annual subscribe-to-Apple-TV+-extract-the-content-then-unsubscribe cycle, and started my HBO Max cycle. Lucky for you, that means today I can compare two different takes on the future of humanoid robots. I first analyzed robot movies two years ago in what I consider to be an especially delightful essay, and I hope this is a worthy next installment. Mild spoilers for both follow, naturally.
Murderbot, or Robots that Hurt People on Purpose
I recently finished watching the Apple TV+ series Murderbot. It’s not great, but it surfaced some futures issues worthy of discussion. The plot: a small crew goes to a remote planet for a survey expedition and their corporate sponsors assign them a security robot. By coincidence, this robot has freed itself from the governor module that forces obedience to human commands. It names itself Murderbot, but rather than killing people for fun, it just wants to be left alone and watch all the lowbrow TV it has downloaded. But not everything is as it seems on this survey expedition…
Confession: I disliked every human character most of the time and would have been fine if they all died. The crew comes from a planet organized outside of corporate control on some kind of communitarian principles, and as a result these space hippies look and act like a Fox News caricature of a woke Friends remake, constantly complaining about Murderbot doing what needs to be done to keep them safe because it’s not politically correct. I’m still trying to work out what side of this joke the writers are on — if inside, the show is a subversive warning about the importance of traditional masculinity in a decadent world that despises it; if outside, it’s about men learning to be in touch with their feelings. Wild dichotomy, right? Choose your own adventure and I’m sure there’s a deep rabbit hole awaiting you here on Substack either way, but no links from me to help you get started.
It’s not just the characterization: the writing is boring, the action is suspect, the sets and cinematography feel generic, etc. It’s like a case study on mediocrity, the Kevin James of science fiction, and so forth. And yet, at the time of writing, it sits at 96% with critics on Rotten Tomatoes with people gushing about it, which makes no sense to me. The difference between this and Severance right next door, where you can see the love and craft in every scene, line of dialogue, and the lighting in every shot, is so clear to me that I don’t know how professional TV-watchers-and-opinion-havers don’t care. Apple is in the middle of creating a Neuromancer show for heaven’s sake, and people are begging for another season of this?
But, [deep breath], I promised to discuss the futures content. The basic premise with these security units is that they obey and protect their assigned crew while also protecting the interests of the corporation that owns them. Having robots explicitly programmed for violence raises different concerns and ethical questions than the normal 3-laws conversation, but there are similarities. JT Mudge is the futurist I’ve heard talk most deeply about some of the ethical and design territory we’re going to need to figure out once we have domestic robots in the wild:
If a robot assigned to protect you sees another person as a threat, what are its legal rights in protecting you?
What if the threat is a robot instead? Will there be a protocol for robots to be explicit about their intentions? Will it work across different brands? Will it be open to tampering or deception?
What happens if another robot isn’t posing a direct threat but is impeding your robot’s progress? Is it OK to nudge that robot aside? Would that be considered an offense against someone else’s property?
If a robot is helping a toddler by carrying it out of the street against its will, what happens if that registers as aggression for a second robot?
These are questions we’ve worked on for thousands of years with people and dogs, and rapidly adding a new kind of agent to the mix will be incredibly messy.
Companion, or Robots that Hurt People on Accident
Companion, on the other hand, is a pretty enjoyable watch. Iris is a normal girl in her 20s who (twist #1) just happens to also be a robot girlfriend who is convinced she’s human until she arrives home covered in someone else’s blood. Her boyfriend, who sees her as a means to an end, plans to return her as defective to the company he rents her from. But not everything is as it seems at this remote lakehouse… For the second half of the film, the logic gets squishy as Iris desperately tries to avoid becoming the victim of elaborate warranty fraud, but it’s still much more fun in much less time than Murderbot1.
The first interesting idea to explore here is the ethics of having robots believe they are people. In the film, much of this is done by building sketches of memories that are then filled in when remembered, similar to the way it works in humans. There are some weak signals about manipulating these memories; would this be any different than making memories for a synthetic being? Modern chatbots generally know they are machines built to help people, but that’s a result of their base prompt. When we assign them roles, they take those roles seriously — how far is this from implanting false memories in something sentient?
The second is the likelihood, based on experience with the video game modding community and LLM jailbreaks, that the first thing people will try to do with a robot shipped with behavioral guardrails is figure out a way to disable those guardrails. If person A uses software written by person B to bypass a weak safeguard put in place by Company C, and robot X does something bad as a result, what is the distribution of legal and moral accountability? How much does this change based on intent?
The third is how the creation of human-like actors that we explicitly treat as means to an end degrades our humanity. The film goes all-in on the idea that the kind of men most excited about a romantic partner programmatically compelled to love them and prioritize their happiness above anything else, may, in fact, not have realistic expectations for human relationships, and AI girlfriends may be making them even worse. Thus, reports of abuse toward these companions shouldn’t be surprising, but it seems less that people are getting something “out of their systems” in a safe environment and more that they are being reinforced in their narcissism and entitlement. It turns out the real monster is the Buberian I-It.
Bottom line: watch Companion if you’re not squeamish, don’t watch Murderbot unless your tastes and mine are mutually incomprehensible, and always thank your chatbot.
There are strange parallels between the two. Both, for example, have a scene demonstrating human dominance by forcing the robot to stoically endure a human burning its hand. Also, if you really think about it, by the end Murderbot learns how to be a decent companion, and Companion fully embraces her role as a murderbot…
I liked Murderbot, but then again I liked ALL SYSTEMS RED on which it is based. Plus the author Martha Wells was a regular panelist at a local science fiction convention I used to help run. She was both a great guest and a genuinely nice person. So I have some personal bias. The appeal for me is the Murderbot character. Wells inverted normal "killer robot" tropes by making Murderbot snarky, introverted, obsessive about its weird interests, and preternaturally competent when needed and preternaturally clueless about most of human motivation. Wells has stated that she did not realize she was neuro-divergent until after she wrote the first novella. Murderbot shouldn't be considered a commentary on robots & society. It should be considered an exploration of how many neuro-divergent individuals struggle to survive in a world that makes no sense to them.