What “Blade Runner” got right — and wrong — about our 2019 tech
Dec 4, 2019

We do ask ethical questions about bots. But the movie’s guesses about 2019 tech were kinda off. Discuss!

The 1982 science fiction classic “Blade Runner” was set in November 2019, in Los Angeles. But the LA envisioned by director Ridley Scott is very different from the LA you’d recognize today.

For one thing, it’s raining all the time, and the movie’s dystopian hellscape is full of flying cars, pervasive technology and artificial humans — or replicants — almost indistinguishable from real humans. Also, almost everyone smokes. 

Aside from the obvious, how far off is the movie from present-day 2019? And what did the movie get right? I spoke with Amy Webb, founder and CEO of the Future Today Institute, about all of that. I started off asking her what “Blade Runner” got right. The following is an edited transcript of our conversation.

Amy Webb: I think some of the voice commands — being able to talk to computers. I know that doesn’t seem like the most interesting or exciting piece of that movie, but I think that was a pretty big leap at the time. It would have been easy for people to imagine robots. People have always imagined robots. I think it would have been a much bigger leap to imagine an ambient interface.

Molly Wood: That’s such a good detail, because we have so much of that now that it almost doesn’t register. But you’re right, if you put yourself in 1982, you’re like, “Oh, OK.”

Webb: That’s exactly it, which is why flying cars and the colonies that have moved off planet — super interesting to think about, but also not that hard to imagine at that time. What would have been much harder to imagine would have been the kind of technology that’s currently invisible to us that we don’t even think about, we just use it. 

Wood: What do you think “Blade Runner” gets wrong? Where might it have missed the mark?

Webb: I think there are some obvious pieces, like how artificial intelligence would manifest. And I don’t blame anybody — for as long as we’ve had AI, we’ve been thinking about anthropomorphizing it. It shouldn’t really come as a shock that at the time, there was an idea that humans would live alongside human-like AIs, the replicants, or that those AIs would be bioengineered in ways that gave them superior cognition or superior physical strength and speed. I understand where those ideas came from, but obviously, they were incredibly wrong. AI is all around us; it just didn’t show up that way. AI is, in many ways, replicating human speech and behavior. I don’t think in ’82 we had enough computers and enough devices for people to envision a future where androids or replicants walked among us, but we have some of that now in the form of bots.

Wood: So you’re saying even though they’re not walking around, we do have basically passable programs?

Webb: I think our current fake news problem would tell us that they’re pretty believable. The replicants in the movie, for the most part, walked and talked and looked like humans. I guess what I would say is this is kind of like a replicant, just in a different container.

1982’s “Blade Runner” imagined 2019 Los Angeles as a rainy, tech-heavy dystopia where everyone smoked. (Photo courtesy of Warner Bros.)

Wood: I want to talk about the ethics at the core of the movie, because the germ of the whole plot is the idea that these replicants have rebelled against being used as slaves. We now have, in the physical world, factory robots and care robots and potentially robot pets, and in the digital world, all of these bots that do all kinds of bidding. I wonder what you think about the ethical implications of our development of AI and robots that do our programmed bidding.

Webb: There have been some studies showing that we don’t empathize with robots that look very industrial and don’t have any human-like characteristics. If they’re doing repetitive tasks or even causing self-harm, it’s interesting for us to watch, but we don’t care. Once a robot looks and behaves more like another living being — in this case, I’m thinking of all the Boston Dynamics robots that you’ve probably seen videos of, or replicants — we relate to it in a different way. I think, again, this sets us up on a dangerous path to the future, because we ought to ask ethical questions not just about the service robots in our lives, but also about the ways in which humans are being asked to act like robots. One of the interesting outcomes that we never saw in “Blade Runner” or “2001: A Space Odyssey” or “The Terminator” — the canon of sci-fi that deals with AI — was a future in which the humans are functioning as robots, and the machines are tasked with the cognitive work. That’s what we’re starting to see happen already. There are plenty of companies — Amazon is one of them — that rely on a synergy between humans and robots.

It turns out that it costs a lot of money to get robots to do a lot of the fine motor work, which, quite frankly, they’re not terrific at right now. It turns out they’re much better at some of the cognitive thinking skills. It’s more efficient, in terms of both energy and cost, for the machines to do all of the cognitive work and for the humans to do the robotic work. When we think about empathy and robots, we tend to go to these places where we’re asking if it’s OK for us to enslave machines or treat them in these ways. I don’t think we’ve asked the question: Is it OK to enslave humans in a different way? Is it possible — slave is a very loaded term — that we’ve cognitively boxed in or enslaved human cognition?

Wood: It does seem like these questions of empathy for your fellow person or bot on the internet — the idea of what you can trust and what you can’t — those seem to have been very prescient thoughts, right?

Webb: Absolutely. And so do some of the other constant themes, like control, that reflect our current anxieties, I think, and certainly some of the anxieties of the time — computers making their way from corporations into people’s homes; communications devices, like very early cellphones, finding their way into more and more people’s hands. It was the very early days of the commercial internet. The entire communications landscape was starting to change pretty drastically. I think people were starting to wrestle with questions that had to do with control. Who controls the police force? Who controls health care? Who controls medical supplies? Who gets access? Who doesn’t get access? Who winds up with permissions? Who doesn’t wind up with permissions? These are pretty gnarly questions that we’re being asked, in addition to things like: Is it possible that humans have empathy, or that the replicants have empathy, in ways that don’t necessarily complement each other? And what could that mean? I think these are challenging issues that we’ve been wrestling with for a long time.

Wood: All credit to our engineer Robyn, who right before this interview sent me the side-by-side photos I had seen on Twitter. Elon Musk said the new Cybertruck he just announced was inspired by “Blade Runner,” and until I saw the photos, I had not realized just how inspired. Of all the things that could come true from “Blade Runner,” did you think the truck would physically manifest?

Webb: It’s a good reminder that a lot of our current leaders in technology have been heavily influenced by sci-fi. Jeff Bezos is a massive Trekkie, and his days growing up watching “Star Trek” have clearly influenced the path that he’s currently on. Elon Musk has clearly been influenced by a lot of those ’80s and ’90s sci-fi movies. Those are good things to remember, especially for current and aspiring filmmakers. Sometimes you don’t realize that really terrific storytelling and highly effective visuals, along the lines of what [visual futurist] Syd Mead used to put together, can have real-world impact later on.

Elon Musk with his “Blade Runner”-inspired Cybertruck at the Tesla Design Center in Hawthorne, California, in November. (Frederic J. Brown/AFP via Getty Images)

Related links: More insight from Molly Wood

In the future, we will have an ongoing conversation about whether bots should pretend to be humans, whether they have to disclose that they’re actually bots and whether people can tell the difference between an online persona and a real person. But we’re still working on something like the Voight-Kampff test from “Blade Runner” that can distinguish a human from a replicant.

There is a start, though. A piece I found in Curiosity last year said researchers from MIT have developed something called the Minimal Turing Test. The original Turing test is the threshold for a computer to trick someone into thinking it’s a human; the minimal version is more about figuring out what single word human judges think a robot would say, compared to what they think a human would say. The best word to use to convince human judges that you’re also a human is apparently “poop.” Didn’t see that one coming, did you?
