Those of us who build robots used to be jocular when friends and family, looking for the insider’s perspective, asked whether robots were going to take over. “No worries!” I’d say, “When they chase you, just head to shag carpeting and close the door.”
That glib response was buttressed by technical truths: while robots might best us at repetitive factory tasks requiring precision and strength, no robot has come close to matching our ability to perform the range and variety of everyday physical work—standing, walking, grasping, tossing, and jumping.
Just as important, when surprised, we humans adjust our actions quickly to achieve our goal, or we change our goal. Humanoid robots, by contrast, fail even at simple goals, like “move to the other room behind that closed door.” The best of them have been slow, prone to falls, and toddler-proud when they manage to negotiate a few stairs. Until now.
I was blown away by the most recent model of Atlas, the humanoid robot created by engineers at Boston Dynamics. Last November, the company released videos of Atlas jumping, flipping, and sticking landings with the grace of a gymnast. “The world’s most dynamic humanoid,” the company claims on its website. I agree (and, full disclosure: I am not on their payroll).
You might say: “Aha! Now you must be worried!” But I’m still not, even though my joke about shag carpeting and doors no longer works. My comfort with these athletically impressive machines stems from at least two features of human psychology that I reckon drive our fear.
First, when we worry about a robot revolution, we are often worried about our own personal loss of control and purpose in a rapidly changing world of work. You see this in the media as the “Robots Are Taking Our Jobs!” headline. Second, any sci-fi takeover scenario—be it Skynet, Cylons, or Westworld’s Hosts—requires the magic of emergent consciousness, a deus ex machina plot device in reverse, creating conflict instead of resolving it. Poof! Consciousness!
But there’s no science behind that fiction. Yet. As it stands, we don’t have a comprehensive theory of consciousness in lifeforms, let alone of what is necessary and sufficient for consciousness in a machine. What we do know is that in organisms consciousness didn’t emerge spontaneously: even if you hold that only humans are conscious, you have to own that we evolved, and that building consciousness took time—millions of years of it. I’m not saying that consciousness is unknowable or beyond science. But where we stand today, the scientific record, as I read it, offers no clear pathway for humans to design and build a machine that will have—or be capable of developing—human-style consciousness. Knowing that this situation may change, for the moment I don’t worry about bots.