Earlier this September, billionaire entrepreneur Elon Musk stirred up a huge Twitter-storm when he posted that global competition in artificial intelligence (AI) superiority could potentially lead to World War III. This tweet came after Russian President Vladimir Putin declared, “Whoever becomes the leader in [artificial intelligence] will rule the world,” to which Musk tweeted, “It begins…” (Independent, “Elon Musk Warns Battle For AI Supremacy Will Spark Third World War,” 09.06.2017).
Of course, Elon Musk is rather infamous for making grandiose predictions and promises that often fail to come true. In 2016, he announced that his company SpaceX would master space travel and colonize Mars as early as 2024, only to pull the plug less than a year later when he realized that his timeline wasn’t exactly feasible (Wired, “SpaceX’s Mars Plans Hit a Pothole. Up Next: The Moon?,” 07.22.2017). However, Musk’s tweet about World War III is just one of many warnings he has issued about the dangers of artificial intelligence, some going so far as to reference the “Terminator” movies.
“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” he stated at the 2017 National Governors Association in Rhode Island (NPR, “Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk,’” 07.17.2017).
But is the situation really that dire? Massimiliano Versace, CEO of the robotics and computing company Neurala, argues that these doomsday predictions surrounding AI are largely unsubstantiated (CNBC, “Why this artificial intelligence expert says Elon Musk is ‘selling fear,’” 09.06.2017). In fact, his biggest complaint is that non-experts like Musk, who have no clue how AI actually works, seem to be dominating the discussion. In contrast to Musk’s warnings, Versace says that it is much too early to start regulating AI and that doing so would hinder innovation.
Several other critics have also pushed back against the robot-apocalypse scenario that Musk seems to predict. Google co-founder and CEO Larry Page made the case that AI is designed to make people’s lives easier so that they have more time to pursue their own interests (Vanity Fair, “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse,” 04.2017). Likewise, Facebook’s Mark Zuckerberg compared fears about AI to early fears about airplanes and encouraged people to “choose hope over fear.”
On the other hand, it’s not like Musk is the only dissenting voice in the room. Renowned theoretical physicist Stephen Hawking similarly expressed how artificial intelligence could spell the end of the human race, and Microsoft’s Bill Gates voiced his worries that AI might become a problem after it becomes intelligent enough to dominate the workforce (Huffington Post, “Should We Fear Artificial Intelligence? The Experts Can’t Seem to Agree,” 04.07.2015).
However, rather than a “Terminator”-style takeover, the bigger concern for me, from a cultural perspective, is the direction in which AI might take the world.
It’s undeniable that today’s society places a disproportionate amount of attention on science and technology over every other discipline. Given how dependent on machines we’ve become, it’s no surprise that so many people hold degrees in math-intensive STEM subjects such as computer science, robotics and electrical engineering, and that we place these individuals atop lofty pedestals. As a result, pursuing a degree in the humanities is widely seen as a high-risk gamble in the increasingly bloodthirsty modern arena known as the job market. The problem is that the widespread implementation of AI will likely exacerbate this imbalance.
Last March, U.S. Treasury Secretary Steve Mnuchin brushed aside all concerns about AI and stated that “In terms of artificial intelligence taking over the jobs, I think we’re so far away from that that it’s not even on my radar screen.” Unfortunately, Mnuchin couldn’t be more wrong (Wired, “Hate To Break It to Steve Mnuchin, But AI’s Already Taking Jobs,” 03.24.2017). In reality, AI has already started to seep into the workforce.
Let’s list some examples. In San Francisco, Simbe Robotics’s Tally robot can navigate around human shoppers at the supermarket to make sure that everything is stocked, placed and priced properly (The Guardian, “Robots Will Destroy Our Jobs—and We’re Not Ready For It,” 01.11.2017). Meanwhile in Japan, Fukoku Mutual Life Insurance has already replaced 30 of its employees with an AI system that can analyze and interpret data better and much faster than a human can (Fast Company, “Bet You Didn’t See this Coming: 10 Jobs That Will Be Replaced By Robots,” 01.19.2017). Artificial intelligence is also replacing financial analysts in the business sector simply because it can predict market patterns faster.
Not only that, careers thought to be safe from the encroaching tech revolution—such as journalism and teaching—are now at risk as well. For instance, companies such as Narrative Science and Automated Insights have created AI bots that write countless business and sports articles for clients like Forbes and the Associated Press (The Guardian, “Actors, Teachers, Therapists—Think Your Job is Safe From Artificial Intelligence? Think Again,” 02.09.2017). The United States military also relies on a computer-generated virtual therapist to screen soldiers in Afghanistan for PTSD, and physical robots are being used in Japan and Korea to teach English. Even actors could be replaced, as with the digital recreation of Grand Moff Tarkin in “Rogue One: A Star Wars Story.” Given the efficient and cost-effective nature of AI, it won’t be long until these systems are used in practically every industry.
Of course, there are various reassuring arguments out there. A common response is that new jobs will naturally form as old jobs are automated away. But exactly what kind of job do you think will be in demand once more and more companies implement AI in their businesses? An insightful article by Clive Thompson states it best in its headline: “The Next Big Blue-Collar Job Is Coding” (Wired, 02.08.2017). Sure, jobs won’t completely disappear, but I predict that the tech industry will be the only area in dire need of employees.
Another common response is that a greater focus on STEM education will eventually solve everything. Jenny Dearborn, an executive at the software company SAP, argues that young people today have a responsibility to become more educated in technology. “If you want to do health care, major in tech with a healthcare focus. If you want to be a neuroscientist, major in tech with that focus,” she emphasized (CNBC, “Why Elon Musk Might Be Right About His Artificial Intelligence Warnings,” 08.25.2017).
However, that’s easier said than done. The United States already lags behind in STEM education compared to the rest of the world, and considering how our current Secretary of Education is a billionaire who has spent millions of dollars fighting against government regulations and crippling teachers’ unions by taking away their right to strike, I’m not feeling too optimistic (The Guardian, 01.11.2017). Plus, what if you’re simply not naturally inclined toward skills in STEM? What about people who just don’t enjoy it?
Obviously, the last thing I want to do is bash the STEM disciplines or discourage people from pursuing STEM careers. I truly believe that science and technology can inspire wonder and excitement for everyone. However, I worry that students who discover their passions in the humanities will end up squeezed under the STEM-oriented educational system even more than they are today. As a college student who once planned on majoring in the humanities, I’d hate to imagine what job searching will be like in a future where AI has made that notoriously grueling, overly competitive process even harder.