'Summoning the demon.' 'The new tools of our oppression.' 'Children playing with a bomb.' These are just a few ways the world's top researchers and industry leaders have described the threat that artificial intelligence poses to mankind. Will AI enhance our lives or completely upend them?
There’s no way around it — artificial intelligence is changing human civilization, from how we work to how we travel to how we enforce laws.
As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. In California, a Tesla Model 3 owner died while using the car’s Autopilot feature. In Arizona, a self-driving Uber vehicle struck and killed a pedestrian, even though a safety driver was behind the wheel.
Other instances have been more insidious. For example, when IBM’s Watson was tasked with helping physicians diagnose cancer patients, it gave numerous “unsafe and incorrect treatment recommendations.”
Some of the world’s top researchers and industry leaders believe these issues are just the tip of the iceberg. What if AI advances to the point where its creators can no longer control it? How might that redefine humanity’s place in the world?
Below, 52 experts weigh in on the threat that AI poses to the future of humanity, and what we can do to ensure that AI is an aid to the human race rather than a destructive force.
Table of contents
- Unpredictable behavior
- Political instability and warfare
- Ethical and societal impacts
- Surpassing human intelligence
- Reshaping the workforce
Unpredictable behavior
1. Stephen Hawking
AI technology could be impossible to control
Stephen Hawking with President Obama.
The late Stephen Hawking, world-renowned astrophysicist and author of A Brief History of Time, believed that artificial intelligence would be impossible to control in the long term, and could quickly surpass humanity if given an opportunity:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
2. Elon Musk
Regulation will be essential
Few technologists have been as outspoken about the perils of AI as Tesla founder Elon Musk.
Though his tweets about AI often take an alarmist tone, Musk’s warnings are as plausible as they are sensational:
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and are entrusted with mission-critical responsibilities:
“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”
Musk has compared the destructive potential of AI networks to the risks of global nuclear conflict posed by North Korea:
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
He has also pointed out that AI doesn’t necessarily have to be malevolent to threaten humanity’s future. To Musk, the cold, immutable efficiency of machine logic is as dangerous as any evil science-fiction construct:
“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
3. Tim Urban
We cannot regulate technology that we cannot predict
Tim Urban, blogger and creator of Wait But Why, believes the real danger of AI, and of artificial superintelligence (ASI) in particular, is that it is inherently unknowable. According to Urban, there’s simply no way we can predict its behavior:
“And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.”
4. Oren Etzioni
Deep learning programs lack common sense
Considerable problems of bias and neutrality aside, one of the most significant challenges facing AI researchers is how to give neural networks the kind of decision-making and rationalization skills we learn as children.
Oren Etzioni speaking at the Office of Naval Research.
According to Dr. Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, common sense is even less common in AI systems than it is in most human beings — a drawback that could create additional difficulties with future AI networks:
“A huge problem on the horizon is endowing AI programs with common sense. Even little kids have it, but no deep learning program does.”
5. Nick Bilton
AI will have unintended consequences
Nick Bilton. (Om Malik)
Other experts fear the unintended results of AIs being given increasingly mission-critical tasks. Author and magazine journalist Nick Bilton worries that AI’s ruthless machine logic may inadvertently devise deadly “solutions” to genuinely urgent social problems:
“But the upheavals [of AI] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
6. Nick Bostrom
We aren’t ready for the challenges posed by AI
Nick Bostrom at the Future of Humanity Institute. (Future of Humanity Institute)
Academic researcher and writer Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, shares Stephen Hawking’s belief that AI could rapidly outpace humanity’s ability to control it:
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”
Political instability and warfare
7. Vladimir Putin
AI will have a profound impact on global politics
Vladimir Putin in January 2017. (Kremlin)
World leaders need little convincing of AI’s unprecedented capacity to reshape the geopolitical landscape. Russian President Vladimir Putin, for example, firmly believes that mastery of AI technology will have a profound impact on global political power:
“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
8. Jayshree Pandya
Autonomous weapons systems could disrupt political stability
Few applications of AI are as potentially dangerous as autonomous weapons systems. As DARPA and other defense agencies around the world explore how AI could shape the landscape of modern warfare, some experts are deeply concerned by the prospect of relinquishing control over devastating weaponry to neural networks.
Jayshree Pandya, founder and CEO of Risk Group LLC and an expert in disruptive technologies, has warned that AI-controlled weapons systems could pose an existential threat to world peace:
“Technological development has become a rat race. In the competition to lead the emerging technology race and the futuristic warfare battleground, artificial intelligence (AI) is rapidly becoming the center of global power play. As seen across many nations, the development in autonomous weapons systems (AWS) is progressing rapidly, and this increase in the weaponization of artificial intelligence seems to have become a highly destabilizing development. It brings complex security challenges for not only each nation’s decision makers but also for the future of the humanity.”
9. Bonnie Docherty
Killing machines that lack morality
Some view the competition among software developers to create increasingly sophisticated AI as a contest eerily reminiscent of the Cold War-era nuclear arms race.
Bonnie Docherty, associate director of Armed Conflict and Civilian Protection at the International Human Rights Clinic at Harvard Law School, believes that we must stop the development of weaponized AI before it’s too late:
“If this type of technology is not stopped now, it will lead to an arms race. If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”
10. Max Erik Tegmark
Information warfare will be an even greater threat
Technological advancements such as autonomous vehicles represent a paradigm shift in human society. According to Max Erik Tegmark, physicist and professor at the Massachusetts Institute of Technology, they also represent weaknesses that rogue actors will be able to exploit in future wars:
“The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.”
11. Gideon Rosenblatt
‘AI nationalism’ could fuel political conflict
For all the idealism of machine learning entrepreneurs, it is virtually impossible to separate the scientific from the political when it comes to potential applications of AI technology.
Writer Gideon Rosenblatt believes that robust, forward-thinking policies must be enacted in conjunction with developments in AI to ensure that the governments of the world are adequately prepared for the vast disruption that AI promises:
“AI nationalism, for the US and China, seems to be paying off in the short term. But it seems irresponsible to assume there’ll be no consequences to developing cutting-edge AI without policies and development guidelines specific to that technology.”
12. Jon Wolfsthal
AI oversight must be an informed, national debate
Some experts are concerned that the Pentagon and other national defense bodies around the world are too focused on developing autonomous weapons systems and not focused enough on regulating them.
Jon Wolfsthal, nonresident fellow at the Project on Managing the Atom at Harvard University and former senior director at the National Security Council for Arms Control and Nonproliferation, believes that more must be done to address the urgent need for regulatory oversight of disruptive weapon technologies:
“We may not be able to stop lethally armed systems with artificial intelligence from coming online. Maybe we should not even try. But we have to be more thoughtful as we enter this landscape. The risks are incredibly high, and it is hard to imagine an issue more worthy of informed, national debate than this.”
13. Ian Hogarth
AI will encourage regional protectionism
Some researchers fear that increased adoption of AI will exacerbate today’s polarized political climate. Machine-learning engineer Ian Hogarth believes that artificial intelligence will invariably result in the rise of “AI nationalism”:
“Continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the economy and the military by machine learning will create instability at the national and international level forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent.”
Ethical and societal impacts
14. Tim Cook
AI must respect human values
One aspect of AI that is discussed far less frequently than its potential for destruction is whether AI can be taught to respect human ethics.
A car from Apple’s self-driving car program, Project Titan.
Apple CEO Tim Cook has long been an outspoken advocate for user privacy. He argues that creating AI systems that can interpret and value ethical approaches to society’s problems is a serious responsibility to future generations that companies like Apple must reckon with:
“Advancing AI by collecting huge personal profiles is laziness, not efficiency. For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”
15. Olga Russakovsky
Diversity is essential to solving difficult problems with AI
The under-representation of women in computer science and information technology is an ongoing concern for business leaders, technology companies, and academia. Author and machine vision expert Olga Russakovsky says greater diversity in the AI field is essential if the technology is to solve society’s most difficult problems:
“We are bringing the same kind of people over and over into the field. And I think that’s actually going to harm us very seriously down the line…diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”
16. Theresa May
Making AI work for everybody
Theresa May (Raul Mee)
British Prime Minister Theresa May has long been an outspoken advocate of AI technology. She acknowledges the inherent risks in the technology’s advancement, and emphasizes that properly channeling its power is crucial for humanity:
“British-based companies…are pioneering the use of data science and Artificial Intelligence to protect companies from money laundering, fraud, cyber-crime and terrorism. In all these ways, harnessing the power of technology is not just in all our interests — but fundamental to the advance of humanity…Right across the long sweep of history — from the invention of electricity to the advent of factory production — time and again initially disquieting innovations have delivered previously unthinkable advances and we have found the way to make those changes work for all our people. Now we must find the way to do so again.”
17. Kenneth Stanley
AI could harm already vulnerable people
Some technologists worry that AI will be used to hurt and oppress people. Kenneth Stanley, senior engineering manager and staff scientist at Uber AI Labs, is one such individual.
In Stanley’s view, AI could pose a grave danger to the most vulnerable members of society, a problem that requires a holistic approach to technological oversight:
“I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.”
18. Tabitha Goldstaub
More bias, more misogyny, fewer opportunities
Tabitha Goldstaub, co-founder of AI market intelligence platform CognitionX, explains that failing to account for gender bias as AI technology advances could be catastrophic for women’s rights:
“We’re ending up coding into our society even more bias, and more misogyny and less opportunity for women. We could get transported back to the dark ages, pre-women’s lib, if we don’t get this right.”
The dangers of unequal gender representation in AI aren’t solely ideological:
“Men and women have different symptoms when having a heart attack — imagine if you trained an AI to only recognize male symptoms. You’d have half the population dying from heart attacks unnecessarily.”
19. Brian Green
The ethical considerations of AI are of paramount importance
According to Brian Green, director of technology ethics at Santa Clara University, AI is the most important technological advancement since mankind harnessed the power of fire in the Stone Age:
“There are a lot of people suddenly interested in A.I. ethics because they realize they’re playing with fire. And this is the biggest thing since fire.”
20. Tess Posner
AI is anything but perfect
Tess Posner, CEO of nonprofit advocacy group AI4ALL, is keenly aware of AI’s limitations, especially when it comes to perpetuating existing societal biases:
“A lot of people assume that artificial intelligence…is just correct and it has no errors. But we know that that’s not true, because there’s been a lot of research lately on these examples of being incorrect and biased in ways that amplify or reflect our existing societal biases.”
21. Andrew Ng
Ethics in AI is about more than ‘good’ or ‘evil’
Andrew Ng, co-founder of Google Brain and former chief scientist of Baidu, believes questions about the ethics of AI are much bigger than individual use cases:
“Of the things that worry me about AI, job displacement is really high up. We need to make sure that wealth we create [through AI] is distributed in a fair and equitable way. Ethics to me isn’t about making sure your robot doesn’t turn evil. It’s about really thinking through, what is the society we’re building? And making sure that it’s a fair and transparent and equitable one.”
22. Sundar Pichai
Tech companies must develop AI responsibly
As one of the world’s largest and most influential technology companies, Google is in a unique position to advocate for the use of AI technology in everyday life.
Google Home, one of the company’s first forays into in-home AI.
The company has been using AI and neural networks for several years, but CEO Sundar Pichai believes that increasingly sophisticated AI tech must be used responsibly:
“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”
23. Satya Nadella
Overcoming human bias in AI is vital
Satya Nadella at LeWeb. (Heisenberg Media)
With AI technology poised to revolutionize virtually every industry and vertical, it’s vital that major tech companies approach the development of AI technology responsibly.
Microsoft CEO Satya Nadella sees AI and machine learning transforming every aspect of modern life:
“Digital technology, pervasively, is getting embedded in every place: every thing, every person, every walk of life is being fundamentally shaped by digital technology—it is happening in our homes, our work, our places of entertainment. It’s amazing to think of a world as a computer. I think that’s the right metaphor for us as we go forward.”
Like Pichai and other leading technology executives, Nadella has warned of the risk of human biases being built into AI technology, which demands a deliberate, conscientious approach when developing AI applications:
“Technology developments just don’t happen; they happen because of us as humans making design choices—and those design choices need to be grounded in principles and ethics, and that’s the best way to ensure a future we all want.”
Nadella explains that part of the problem is that human language, the raw material of machine-learning systems and AI networks, is inherently biased. Unless researchers consciously account for such biases, supposedly “neutral” technology becomes deeply flawed:
“One of the fundamental challenges of AI, especially around language understanding, is that the models that pick up language learn from the corpus of human data. Unfortunately the corpus of human data is full of biases, so you need to invest in tooling that allows you to de-bias when you model language.”
24. Joanna Bryson
AI may perpetuate negative biases
Joanna Bryson, an AI researcher at the University of Bath in England, reiterated the danger of unconscious bias affecting AI in a piece published by The Guardian:
“People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things.”
25. David Robinson
Biased data means biased decisions
Many technologists have spoken out about the potential for AI-driven systems to harm vulnerable people, particularly in the context of the criminal justice system.
David Robinson, managing director and founder of the think tank Upturn, has studied AI’s potential impact on everything from predictive policing to bail reform. He says that AI systems supplied with flawed data will inevitably perpetuate many of the injustices already felt across marginalized communities:
“The basic problem is those forecasts are only as good as the data they are based on. People in heavily policed communities have a tendency to get in trouble. These systems are apt to continue those patterns by relying on that biased data.”
26. Melinda Gates
Men are not the only ‘real technologists’
Melinda Gates (DFID)
More women and people of color are developing the technologies of tomorrow than ever before, and progress has been made in recent years toward rectifying the tech sector’s historical racial and gender disparities. But philanthropist Melinda Gates of the Bill & Melinda Gates Foundation believes that complacency could undermine much of this work and exacerbate existing problems:
“If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”
27. Geoffrey Hinton
AI could exacerbate social injustice
World-renowned computer scientist and “Godfather of Deep Learning” Geoffrey Hinton has been an outspoken skeptic of the applications of AI for many years.
Hinton, middle, in 2016. (Steve Jurvetson)
Echoing the warnings of Joanna Bryson and David Robinson, Hinton has spoken of the potential for AI technology to exacerbate systemic inequality, which he believes is a direct result of the flawed nature of many social systems:
“If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out to be a good thing depends entirely on the social system, and doesn’t depend at all on the technology. People are looking at the technology as if the technological advances are a problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology. . . . I hope the rewards will outweigh the downsides, but I don’t know whether they will, and that’s an issue of social systems, not with the technology.”
28. Fei-Fei Li
AI is ‘the responsibility of mankind as a whole’
Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute and the Stanford Vision and Learning Lab, stresses the urgent need for diversifying the AI field:
“As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity and we’re missing a whole generation of diverse technologists and leaders.”
Li believes that the moral and ethical responsibility for developing AI systems must be shared by private industry, government, and academia alike:
“We all have a responsibility to make sure everyone — including companies, governments and researchers — develop AI with diversity in mind…Technology could benefit or hurt people, so the usage of tech is the responsibility of humanity as a whole, not just the discoverer. I am a person before I’m an AI technologist.”
29. Rana el Kaliouby
AI must be emotionally and socially intelligent
Rana el Kaliouby (Joi Ito)
Rana el Kaliouby is the co-founder and CEO of Affectiva, which develops emotion recognition technology. El Kaliouby believes that social and emotional intelligence have not been prioritized enough in the AI field, which could be detrimental to society:
“The field of AI has traditionally been focused on computational intelligence, not on social or emotional intelligence. Yet being deficient in emotional intelligence (EQ) can be a great disadvantage in society.”
30. Daniela Rus
AI is not inherently ‘good’ or ‘bad’
AI may pose unprecedented risks, but Daniela Rus, roboticist and director of MIT’s Computer Science and Artificial Intelligence Laboratory, explains that AI itself is morally neutral:
“Critics often cite job displacement as a reason to discourage further AI research. But history is rife with innovations that have been disruptive: does anyone look back and regret Eli Whitney inventing the cotton gin or James Watt developing the steam engine? Like any technology, AI isn’t inherently good or bad. As my MIT colleague Max Tegmark likes to say, ‘The question is not whether you are ‘for’ or ‘against’ AI — that’s like asking our ancestors if they were for or against fire.’”
31. Martin Chorzempa
Pervasive intelligent surveillance has a social impact
AI technology will likely have a profound impact on law enforcement. Numerous police departments in the United States are already relying on automated facial recognition tech and predictive policing methods using algorithms.
A demonstration of facial recognition technology. (Wikimedia)
But according to Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, D.C., the mere threat of autonomous surveillance is an effective means of regulating the public’s behavior. This has significant implications for the social order over the coming decades, particularly in heavily surveilled nations such as China:
“The whole point is that people don’t know if they’re being monitored, and that uncertainty makes people more obedient.”
Surpassing human intelligence
32. Ray Kurzweil
The distinction between AI and humanity is already blurring
Ray Kurzweil at the PopTech conference. (JD Lasica)
Not all technologists see AI as a harbinger of doom. Futurist and author Ray Kurzweil views AI primarily as a tool for humans to expand their intelligence.
Kurzweil’s work focuses on what he calls “the singularity” — the point at which artificial superintelligence (ASI) will surpass the human brain and let people live forever. He says the merging of man and machine is inevitable:
“We’re merging with these non-biological technologies. We’re already on that path. I mean, this little mobile phone I’m carrying on my belt is not yet inside my physical body, but that’s an arbitrary distinction. It is part of who I am—not necessarily the phone itself, but the connection to the cloud and all the resources I can access there.”
33. Yann LeCun
Overstating the dangers of AI’s capabilities
To Yann LeCun, chief artificial intelligence scientist at Facebook AI Research, the biggest problem with AI isn’t its potentially nefarious applications, but rather a profound misunderstanding of the technology itself:
“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.”
34. Clive Sinclair
Humanity may not survive AI
The world of computing has advanced tremendously since British technologist Sir Clive Sinclair released the Sinclair ZX80, Britain’s first mass-market home computer, in 1980.
Even then, Sinclair recognized the potential of computers to surpass human intelligence, claiming that computers would herald the end of “the long monopoly” of carbon-based life forms on Earth.
Sinclair believes AI’s rise to dominance is inevitable, but not in the immediate future:
“Once you start to make machines that are rivaling and surpassing humans with intelligence it’s going to be very difficult for us to survive…But it’s not imminent and I can’t go round worrying about it.”
35. Karl Frederick Rauscher
AI may not act in humanity’s best interests
AI technology has advanced rapidly in recent years, and Karl Frederick Rauscher, managing director and CEO of the Global Information Infrastructure Commission (GIIC), fears that our dominance over machines will be short-lived:
“AI can compete with our brains and robots can compete with our bodies, and in many cases, can beat us handily already. And the more time that passes, the better these emerging technologies will become, while our own capabilities are expected to remain more or less the same.”
Rauscher has also speculated about potentially sinister applications of AI and how much power companies that wield it may be able to exert over the general public:
“Concerns regarding how powerful companies may choose to design new technologies are justified, given that their primary interest is to maximize profits for their shareholders. Many of them thrive on not-so-transparent business models that collect and then leverage data associated with users. Tomorrow’s big tech companies will leverage intelligence (via AI) and control (via robots) associated with the lives of their users. In such a world, third-party entities may know more about us than we know about ourselves. Decisions will be made on our behalf and increasingly without our awareness, and those decisions won’t necessarily be in our best interests.”
36. Claude Shannon
Welcoming our robotic overlords
Claude Shannon with his electromechanical mouse, Theseus. (DobriZheglov)
The late American mathematician Claude Shannon is known as the “father of information theory,” having published a landmark paper on the topic in 1948. Shannon’s take on the fading era of mankind’s dominance and the inevitable rise of the machines was both cynical and darkly comical:
“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”
37. James Barrat
AI will inevitably rule the world
Perhaps by virtue of their role as chroniclers and storytellers, it often falls to authors to warn us of the potential dangers of exciting new technologies. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, fears that mankind is doomed to a life of servitude in light of AI’s vastly superior intellect:
“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet.”
38. Heather Roff
AI could be used to manipulate consumers
The increasingly personalized assistive technologies promised by AI have the potential to make everything from shopping to voting a more intimate, engaged experience.
However, Heather Roff, a nonresident fellow in the Foreign Policy program at the Brookings Institution, believes these technologies could just as easily be used to manipulate how people shop, think, and live their lives:
“[Algorithms] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag. It could be very dangerous.”
39. Gray Scott
The rise of AI is inevitable
Futurist and “techno-philosopher” Gray Scott has little problem conceiving of a world in which AI has risen to dominance over its former masters. To Scott, the question of AI’s ascension is a matter of when, not if:
“Once AI becomes self-aware, the cognitive hierarchy will be transformed forever, where we humans are no longer the dominant species.”
40. Neil Jacobstein
Human stupidity is what makes AI threatening
Baidu’s Andrew Ng with Neil Jacobstein, middle. (Steve Jurvetson)
Dozens of experts have voiced concerns about the possibility of AI inheriting our flaws and biases, but few have said so as succinctly as Neil Jacobstein, chair of the artificial intelligence and robotics track at Singularity University:
“It’s not artificial intelligence I’m worried about, it’s human stupidity.”
41. Neil deGrasse Tyson
Appeasing our future masters
Neil deGrasse Tyson (right) with Bill Nye and Barack Obama (Wikimedia)
Astrophysicist Neil deGrasse Tyson is never one to shy away from controversial opinions, particularly on social media. When it comes to AI, however, Tyson isn’t taking any chances:
“Time to behave, so when Artificial Intelligence becomes our overlord, we’ve reduced the reasons for it to exterminate us all.”
42. Louis Del Monte
AI will surpass the sum of human intelligence
Science fiction authors have long been fascinated with artificial intelligence. Louis Del Monte, physicist and author of The Artificial Intelligence Revolution, believes that AI will become so intelligent in the coming decades that humans won’t even be able to fully grasp its power:
“Between 2040 and 2045, we will have developed a machine or machines that are not only equivalent to a human mind, but more intelligent than the entire human race combined.”
43. Anthony Levandowski
AI will force humanity to concede its dominance
A driverless car from the Waymo automated car program, which Levandowski worked on. (Wikimedia)
Waymo autonomous vehicle engineer and entrepreneur Anthony Levandowski created Way of the Future, the first church of artificial intelligence. He believes that with the interconnected systems of cell phones, sensors, and data centers around the world, AI will ultimately become omniscient and omnipresent, like a deity:
“What is going to be created will effectively be a god. It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
Levandowski thinks that a fundamental shift in power will occur, and that the best we can hope for is a peaceful transition:
“In the future, if something is much, much smarter, there’s going to be a transition as to who is actually in charge. What we want is the peaceful, serene transition of control of the planet from humans to whatever. And to ensure that the ‘whatever’ knows who helped it get along.”
Reshaping the workforce
44. Steve Wozniak
AI could replace ‘slow’ humans altogether
Like many of Silicon Valley’s earliest pioneers, Apple co-founder Steve “Woz” Wozniak has expressed cautious optimism about the disruptive potential of AI. But in Wozniak’s view, AI also represents a profound danger to the future of mankind, and may ultimately replace human beings altogether:
“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”
45. Kai-Fu Lee
AI will fundamentally reshape the employment market
One common theme in discussions about the potential of AI is the considerable impact it will have on the global employment market.
Chinese venture capitalist and AI expert Kai-Fu Lee highlighted how AI could affect the workforce of tomorrow in an interview with 60 Minutes:
“AI will increasingly replace repetitive jobs. Not just for blue-collar work but a lot of white-collar work. Basically chauffeurs, truck drivers, anyone who does driving for a living, their jobs will be disrupted more in the 15- to 20-year time frame. And many jobs that seem a little bit complex, chef, waiter, a lot of things, will become automated. We’ll have automated stores, automated restaurants, and all together, in 15 years, that’s going to displace about 40 percent of the jobs in the world.”
46. Brian Chesky
AI automation offers benefits to some, risks for others
Brian Chesky at Airbnb headquarters. (Kevin Krejci)
Some executives believe that no job will be safe from the efficiencies promised by a tireless robotic workforce. Brian Chesky, co-founder and CEO of Airbnb, has voiced concern about the negative impact that robotic automation will have on the lives of working people:
“I’m concerned about the concept of automation. Many jobs will be automated; a lot will be. This will have benefits for people but it also has a huge cost. I worry that ‘Made in America’ will become ‘Made by robots in America.’”
47. Reed Hastings
Developing entertainment for AI
The world of entertainment has been fascinated with the notion of intelligent computers for more than 30 years. However, while many people see AI as an exciting new frontier in home entertainment, Netflix co-founder and CEO Reed Hastings has a somewhat less optimistic outlook on AI’s future role in how we spend our leisure time. He has even gone so far as to speculate whether AI will one day become part of Netflix’s audience:
“Over twenty to fifty years, you get into some serious debate over humans. I don’t know if you can really talk about entertaining at that point. I’m not sure if in twenty to fifty years we are going to be entertaining you, or entertaining AIs.”
48. Gary Marcus
A world without human workers
Scientist and author Gary Marcus speculates that the efficiencies promised by AI will not only supplant manual workers in industries such as manufacturing, but ultimately even creative professionals:
“But a century from now, nobody will much care about how long it took, only what happened next. It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.”
49. Sam Altman
AI offers short-term gains, long-term risks
Y Combinator president and OpenAI co-chairman Sam Altman thinks that AI does represent a grave threat to humanity’s future, but that it will also present plenty of exciting investment opportunities in the meantime:
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
50. Sir Tim Berners-Lee
AI cannot be trusted to act fairly
Sir Tim Berners-Lee at Guildhall. (Wikimedia)
Some experts, including creator of the World Wide Web Sir Tim Berners-Lee, worry that the wide-scale adoption of AI in the financial sector could have disastrous consequences that would be nearly impossible to mitigate:
“So when AI starts to make decisions such as who gets a mortgage, that’s a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies. So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?”
51. Barbara J. Grosz
AI should work alongside a human workforce, not replace it
Barbara Grosz (Bengt Oberger)
While many AI researchers are excited by the possible applications of AI in fields such as healthcare and education, not everyone agrees that replacing human professionals with AI constructs is a good idea.
Barbara J. Grosz, the Higgins Professor of Natural Sciences at Harvard University and the first woman to serve as president of the Association for the Advancement of Artificial Intelligence, believes that allowing AI to completely replace human beings in specialized occupations would be a grave error:
“With regard to health care and education, I think there’s a huge ethical question for society at large. We could build those systems to complement and work with physicians and teachers, or we could try to save money by having them replace people. It would be a terrible mistake to replace people.”
52. James Vincent
Unseen algorithms are already reshaping society
To some experts, the most urgent AI-related issue is how widely the technology is being used in education, healthcare, and the criminal justice system in ways that we may not necessarily understand.
Technology writer James Vincent believes that while we must guard against AI becoming too powerful, society’s growing reliance on algorithms that we only vaguely understand is just as problematic:
“If a computer can do one-third of your job, what happens next? Do you get trained to take on new tasks, or does your boss fire you, or some of your colleagues? What if you just get a pay cut instead? Do you have the money to retrain, or will you be forced to take the hit in living standards? It’s easy to see that finding answers to these questions is incredibly challenging. And it mirrors the difficulties we have understanding other complex threats from artificial intelligence. For example, while we don’t need to worry about super-intelligent AI running amok any time soon, we do need to think about how machine learning algorithms used today in healthcare, education, and criminal justice, are making biased judgements.”