HOUSE_OVERSIGHT_016835.jpg


Extraction Summary

People: 10
Organizations: 3
Locations: 1
Events: 1
Relationships: 3
Quotes: 4

Document Information

Type: Page from a report or book on artificial intelligence safety
File Size: 2.32 MB
Summary

This page rebuts common arguments dismissing the risks posed by artificial intelligence: that human-level AI is not imminent, that critics of AI are Luddites, and that any sufficiently intelligent machine will inherently have altruistic objectives. It cites figures such as Nick Bostrom, Elon Musk, and Stephen Hawking, and invokes Hume's is-ought problem and G. E. Moore's naturalistic fallacy in the context of AI ethics.

Timeline (1 event)

2015 Luddite of the Year Award

Key Quotes (4)

1. "A more apt analogy would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we'd arrived."
2. "The purpose of understanding and preventing the risks of AI is to ensure that we can realize the benefits."
3. "Bostrom, for example, writes that success in controlling AI will result in 'a civilizational trajectory that leads to a compassionate and jubilant use of humanity's cosmic endowment.'"
4. "Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives."

All quotes are sourced from HOUSE_OVERSIGHT_016835.jpg.

Full Extracted Text

Complete text extracted from the document (3,648 characters)

risk easily managed and far in the future, but also it’s extremely unlikely that we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever-more-capable AI systems. A more apt analogy would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we’d arrived.
• Human-level AI isn’t really imminent, in any case. The AI100 report, for example, assures us, “Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.” This argument simply misstates the reasons for concern, which are not predicated on imminence. In his 2014 book, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, for one, writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”
• You’re just a Luddite. It’s an odd definition of Luddite that includes Turing, Wiener, Minsky, Musk, and Gates, who rank among the most prominent contributors to technological progress in the 20th and 21st centuries.[4] Furthermore, the epithet represents a complete misunderstanding of the nature of the concerns raised and the purpose for raising them. It is as if one were to accuse nuclear engineers of Luddism if they pointed out the need for control of the fission reaction. Some objectors also use the term “anti-AI,” which is rather like calling nuclear engineers “anti-physics.” The purpose of understanding and preventing the risks of AI is to ensure that we can realize the benefits. Bostrom, for example, writes that success in controlling AI will result in “a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment”—hardly a pessimistic prediction.
• Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives.[5] (Often, the argument adds the premise that people of greater intelligence tend to have more altruistic objectives, a view that may be related to the self-conception of those making the argument.) This argument is related to Hume’s is-ought problem and G. E. Moore’s naturalistic fallacy, suggesting that somehow the machine, as a result of its intelligence, will simply perceive what is right, given its experience of the world. This is implausible; for example, one cannot perceive, in the design of a chessboard and chess pieces, the goal of checkmate; the same chessboard and pieces can be used for suicide chess, or indeed many other games still to be invented. Put another way: Where Bostrom imagines humans driven extinct by a putative robot that turns the planet into a sea of paper clips, we humans see this outcome as tragic,
[4] Elon Musk, Stephen Hawking, and others (including, apparently, the author) received the 2015 Luddite of the Year Award from the Information Technology and Innovation Foundation: https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award
[5] Rodney Brooks, for example, asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.” http://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
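Editor's note: the chessboard passage above makes a point that can be shown concretely in code: the same state description supports arbitrarily many objectives, so inspecting the state alone cannot reveal which objective is "right." The following Python sketch is an editorial illustration, not part of the source document; the board encoding and the functions material_count, standard_chess_score, and suicide_chess_score are invented for the example.

    def material_count(board):
        """Net material from White's perspective; uppercase keys are White pieces."""
        values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
        total = 0
        for piece, n in board.items():
            sign = 1 if piece.isupper() else -1
            total += sign * values.get(piece.upper(), 0) * n
        return total

    def standard_chess_score(board):
        # Standard chess heuristic: holding more material is better.
        return material_count(board)

    def suicide_chess_score(board):
        # Suicide (losing) chess: the aim is to lose your pieces,
        # so the identical position is scored with the opposite sign.
        return -material_count(board)

    # One White queen, four White pawns, two Black pawns.
    position = {"Q": 1, "P": 4, "p": 2}
    print(standard_chess_score(position))  # 11: strong for White under the chess objective
    print(suicide_chess_score(position))   # -11: weak for White under the suicide-chess objective

Both evaluators consume exactly the same position; the preference ordering comes entirely from the externally supplied objective, which is the is-ought point the text is making.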
