HOUSE_OVERSIGHT_016832.jpg

Extraction Summary

People: 8
Organizations: 2
Locations: 1
Events: 2
Relationships: 2
Quotes: 3

Document Information

Type: Essay / academic paper / book chapter
File Size: 1.84 MB
Summary

This document appears to be a page from an essay or book chapter titled 'The Purpose Put Into The Machine' by AI expert Stuart Russell. The text analyzes the historical warnings of Norbert Wiener regarding artificial intelligence, specifically the danger of machines executing objectives that do not align with human desires (the 'djinnee in a bottle' problem). While the text itself is academic, the footer 'HOUSE_OVERSIGHT_016832' indicates this document was part of the House Oversight Committee's investigation, likely regarding Jeffrey Epstein's cultivation of relationships with prominent scientists and academics.

People (8)

Stuart Russell (Author): Professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley; coauthor (with Peter Norvig) of 'Artificial Intelligence: A Modern Approach'.
Peter Norvig (Coauthor): Mentioned as Stuart Russell's coauthor on 'Artificial Intelligence: A Modern Approach'.
Norbert Wiener (Subject): Author of 'The Human Use of Human Beings' (1950); his views on AI and machine control are the central focus of the text.
Arthur Samuel (Subject): Creator of a checker-playing program cited as an example of machine learning.
Elon Musk (Cited Observer): Cited as an observer of existential risk from superintelligent AI.
Bill Gates (Cited Observer): Cited as an observer of existential risk from superintelligent AI.
Stephen Hawking (Cited Observer): Cited as an observer of existential risk from superintelligent AI.
Nick Bostrom (Cited Observer): Cited as an observer of existential risk from superintelligent AI.

Organizations (2)

UC Berkeley (academic institution): Affiliation of author Stuart Russell.
Science (academic journal): Journal in which Norbert Wiener published 'Some Moral and Technical Consequences of Automation'.

Timeline (2 events)

1950: Publication of 'The Human Use of Human Beings' by Norbert Wiener.
1960s and 1970s: Period in which the prevailing theoretical notion of intelligence was the capacity for logical reasoning.

Locations (1)

UC Berkeley: Academic institution associated with the author, Stuart Russell.

Relationships (2)

Stuart Russell -> Peter Norvig (Coauthors): He is the coauthor (with Peter Norvig) of 'Artificial Intelligence: A Modern Approach'.
Stuart Russell -> Norbert Wiener (Academic Analysis): Russell analyzes Wiener's work throughout the text.

Key Quotes (3)

"Woe to us if we let [the machine] decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us!"
Source
HOUSE_OVERSIGHT_016832.jpg
Quote #1
"we had better be quite sure that the purpose put into the machine is the purpose which we really desire."
Source
HOUSE_OVERSIGHT_016832.jpg
Quote #2
"The goal of AI research has been to understand the principles underlying intelligent behavior and to build those principles into machines that can then exhibit such behavior."
Source
HOUSE_OVERSIGHT_016832.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (2,963 characters)

THE PURPOSE PUT INTO THE MACHINE
Stuart Russell
Stuart Russell is a professor of computer science and Smith-Zadeh Professor in
Engineering at UC Berkeley. He is the coauthor (with Peter Norvig) of Artificial
Intelligence: A Modern Approach.
Among the many issues raised in Norbert Wiener’s The Human Use of Human Beings
(1950) that are currently relevant, the most significant to the AI researcher is the
possibility that humanity may cede control over its destiny to machines.
Wiener considered the machines of the near future as far too limited to exert global
control, imagining instead that machines and machine-like control systems would be
wielded by human elites to reduce the great mass of humanity to the status of “cogs and
levers and rods.” Looking further ahead, he pointed to the difficulty of correctly
specifying objectives for highly capable machines, noting
a few of the simpler and more obvious truths of life, such as that when a djinnee is
found in a bottle, it had better be left there; that the fisherman who craves a boon
from heaven too many times on behalf of his wife will end up exactly where he
started; that if you are given three wishes, you must be very careful what you wish
for.
The dangers are clear enough:
Woe to us if we let [the machine] decide our conduct, unless we have previously
examined the laws of its action, and know fully that its conduct will be carried out on
principles acceptable to us! On the other hand, the machine like the djinnee, which
can learn and can make decisions on the basis of its learning, will in no way be
obliged to make such decisions as we should have made, or will be acceptable to us.
Ten years later, after seeing Arthur Samuel’s checker-playing program learn to play
checkers far better than its creator, Wiener published “Some Moral and Technical
Consequences of Automation” in Science. In this paper, the message is even clearer:
If we use, to achieve our purposes, a mechanical agency with whose operation we
cannot efficiently interfere . . . we had better be quite sure that the purpose put into
the machine is the purpose which we really desire. . . .
In my view, this is the source of the existential risk from superintelligent AI cited in
recent years by such observers as Elon Musk, Bill Gates, Stephen Hawking, and Nick
Bostrom.
Putting Purposes Into Machines
The goal of AI research has been to understand the principles underlying intelligent
behavior and to build those principles into machines that can then exhibit such behavior.
In the 1960s and 1970s, the prevailing theoretical notion of intelligence was the capacity
for logical reasoning, including the ability to derive plans of action guaranteed to achieve
a specified goal. More recently, a consensus has emerged around the idea of a rational
HOUSE_OVERSIGHT_016832
