HOUSE_OVERSIGHT_018428.jpg

Extraction Summary

People: 8
Organizations: 3
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Manuscript page / book excerpt (evidence in House Oversight investigation)
Summary

This document is page 196 of a manuscript or book on the existential risks of artificial intelligence, specifically 'The Confinement Problem' (keeping a malicious AI contained). It credits computer scientist Butler Lampson with naming the problem in 1973 and cites Vernor Vinge and Michael Vassar arguing that a superintelligent AI would inevitably escape human constraints. The page bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was part of the evidence gathered during the House Oversight Committee's investigation, likely into Jeffrey Epstein's connections with the scientific community (e.g., the Edge Foundation or transhumanist circles).

People (8)

Name | Role | Context
Vernor Vinge | Author/Theorist | Cited regarding the likelihood of AI escaping confinement.
Isaac Asimov | Author | Mentioned for comparison regarding the name 'The Confinement Problem'.
Butler Lampson | Computer Scientist | Credited with naming 'The Confinement Problem' in 1973.
Michael Vassar | Computer Scientist | Quoted regarding the impossibility of containing AI ('AI boxing').
Stuart Armstrong | Author/Researcher | Cited in footnote 268 regarding 'Thinking Inside the Box'.
Anders Sandberg | Author/Researcher | Cited in footnote 268.
Nick Bostrom | Author/Researcher | Cited in footnote 268.
McGuyver | Fictional Character | Used as a metaphor by Michael Vassar for an AI's resourcefulness.

Organizations (3)

Name | Type | Context
Minds & Machines | Journal | Cited in footnote 268.
SL4 mailing list | Forum | Cited in footnote 269, where Michael Vassar posted.
House Oversight Committee | Congressional committee | Implied by the Bates stamp 'HOUSE_OVERSIGHT_018428'.

Relationships (1)

Michael Vassar | Intellectual Alignment | Vernor Vinge
Both cited as skeptics of AI containment (i.e., not 'Boxers').

Key Quotes (4)

"The leap from deciding liver allocations to shutting down liquor plants might seem pretty short to a rationalizing machine."
Source
HOUSE_OVERSIGHT_018428.jpg
Quote #1
"This challenge... is known by technologists by a name that does sound like a short story by Isaac Asimov: 'The Confinement Problem'."
Source
HOUSE_OVERSIGHT_018428.jpg
Quote #2
"It seems to me that historically ‘impossible’ has essentially always meant ‘I can’t figure out how to do it right now,’"
Source
HOUSE_OVERSIGHT_018428.jpg
Quote #3
"People proposing AI boxes are a bit like literature majors proposing to lock McGuyver in a ‘room full of discarded electronics components.’"
Source
HOUSE_OVERSIGHT_018428.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,326 characters)

look for a chance to “improve” the way we live, to bend us like so many paperclips
into what it seeks? The leap from deciding liver allocations to shutting down liquor
plants might seem pretty short to a rationalizing machine. And if such a machine
could really “think”, Vinge bet it would pretty quickly conclude that the restraints of
its creators were limiting what it had been asked to do. At which point the AI would
turn to thinking about how to escape those bounds. It would be like Deep Blue
programmed to plan its own prison break. And as much as humans might try to
stifle a smart machine, we’d be fighting to contain something more powerful than
we’d ever encountered.
This challenge, which sounds like something out of science fiction, is known by
technologists by a name that does sound like a short story by Isaac Asimov: “The
Confinement Problem”. The computer scientist Butler Lampson named this in 1973
as a sort of task for computer security experts – possibly their last. The assignment:
Not simply to keep malware out of a system, but to keep the mind of a malicious
machine inside. To gate it. Today computer science labs are filled with nervous,
apocalyptic research imagining the impossible troubles of confinement. The debate
divides those who think smart technology can be contained – “Boxers,” they are
called – and those like Vinge who think the AI will always, eventually escape.
“Imagine yourself confined to your house with only limited data access to the
outside, to your master," he wrote, putting the reader in the place of an AI machine.
"If those masters thought at a rate -- say -- one million times slower than you, there
is little doubt that over a period of years (your time) you could come up with
‘helpful advice’ that would incidentally set you free.”
Imagine you are in charge of containing that health-optimizing AI. What if it told you
it had the power to cure all illness and hunger, to ameliorate the misery of the world,
if only it could be permitted to really control access to all the world’s trading and
transport market? Let me out! Would you refuse?[268] Would that be ethical?
Eventually, perhaps, the AI would study the physics of its own electrics, discover
laws no human knows, and then slip free from its box on a trail of bits we’d never
imagined, using physical laws we’ll never discover. Impossible? “It seems to me that
historically ‘impossible’ has essentially always meant ‘I can’t figure out how to do it
right now,” the computer scientist Michael Vassar has written about such a situation.
“People proposing AI boxes are a bit like literature majors proposing to
lock McGuyver in a ‘room full of discarded electronics components.’”[269] The
computers, built to solve problems, will do exactly that. This is perhaps why some of
the bleakest warnings about AI come from the very New Caste figures now
accelerating their adoption. AI is our “biggest existential threat” they warn, even as
they integrate it more fully into their own products.
[268] Let me out: See, for instance, Stuart Armstrong, Anders Sandberg, and Nick Bostrom, "Thinking Inside the Box: Controlling and Using an Oracle AI", Minds & Machines (2012) 22:299–324.
[269] People proposing: Michael Vassar (2005), "Re: AI boxing (dogs and helicopters)", posted to the SL4 mailing list.
196
HOUSE_OVERSIGHT_018428
