HOUSE_OVERSIGHT_016874.jpg

2.12 MB

Extraction Summary

People: 6
Organizations: 6
Locations: 3
Events: 3
Relationships: 3
Quotes: 3

Document Information

Type: Manuscript / essay / memoir (evidence in house oversight investigation)
File Size: 2.12 MB
Summary

This document appears to be a page from a manuscript or essay (likely by Skype co-founder Jaan Tallinn, given the Estonia and Skype references) discussing the existential risks of Artificial Intelligence. Included in House Oversight documents, the text draws parallels between political dissidence in Estonia and the 'dissident' warning of AI risk, citing figures like Eliezer Yudkowsky, Bill Joy, Alan Turing, and I.J. Good. The page focuses on the author's realization of AI dangers and their failed initial attempt to convince their Skype colleagues of the threat.

People (6)

The Narrator (Author): Refers to 'myself included' regarding Estonia and mentions 'my Skype colleagues'; context implies this is likely Jaan Tallinn.
Yudkowsky (AI Researcher/Blogger): Author of a blog about AI risk; met with the narrator in California.
Bill Joy (Co-founder/Chief Scientist): Co-founder of Sun Microsystems; author of the Wired article 'Why the Future Doesn't Need Us'.
Alan Turing (Computer Scientist): Quoted regarding AI taking control (1951 lecture).
I. J. Good (Mathematician/Cryptologist): Bletchley Park colleague of Turing; quoted regarding ultraintelligent machines.
Norbert Wiener (Mathematician/Author): Author of 'The Human Use of Human Beings'; hinted at the 'Control Problem'.

Organizations (6)

Skype: Narrator mentions trying to interest his 'Skype colleagues' in AI risk.
Wired: Magazine that published Bill Joy's article.
Sun Microsystems: Company co-founded by Bill Joy.
Bletchley Park: Workplace of Alan Turing and I. J. Good.
Academic Press: Publisher mentioned in footnote 22.
House Oversight Committee: Implied by the Bates stamp 'HOUSE_OVERSIGHT'.

Timeline (3 events)

1991: Estonia regained its independence. (Location: Estonia; participants: people of Estonia)
1994: The last Soviet troops left Estonia, referenced as 'three years later' after 1991. (Location: Estonia; participants: Soviet troops)
Undated: Meeting between the narrator and Yudkowsky. (Location: California)

Locations (3)

Estonia: Discussed in the context of regaining independence from the Soviet Union.
Eastern Bloc: Region mentioned regarding political change.
California: Location where the narrator met Yudkowsky.

Relationships (3)

The Narrator and Yudkowsky (professional/intellectual): The narrator read Yudkowsky's blog and arranged a meeting in California.
The Narrator and Skype colleagues (professional): The narrator tried to interest Skype colleagues in AI risk warnings.
Alan Turing and I. J. Good (colleagues): Good is described as 'his Bletchley Park colleague'.

Key Quotes (3)

1. "Continued progress in AI can precipitate a change of cosmic proportions—a runaway process that will likely kill everyone."
2. "Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. . . . [O]ne bot can become many, and quickly get out of control."
3. "The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

Source (all quotes): HOUSE_OVERSIGHT_016874.jpg

Full Extracted Text

Complete text extracted from the document (3,266 characters)

mainstream groups, who had more to lose, initially qualified and diluted the message,
taking positions like, “It would make sense in the long term to delegate control over local
matters.” (There were always exceptions: Some public intellectuals proclaimed the
original dissident message verbatim.) Finally, the original message—being, simply,
true—won out over its diluted versions. Estonia regained its independence in 1991, and
the last Soviet troops left three years later.
The people who took the risk and spoke the truth in Estonia and elsewhere in the
Eastern Bloc played a monumental role in the eventual outcome—an outcome that
changed the lives of hundreds of millions of people, myself included. They spoke the
truth, even as their voices trembled.
The Second Message: AI Risk
My exposure to the second revolutionary message was via Yudkowsky’s blog—the blog
that compelled me to reach out and arrange that meeting in California. The message was:
Continued progress in AI can precipitate a change of cosmic proportions—a runaway
process that will likely kill everyone. We need to put in a lot of extra effort to avoid that
outcome.
After my meeting with Yudkowsky, the first thing I did was try to interest my
Skype colleagues and close collaborators in his warning. I failed. The message was too
crazy, too dissident. Its time had not yet come.
Only later did I learn that Yudkowsky wasn’t the original dissident speaking this
particular truth. In April 2000, there was a lengthy opinion piece in Wired titled, “Why
the Future Doesn’t Need Us,” by Bill Joy, co-founder and chief scientist of Sun
Microsystems. He warned:
Accustomed to living with almost routine scientific breakthroughs, we have yet
to come to terms with the fact that the most compelling 21st-century
technologies—robotics, genetic engineering, and nanotechnology—pose a
different threat than the technologies that have come before. Specifically, robots,
engineered organisms, and nanobots share a dangerous amplifying factor: They
can self-replicate. . . . [O]ne bot can become many, and quickly get out of
control.
Apparently, Joy’s broadside caused a lot of furor but little action.
More surprising to me, though, was that the AI-risk message arose almost
simultaneously with the field of computer science. In a 1951 lecture, Alan Turing
announced: “[I]t seems probable that once the machine thinking method had started, it
would not take long to outstrip our feeble powers. . . . At some stage, therefore, we
should have to expect the machines to take control. . . .”21 A decade or so later, his
Bletchley Park colleague I. J. Good wrote, “The first ultraintelligent machine is the last
invention that man need ever make, provided that the machine is docile enough to tell us
how to keep it under control.”22 Indeed, I counted half a dozen places in The Human Use
of Human Beings where Wiener hinted at one or another aspect of the Control Problem.
(“The machine like the djinnee, which can learn and can make decisions on the basis of
21 Posthumously reprinted in Phil. Math. (3) vol. 4, 256-60 (1966).
22 Irving John Good, “Speculations concerning the first ultraintelligent machine,” Advances in Computers,
vol. 6 (Academic Press, 1965), pp. 31-88.
71
HOUSE_OVERSIGHT_016874
