HOUSE_OVERSIGHT_016824.jpg

2.38 MB

Extraction Summary

People: 8
Organizations: 0
Locations: 0
Events: 2
Relationships: 2
Quotes: 4

Document Information

Type: Book chapter / essay (evidence document)
File Size: 2.38 MB

Summary

A page from a scientific essay or book chapter discussing the history of, and skepticism toward, the 'technological singularity.' The author describes their own work with experimentalists on building quantum computers and cites historical figures such as John von Neumann and Norbert Wiener, as well as modern figures such as Ray Kurzweil, Stephen Hawking, and Elon Musk, on the potential risks and benefits of AI and superintelligence. The document bears a House Oversight stamp, indicating that it is part of an investigation file, likely related to Jeffrey Epstein's connections to the scientific community.

People (8)

Name | Role | Context
The Author | Writer/Scientist | Discusses their own work with experimentalists on building quantum computers.
Norbert Wiener | Scientist | Referenced regarding conversations with von Neumann, work on neuroscience, and prediction failures.
John von Neumann | Mathematician/Scientist | Introduced the notion of the 'technological singularity' in the 1950s.
Ray Kurzweil | Author/Futurist | Cited for his 2005 book 'The Singularity Is Near' and belief in merging brains with superintelligence.
Stephen Hawking | Physicist | Cited as worrying that superintelligence could be malign and a threat to civilization.
Elon Musk | Entrepreneur | Cited as worrying that superintelligence could be malign and a threat to civilization.
McCulloch | Scientist | Mentioned in relation to neuroscience and deep-learning methods.
Pitts | Scientist | Mentioned in relation to neuroscience and deep-learning methods.

Timeline (2 events)

1950s | Conversations between Wiener and John von Neumann regarding the technological singularity.
2005 | Publication of Ray Kurzweil's 'The Singularity Is Near'.

Relationships (2)

John von Neumann ↔ Norbert Wiener (Professional/Intellectual): Conversations in the 1950s inspired the notion of the technological singularity.
The Author ↔ Experimentalists (Professional/Collaborative): Work together on building quantum computers.

Key Quotes (4)

"Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation."
Source
HOUSE_OVERSIGHT_016824.jpg
Quote #1
"In the 1950s, partly inspired by conversations with Wiener, John von Neumann introduced the notion of the 'technological singularity.'"
Source
HOUSE_OVERSIGHT_016824.jpg
Quote #2
"Humans can merge their brains with the superintelligence and thereby live forever."
Source
HOUSE_OVERSIGHT_016824.jpg
Quote #3
"Stephen Hawking and Elon Musk, worried that this superintelligence would prove to be malign and regarded it as the greatest existing threat to human civilization."
Source
HOUSE_OVERSIGHT_016824.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,550 characters)

powerful room-cleaning robot was a Roomba, which moved around vacuuming at random and squeaked when it got caught under the couch.

Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation. Many obstacles and some innovations can be anticipated, but more cannot. In my own work with experimentalists on building quantum computers, I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don’t know until you try.

In the 1950s, partly inspired by conversations with Wiener, John von Neumann introduced the notion of the “technological singularity.” Technologies tend to improve exponentially, doubling in power or sensitivity over some interval of time. (For example, since 1950, computer technologies have been doubling in power roughly every two years, an observation enshrined as Moore’s Law.) Von Neumann extrapolated from the observed exponential rate of technological improvement to predict that “technological progress will become incomprehensibly rapid and complicated,” outstripping human capabilities in the not too distant future. Indeed, if one extrapolates the growth of raw computing power—expressed in terms of bits and bit flips—into the future at its current rate, computers should match human brains sometime in the next two to four decades (depending on how one estimates the information-processing power of human brains).
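[Editorial note: the "two to four decades" figure follows from simple doubling arithmetic. As a rough illustration, assuming a brain performs on the order of 10^16 bit flips per second and a contemporary computer on the order of 10^11 (both figures are assumptions for this sketch, not numbers from the document), closing the gap at one doubling every two years takes about 17 doublings, or roughly 33 years:

import math

# Back-of-the-envelope sketch of the extrapolation described above.
# Both operation counts are illustrative assumptions, not values
# taken from the document.
brain_ops = 1e16        # assumed brain processing power, bit flips/sec
computer_ops = 1e11     # assumed contemporary computer, bit flips/sec
doubling_years = 2      # Moore's-Law doubling interval cited in the text

doublings = math.log2(brain_ops / computer_ops)  # doublings to close the gap
years = doublings * doubling_years

print(f"~{doublings:.0f} doublings, ~{years:.0f} years")
# -> ~17 doublings, ~33 years: inside the essay's "two to four decades,"
#    though the answer shifts with the assumed brain estimate.

The result is sensitive to the brain estimate: each extra order of magnitude adds about 6.6 doublings, i.e. roughly 13 more years.]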
The failure of the initial overly optimistic predictions of AI dampened talk about the technological singularity for a few decades, but since the 2005 publication of Ray Kurzweil’s The Singularity Is Near, the idea of technological advance leading to superintelligence is back in force. Some believers, Kurzweil included, regard this singularity as an opportunity: Humans can merge their brains with the superintelligence and thereby live forever. Others, such as Stephen Hawking and Elon Musk, worried that this superintelligence would prove to be malign and regarded it as the greatest existing threat to human civilization. Still others, including some of the contributors to the present volume, think such talk is overblown.

Wiener’s life work and his failure to predict its consequences are intimately bound up in the idea of an impending technological singularity. His work on neuroscience and his initial support of McCulloch and Pitts adumbrated the startlingly effective deep-learning methods of the present day. Over the past decade, and particularly in the last five years, such deep-learning techniques have finally exhibited what Wiener liked to call Gestalt—for example, the ability to recognize that a circle is a circle even if, when slanted sideways, it looks like an ellipse. His work on control, combined with his work on neuromuscular feedback, was significant for the development of robotics and is the inspiration for neural-based human/machine interfaces. His lapses in technological prediction, however, suggest that we should take the notion of a technological singularity with a grain of salt. The general difficulties of technological prediction and the problems specific to the development of a superintelligence should warn us against overestimating both the power and the efficacy of information processing.
The Arguments for Singularity Skepticism
No exponential increase lasts forever. An atomic explosion grows exponentially, but
HOUSE_OVERSIGHT_016824
