HOUSE_OVERSIGHT_016834.jpg (2.21 MB)

Extraction Summary

People: 6
Organizations: 3
Locations: 3
Events: 3
Relationships: 1
Quotes: 6

Document Information

Type: Draft manuscript / academic essay page
File Size: 2.21 MB
Summary

This document appears to be page 31 of a manuscript or report discussing the existential risks of Artificial Intelligence. The text argues against common dismissals of AI danger (such as the ability to 'switch it off' or that it is 'impossible'), using historical analogies involving nuclear physics (Rutherford and Szilard) and hypothetical scenarios. The document bears a 'HOUSE_OVERSIGHT' stamp, indicating it is part of evidence collected during a congressional investigation, likely related to Jeffrey Epstein's ties to the scientific community and academia.

People (6)

Name | Role | Context
Ernest Rutherford | Physicist | Quoted regarding his skepticism of nuclear power in 1933.
Leo Szilard | Physicist | Credited with inventing the neutron-induced nuclear chain reaction shortly after Rutherford's dismissal of the idea.
Andrew Ng | AI Researcher | Quoted comparing worrying about AI risks to worrying about overpopulation on Mars.
Jeff Hawkins | AI Researcher | Cited in footnote 2 regarding the ability to switch off computer networks.
Peter Stone | AI Researcher | Cited in footnote 3 as lead author of the AI100 report.
Alan Turing | Mathematician/Computer Scientist | Mentioned as a historical reference point for AI research.

Organizations (3)

Name | Type | Context
Columbia University | University | Location where Leo Szilard demonstrated the nuclear chain reaction.
Stanford University | University | Sponsor of the AI100 report mentioned in footnote 3.
Recode | News outlet | Cited in the footnote 2 URL.

Timeline (3 events)

Date | Event | Location
September 11, 1933 | Ernest Rutherford states that atomic power is "moonshine". | Unknown
September 12, 1933 | Leo Szilard invents the neutron-induced nuclear chain reaction. | Unknown
2067 | Hypothetical date used in an analogy regarding an asteroid collision. | Earth

Locations (3)

Location | Context
Columbia University | Laboratory where Szilard demonstrated the chain reaction.
Mars | Used in an analogy by Andrew Ng.
Earth | Used in an asteroid collision analogy.

Relationships (1)

Leo Szilard | Scientific peers/contemporaries | Ernest Rutherford
Szilard's invention contradicted Rutherford's statement made one day prior.

Key Quotes (6)

All quotes are extracted from HOUSE_OVERSIGHT_016834.jpg:

1. "Anyone who expects a source of power from the transformation of these atoms is talking moonshine."
2. "We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."
3. "It’s like worrying about overpopulation on Mars."
4. "Yes, I am driving toward a cliff—in fact, I’m pressing the pedal to the metal! But trust me, we’ll run out of gas before we get there!"
5. "Some intelligent machines will be virtual, meaning they will exist and act solely within computer networks."
6. "Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible."

Full Extracted Text

Complete text extracted from the document (3,482 characters)

imperfectly specified objectives conflicting with our own—whose motivation to preserve their existence in order to achieve those objectives may be insuperable.
1001 Reasons to Pay No Attention
Objections have been raised to these arguments, primarily by researchers within the AI community. The objections reflect a natural defensive reaction, coupled perhaps with a lack of imagination about what a superintelligent machine could do. None hold water on closer examination. Here are some of the more common ones:
• Don’t worry, we can just switch it off.² This is often the first thing that pops into a layperson’s head when considering risks from superintelligent AI—as if a superintelligent entity would never think of that. This is rather like saying that the risk of losing to Deep Blue or AlphaGo is negligible—all one has to do is make the right moves.
• Human-level or superhuman AI is impossible.³ This is an unusual claim for AI researchers to make, given that, from Turing onward, they have been fending off such claims from philosophers and mathematicians. The claim, which is backed by no evidence, appears to concede that if superintelligent AI were possible, it would be a significant risk. It’s as if a bus driver, with all of humanity as passengers, said, “Yes, I am driving toward a cliff—in fact, I’m pressing the pedal to the metal! But trust me, we’ll run out of gas before we get there!” The claim represents a foolhardy bet against human ingenuity. We have made such bets before and lost. On September 11, 1933, renowned physicist Ernest Rutherford stated, with utter confidence, “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” On September 12, 1933, Leo Szilard invented the neutron-induced nuclear chain reaction. A few years later he demonstrated such a reaction in his laboratory at Columbia University. As he recalled in a memoir: “We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.”
• It’s too soon to worry about it. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how much time is needed to devise and implement a solution that avoids the risk. For example, if we were to detect a large asteroid predicted to collide with the Earth in 2067, would we say, “It’s too soon to worry”? And if we consider the global catastrophic risks from climate change predicted to occur later in this century, is it too soon to take action to prevent them? On the contrary, it may be too late. The relevant timescale for human-level AI is less predictable, but, like nuclear fission, it might arrive considerably sooner than expected. One variation on this argument is Andrew Ng’s statement that it’s “like worrying about overpopulation on Mars.” This appeals to a convenient analogy: Not only is the
² AI researcher Jeff Hawkins, for example, writes, “Some intelligent machines will be virtual, meaning they will exist and act solely within computer networks. . . . It is always possible to turn off a computer network, even if painful.” https://www.recode.net/2015/3/2/11559576/.
³ The AI100 report (Peter Stone et al.), sponsored by Stanford University, includes the following: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.” https://ai100.stanford.edu/2016-report.
31
HOUSE_OVERSIGHT_016834
