HOUSE_OVERSIGHT_016877.jpg

Extraction Summary

People: 4
Organizations: 0
Locations: 4
Events: 0
Relationships: 1
Quotes: 3

Document Information

Type: Essay / book chapter / scientific paper
File Size: 2.39 MB
Summary

This document is page 74 of a larger work (essay or book) titled 'Calibrating the AI-Risk Message.' It discusses the existential risks of Artificial Intelligence, arguing that superintelligent AI poses an 'environmental risk' to biological life rather than just social or economic risks. The text references Norbert Wiener, Eliezer Yudkowsky, Douglas Adams, and Eric Drexler. The document bears the Bates number HOUSE_OVERSIGHT_016877, indicating it was part of a document production for a House Oversight Committee investigation, likely related to Jeffrey Epstein's known associations with scientists and transhumanists.

People (4)

Name (Role): Context
Wiener (Scientist / Author, referenced): Referenced regarding his warnings about the social risks of machine-generated decisions (likely Norbert Wiener).
Yudkowsky (AI Researcher / Author, referenced): Quoted on the scale of impact of machine superintelligence (likely Eliezer Yudkowsky).
Douglas Adams (Author, referenced): Quoted for his "Parable of the Sentient Puddle."
Eric Drexler (Inventor of Nanotechnology): Mentioned as recently popularizing a concept (sentence cut off in the source).

Locations (4)

Location: Context
(name not extracted): Mentioned in hypothetical scenarios regarding environmental risk.
(name not extracted): Mentioned in a metaphor by Yudkowsky.
US: Mentioned in context of US–Chinese trade patterns.
(name not extracted): Mentioned in context of US–Chinese trade patterns.

Relationships (1)

Author → Yudkowsky (intellectual/citation): Author quotes Yudkowsky's blog post.

Key Quotes (3)

"[A]sking about the effect of machine superintelligence on the conventional human labor market is like asking how US–Chinese trade patterns would be affected by the Moon crashing into the Earth."
Source
— Yudkowsky (Quoted in a blog post regarding the parochial nature of current AI debates.)
HOUSE_OVERSIGHT_016877.jpg
Quote #1
"the moment he disappears catches him rather by surprise."
Source
— Douglas Adams (From the Parable of the Sentient Puddle.)
HOUSE_OVERSIGHT_016877.jpg
Quote #2
"In my view, the central point of the AI risk is that superintelligent AI is an environmental risk."
Source
— Author (The thesis statement of the page.)
HOUSE_OVERSIGHT_016877.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,595 characters)

Calibrating the AI-Risk Message
While uncannily prescient, the AI-risk message from the original dissidents has a giant
flaw—as does the version dominating current public discourse: Both considerably
understate the magnitude of the problem as well as AI’s potential upside. The message,
in other words, does not adequately convey the stakes of the game.
Wiener primarily warned of the social risks—risks stemming from careless
integration of machine-generated decisions with governance processes and misuse (by
humans) of such automated decision making. Likewise, the current “serious” debate
about AI risks focuses mostly on things like technological unemployment or biases in
machine learning. While such discussions can be valuable and address pressing short-
term problems, they are also stunningly parochial. I’m reminded of Yudkowsky’s quip in
a blog post: “[A]sking about the effect of machine superintelligence on the conventional
human labor market is like asking how US–Chinese trade patterns would be affected by
the Moon crashing into the Earth. There would indeed be effects, but you’d be missing
the point.”
In my view, the central point of the AI risk is that superintelligent AI is an
environmental risk. Allow me to explain.
In his “Parable of the Sentient Puddle,” Douglas Adams describes a puddle that
wakes up in the morning and finds himself in a hole that fits him “staggeringly well.”
From that observation, the puddle concludes that the world must have been made for him.
Therefore, writes Adams, “the moment he disappears catches him rather by surprise.” To
assume that AI risks are limited to adverse social developments is to make a similar
mistake. The harsh reality is that the universe was not made for us; instead, we are fine-
tuned by evolution to a very narrow range of environmental parameters. For instance, we
need the atmosphere at ground level to be roughly at room temperature, at about 100 kPa
pressure, and have a sufficient concentration of oxygen. Any disturbance, even
temporary, of this precarious equilibrium and we die in a matter of minutes.
Silicon-based intelligence does not share such concerns about the environment.
That’s why it’s much cheaper to explore space using machine probes rather than “cans of
meat.” Moreover, Earth’s current environment is almost certainly suboptimal for what a
superintelligent AI will greatly care about: efficient computation. Hence we might find
our planet suddenly going from anthropogenic global warming to machinogenic global
cooling. One big challenge that AI safety research needs to deal with is how to constrain
a potentially superintelligent AI—an AI with a much larger footprint than our own—from
rendering our environment uninhabitable for biological life-forms.
Interestingly, given that the most potent sources both of AI research and AI-risk
dismissals are under big corporate umbrellas, if you squint hard enough the “AI as an
environmental risk” message looks like the chronic concern about corporations skirting
their environmental responsibilities.
Conversely, the worry about AI’s social effects also misses most of the upside.
It’s hard to overemphasize how tiny and parochial the future of our planet is, compared
with the full potential of humanity. On astronomical timescales, our planet will be gone
soon (unless we tame the sun, also a distinct possibility) and almost all the resources—
atoms and free energy—to sustain civilization in the long run are in deep space.
Eric Drexler, the inventor of nanotechnology, has recently been popularizing the
74
HOUSE_OVERSIGHT_016877
