HOUSE_OVERSIGHT_016875.jpg

2.23 MB

Extraction Summary

People: 5
Organizations: 0
Locations: 3
Events: 0
Relationships: 1
Quotes: 6

Document Information

Type: Essay / article excerpt (House Oversight Committee evidence)
File Size: 2.23 MB
Summary

This document appears to be page 72 of a larger text, stamped with 'HOUSE_OVERSIGHT_016875', indicating it is part of an evidentiary submission to the House Oversight Committee. The text is an essay or chapter discussing the existential risks of Artificial Intelligence, specifically the 'Control Problem,' drawing parallels to biological evolution. It references historical figures like Turing, Wiener, and Good, and argues that humanity is facing the end of the 'human-brain regime' as AI advances.

People (5)

Turing (Computer Scientist / Mathematician): Referenced for his predictions regarding superhuman AI and the 'machine thinking method'.
Wiener (Mathematician / Philosopher): Referenced alongside Turing and Good regarding original warnings about AI.
Good (Mathematician / Cryptologist): Referenced alongside Turing and Wiener regarding original warnings about AI.
Unnamed AI Researcher (Leading AI Researcher): Confessed to the author that he would be relieved if human-level AI were impossible to create.
Author (Writer / Researcher): First-person narrator discussing AI risk and the 'Control Problem'.

Locations (3)

Earth: Context of the 'human-brain regime'.
Solar System: Extension of the argument regarding the 'human-brain regime'.
Entire Universe: Extension of the argument regarding the 'human-brain regime'.

Relationships (1)

Author <-> Leading AI Researcher (Professional / Confidant): Researcher 'confessed to me' regarding fears/hopes about AI development.

Key Quotes (6)

Quote #1: "Evolution’s Fatal Mistake"
Quote #2: "The planet has gone from producing forests to producing cities."
Quote #3: "Look around you—you’re witnessing the final decades of a hundred-thousand-year regime."
Quote #4: "One of the world’s leading AI researchers recently confessed to me that he would be greatly relieved to learn that human-level AI was impossible for us to create."
Quote #5: "Imagine an AI developer being stopped in his tracks because he couldn’t manage to adjust the font size on his computer!"
Quote #6: "In that sense, evolution has fallen victim to its own Control Problem."

Source (all quotes): HOUSE_OVERSIGHT_016875.jpg

Full Extracted Text

Complete text extracted from the document (3,368 characters)

its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.”) Apparently, the original dissidents promulgating the AI-risk message were the AI pioneers themselves!
Evolution’s Fatal Mistake
There have been many arguments, some sophisticated and some less so, for why the Control Problem is real and not some science-fiction fantasy. Allow me to offer one that illustrates the magnitude of the problem:
For the last hundred thousand years, the world (meaning the Earth, but the argument extends to the solar system and possibly even to the entire universe) has been in the human-brain regime. In this regime, the brains of Homo sapiens have been the most sophisticated future-shaping mechanisms (indeed, some have called them the most complicated objects in the universe). Initially, we didn’t use them for much beyond survival and tribal politics in a band of foragers, but now their effects are surpassing those of natural evolution. The planet has gone from producing forests to producing cities.
As predicted by Turing, once we have superhuman AI (“the machine thinking method”), the human-brain regime will end. Look around you—you’re witnessing the final decades of a hundred-thousand-year regime. This thought alone should give people some pause before they dismiss AI as just another tool. One of the world’s leading AI researchers recently confessed to me that he would be greatly relieved to learn that human-level AI was impossible for us to create.
Of course, it might still take us a long time to develop human-level AI. But we have reason to suspect that this is not the case. After all, it didn’t take long, in relative terms, for evolution—the blind and clumsy optimization process—to create human-level intelligence once it had animals to work with. Or multicellular life, for that matter: Getting cells to stick together seems to have been much harder for evolution to accomplish than creating humans once there were multicellular organisms. Not to mention that our level of intelligence was limited by such grotesque factors as the width of the birth canal. Imagine an AI developer being stopped in his tracks because he couldn’t manage to adjust the font size on his computer!
There’s an interesting symmetry here: In fashioning humans, evolution created a system that is, at least in many important dimensions, a more powerful planner and optimizer than evolution itself is. We are the first species to understand that we’re the product of evolution. Moreover, we’ve created many artifacts (radios, firearms, spaceships) that evolution would have little hope of creating. Our future, therefore, will be determined by our own decisions and no longer by biological evolution. In that sense, evolution has fallen victim to its own Control Problem.
We can only hope that we’re smarter than evolution in that sense. We are smarter, of course, but will that be enough? We’re about to find out.
The Present Situation
So here we are, more than half a century after the original warnings by Turing, Wiener, and Good, and a decade after people like me started paying attention to the AI-risk message. I’m glad to see that we’ve made a lot of progress in confronting this issue, but we’re definitely not there yet. AI risk, although no longer a taboo topic, is not yet fully
