HOUSE_OVERSIGHT_016868.jpg


Extraction Summary

People: 1
Organizations: 1
Locations: 2
Events: 2
Relationships: 0
Quotes: 5

Document Information

Type: Manuscript/book excerpt (likely from a scientific or philosophical text on AI)
File Size: 2.37 MB
Summary

This document appears to be page 65 of a book or manuscript discussing the existential risks and economic implications of Artificial General Intelligence (AGI). It references the 2015 Puerto Rico AI conference and the 2017 Asilomar AI Principles, arguing that economic incentives, human curiosity, and the desire for longevity are driving humanity toward AGI despite the risk of human obsolescence. The page bears a House Oversight Committee Bates stamp, indicating it is part of the evidence files from the investigation into Jeffrey Epstein, who was known to fund and associate with scientists in the AI field.

People (1)

Name: N/A
Role: N/A
Context: No specific individuals are named on this page. The text references generic groups such as 'AI industry leaders', 'AI...

Organizations (1)

Name: House Oversight Committee
Type: U.S. congressional committee
Context: Identified via the Bates stamp 'HOUSE_OVERSIGHT' at the bottom of the page.

Timeline (2 events)

2015: Puerto Rico AI conference. Location: Puerto Rico. Participants: AI researchers.
2017: Signing of the Asilomar AI Principles. Location: Asilomar (implied). Participants: AI industry leaders and over a thousand AI researchers.

Locations (2)

Location: Puerto Rico
Context: Location of the 2015 AI conference mentioned in the text.

Location: Asilomar, California (implied)
Context: Refers to the 2017 Asilomar AI Principles (location implied as Asilomar Conference Grounds, California).

Key Quotes (5)

Quote #1: "Why we’re rushing to make ourselves obsolete, and why we avoid talking about it" (Source: HOUSE_OVERSIGHT_016868.jpg)
Quote #2: "The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines" (Source: HOUSE_OVERSIGHT_016868.jpg)
Quote #3: "Sheer scientific curiosity without profit motive contributed to the discovery of nuclear weapons" (Source: HOUSE_OVERSIGHT_016868.jpg)
Quote #4: "Curiosity killed the cat" (Source: HOUSE_OVERSIGHT_016868.jpg)
Quote #5: "We will no longer be needed for anything, because all jobs can be done" (Source: HOUSE_OVERSIGHT_016868.jpg)

Full Extracted Text

Complete text extracted from the document (3,554 characters)

And who are the “us”? Who should deem “such decisions . . . acceptable”? Even if future powers decide to help humans survive and flourish, how will we find meaning and purpose in our lives if we aren’t needed for anything?
The debate about the societal impact of AI has changed dramatically in the last few years. In 2014, what little public talk there was of AI risk tended to be dismissed as Luddite scaremongering, for one of two logically incompatible reasons:
(1) AGI was overhyped and wouldn’t happen for at least another century.
(2) AGI would probably happen sooner but was virtually guaranteed to be beneficial.
Today, talk of AI’s societal impact is everywhere, and work on AI safety and AI ethics has moved into companies, universities, and academic conferences. The controversial position on AI safety research is no longer to advocate for it but to dismiss it. Whereas the open letter that emerged from the 2015 Puerto Rico AI conference (and helped mainstream AI safety) spoke only in vague terms about the importance of keeping AI beneficial, the 2017 Asilomar AI Principles (see below) had real teeth: They explicitly mention recursive self-improvement, superintelligence, and existential risk, and were signed by AI industry leaders and over a thousand AI researchers from around the world.
Nonetheless, most discussion is limited to the near-term impact of narrow AI and the broader community pays only limited attention to the dramatic transformations that AGI may soon bring to life on Earth. Why?
Why we’re rushing to make ourselves obsolete, and why we avoid talking about it
First of all, there’s simple economics. Whenever we figure out how to make another type of human work obsolete by building machines that do it better and cheaper, most of society gains: Those who build and use the machines make profits, and consumers get more affordable products. This will be as true of future investor AGIs and scientist AGIs as it was of weaving machines, excavators, and industrial robots. In the past, displaced workers usually found new jobs, but this basic economic incentive will remain even if that is no longer the case. The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines, so anyone claiming that “people will always find new well-paying jobs” is in effect claiming that AI researchers will fail to build AGI.
Second, Homo sapiens is by nature curious, which will motivate the scientific quest for understanding intelligence and developing AGI even without economic incentives. Although curiosity is one of the most celebrated human attributes, it can cause problems when it fosters technology we haven’t yet learned how to manage wisely. Sheer scientific curiosity without profit motive contributed to the discovery of nuclear weapons and tools for engineering pandemics, so it’s not unthinkable that the old adage “Curiosity killed the cat” will turn out to apply to the human species as well.
Third, we’re mortal. This explains the near unanimous support for developing new technologies that help us live longer, healthier lives, which strongly motivates current AI research. AGI can clearly aid medical research even more. Some thinkers even aspire to near immortality via cyborgization or uploading.
We’re thus on the slippery slope toward AGI, with strong incentives to keep sliding downward, even though the consequence will by definition be our economic obsolescence. We will no longer be needed for anything, because all jobs can be done
65
HOUSE_OVERSIGHT_016868
