HOUSE_OVERSIGHT_013122.jpg

Extraction Summary

People: 4
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Academic book/paper page (evidence in a House Oversight investigation)
File Size: 2.33 MB
Summary

This document is page 206 of an academic text; the page header identifies it as part of Chapter 12, 'The Engineering and Development of Ethics,' a discussion of Artificial General Intelligence (AGI) and the concept of 'Friendly AI.' It reviews positions taken by researchers Eliezer Yudkowsky, Ben Goertzel, Hugo de Garis, and Mark Waser on the risks and ethics of superintelligent systems. The page bears the stamp 'HOUSE_OVERSIGHT_013122,' indicating it was produced as evidence to the House Oversight Committee, likely in connection with its investigation into Jeffrey Epstein's funding of science and transhumanist research.

People (4)

Eliezer Yudkowsky (AI Researcher/Theorist): Cited for introducing the term 'Friendly AI' and discussing 'provably Friendly' architectures.
Goertzel (AI Researcher, likely Ben Goertzel): Cited for clarifying the notion of Friendly AI in terms of the three core values of Joy, Growth, and Freedom.
Hugo de Garis (AI Researcher): Cited as arguing that Friendly AI is essentially an impossibility.
Mark Waser (AI Researcher): Cited as arguing that Friendly AI is essentially inevitable because greater intelligence correlates with greater morality.

Organizations (2)

House Oversight Committee: Source of the document production (implied by the footer stamp).
CogPrime: An AGI architecture or project whose ethics are discussed in the text.

Relationships (1)

Goertzel and Eliezer Yudkowsky (Professional/Academic): Both are cited as key figures in the definition and theory of 'Friendly AI' within the text.

Key Quotes (4)

"Hypothetically, an AGI with superhuman intelligence and capability could dispense with humanity altogether – i.e. posing an 'existential risk'"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #1
"Eliezer Yudkowsky has introduced the term 'Friendly AI', to refer to advanced AGI systems that act with human benefit in mind"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #2
"Goertzel [Goe06b] has sought to clarify the notion in terms of three core values of Joy, Growth and Freedom."
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #3
"Some (for example, Hugo de Garis, [DG05]), have argued that Friendly AI is essentially an impossibility"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,704 characters)

206
12 The Engineering and Development of Ethics
constitute an intelligent system – and it's something that involves both cognitive architecture and the exploration a system does and the instruction it receives. It's a very complex matter that is richly intermixed with all the other aspects of intelligence, and here we will treat it as such.
12.2 Review of Current Thinking on the Risks of AGI
Before proceeding to outline our own perspective on AGI ethics in the context of CogPrime, we will review the main existing strains of thought on the potential ethical dangers associated with AGI. One science fiction film after another has highlighted these dangers, lodging the issue deep in our cultural awareness; unsurprisingly, much less attention has been paid to serious analysis of the risks in their various dimensions, but there is still a non-trivial literature worth paying attention to.
Hypothetically, an AGI with superhuman intelligence and capability could dispense with humanity altogether – i.e. posing an "existential risk" [Bos02]. In the worst case, an evil but brilliant AGI, perhaps programmed by a human sadist, could consign humanity to unimaginable tortures (i.e. realizing a modern version of the medieval Christian visions of hell). On the other hand, the potential benefits of powerful AGI also go literally beyond human imagination. It seems quite plausible that an AGI with massively superhuman intelligence and positive disposition toward humanity could provide us with truly dramatic benefits, such as a virtual end to material scarcity, disease and aging. Advanced AGI could also help individual humans grow in a variety of directions, including directions leading beyond "legacy humanity," according to their own taste and choice.
Eliezer Yudkowsky has introduced the term "Friendly AI", to refer to advanced AGI systems that act with human benefit in mind [Yud06]. Exactly what this means has not been specified precisely, though informal interpretations abound. Goertzel [Goe06b] has sought to clarify the notion in terms of three core values of Joy, Growth and Freedom. In this view, a Friendly AI would be one that advocates individual and collective human joy and growth, while respecting the autonomy of human choices.
Some (for example, Hugo de Garis, [DG05]), have argued that Friendly AI is essentially an impossibility, in the sense that the odds of a dramatically superhumanly intelligent mind worrying about human benefit are vanishingly small. If this is the case, then the best options for the human race would presumably be to either avoid advanced AGI development altogether, or else to fuse with AGI before it gets too strongly superhuman, so that beings-originated-as-humans can enjoy the benefits of greater intelligence and capability (albeit at the cost of sacrificing their humanity).
Others (e.g. Mark Waser [Was09]) have argued that Friendly AI is essentially inevitable, because greater intelligence correlates with greater morality. Evidence from evolutionary and human history is adduced in favor of this point, along with more abstract arguments.
Yudkowsky [Yud06] has discussed the possibility of creating AGI architectures that are in some sense "provably Friendly" – either mathematically, or else at least via very tight lines of rational verbal argumentation. However, several issues have been raised with this approach. First, it seems likely that proving mathematical results of this nature would first require dramatic advances in multiple branches of mathematics. Second, such a proof would require a formalization of the goal of "Friendliness," which is a subtler matter than it might seem [Leg06b, Leg06a].
HOUSE_OVERSIGHT_013122
