HOUSE_OVERSIGHT_013122.jpg


Extraction Summary

People: 4
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Academic book/paper page (evidence in a House Oversight investigation)
File Size: 2.33 MB
Summary

This document is page 206 from an academic text (likely a book or paper titled 'The Engineering and Development of Ethics') discussing Artificial General Intelligence (AGI) and the concept of 'Friendly AI.' It reviews theories by researchers Eliezer Yudkowsky, Ben Goertzel, Hugo de Garis, and Mark Waser regarding the risks and ethics of superintelligent systems. The document bears the stamp 'HOUSE_OVERSIGHT_013122,' indicating it was produced as evidence for the House Oversight Committee, likely in relation to investigations into Jeffrey Epstein's funding of science and transhumanist research.

People (4)

Name | Role | Context
Eliezer Yudkowsky | AI Researcher/Theorist | Cited for introducing the term 'Friendly AI' and discussing 'provably Friendly' architectures.
Goertzel (likely Ben Goertzel) | AI Researcher | Cited regarding the definition of Friendly AI in terms of the values of Joy, Growth, and Freedom.
Hugo de Garis | AI Researcher | Cited as arguing that Friendly AI is essentially impossible.
Mark Waser | AI Researcher | Cited as arguing that Friendly AI is inevitable because intelligence correlates with morality.

Organizations (2)

Name | Type | Context
House Oversight Committee | Government body (U.S. congressional committee) | Source of the document production (implied by the footer stamp).
CogPrime | AGI architecture/project | An AGI architecture or project mentioned in the text.

Relationships (1)

Goertzel | Professional/Academic | Eliezer Yudkowsky
Context: Both are cited as key figures in the definition and theory of 'Friendly AI' within the text.

Key Quotes (4)

"Hypothetically, an AGI with superhuman intelligence and capability could dispense with humanity altogether – i.e. posing an 'existential risk'"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #1
"Eliezer Yudkowsky has introduced the term 'Friendly AI', to refer to advanced AGI systems that act with human benefit in mind"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #2
"Goertzel [Goe06b] has sought to clarify the notion in terms of three core values of Joy, Growth and Freedom."
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #3
"Some (for example, Hugo de Garis, [DG05]), have argued that Friendly AI is essentially an impossibility"
Source
HOUSE_OVERSIGHT_013122.jpg
Quote #4
