This document is page 206 of an academic text, likely a book or paper titled 'The Engineering and Development of Ethics', discussing Artificial General Intelligence (AGI) and the concept of 'Friendly AI'. It reviews theories by the researchers Eliezer Yudkowsky, Ben Goertzel, Hugo de Garis, and Mark Waser regarding the risks and ethics of superintelligent systems. The page bears the production stamp 'HOUSE_OVERSIGHT_013122', indicating it was produced as evidence to the House Oversight Committee, likely in connection with investigations into Jeffrey Epstein's funding of science and transhumanist research.
| Name | Role | Context |
|---|---|---|
| Eliezer Yudkowsky | AI Researcher/Theorist | Cited for introducing the term 'Friendly AI' and discussing 'provably Friendly' architectures. |
| Ben Goertzel | AI Researcher | Cited regarding the definition of Friendly AI in terms of the three core values of Joy, Growth, and Freedom. |
| Hugo de Garis | AI Researcher | Cited as arguing that Friendly AI is essentially impossible. |
| Mark Waser | AI Researcher | Cited as arguing that Friendly AI is inevitable because intelligence correlates with morality. |
| Name | Type | Context |
|---|---|---|
| House Oversight Committee | Government body | Source of the document production, implied by the footer stamp. |
| CogPrime | AGI architecture | An AGI architecture or project mentioned in the text. |
"Hypothetically, an AGI with superhuman intelligence and capability could dispense with humanity altogether – i.e. posing an 'existential risk'"Source
"Eliezer Yudkowsky has introduced the term 'Friendly AI', to refer to advanced AGI systems that act with human benefit in mind"Source
"Goertzel [Goe06b] has sought to clarify the notion in terms of three core values of Joy, Growth and Freedom."Source
"Some (for example, Hugo de Garis, [DG05]), have argued that Friendly AI is essentially an impossibility"Source