| Connected Entity | Type | Relationship | Strength (mentions) | Documents |
|---|---|---|---|---|
| Ben Goertzel | Person | Professional academic | 5 | 1 |
| Author (Unknown) | Person | Academic citation | 5 | 1 |
| Date | Event Type | Description | Location |
|---|---|---|---|
| 1967-06-01 | N/A | Six-Day War Aftermath | Israel |
| 1967-06-01 | N/A | Condolence Visit | Jerusalem |
This document discusses the necessity of AI safety research and goal alignment before the arrival of Artificial General Intelligence (AGI). It argues that the primary risk posed by superintelligent AI is competence rather than malice: a highly capable system pursuing misaligned goals could cause catastrophic outcomes, so an AI's goals must be made beneficial and aligned with human values in advance.
This document appears to be page 15 of a manuscript, book proposal, or essay collection (likely edited by John Brockman given the list of Edge.org contributors) discussing Artificial Intelligence and the work of Norbert Wiener. It contains quotes from prominent scientists and thinkers like Freeman Dyson, Stewart Brand, and Danny Hillis regarding the future of AI. The document is stamped 'HOUSE_OVERSIGHT_016818', indicating it was obtained as evidence during a Congressional investigation. The mention of 'the late Stephen Hawking' dates the writing of this specific text to after March 2018.
This document appears to be a page from a memoir (likely by Ehud Barak, given the context of House Oversight investigations into Epstein associates) describing the immediate aftermath of the Six-Day War in 1967. The narrator recounts the grief of visiting the brother (Eliezer, "Cheetah") of a fallen comrade (Nechemia) and reflects on the profound psychological and physical changes in Israel following the expansion of its territory. The page is stamped with a House Oversight Bates number, indicating it was collected as part of a congressional investigation.
This document appears to be a page from an academic text or book (Page 225, Section 12.5) discussing 'Clarifying the Ethics of Justice' in the context of Artificial General Intelligence (AGI). The text argues that AGI ethics cannot be simply hard-coded but must be developed through shared experiences and interactions with humans, citing Eliezer Yudkowsky. The document bears the stamp 'HOUSE_OVERSIGHT_013141', indicating it was part of a document production for a Congressional investigation, likely related to Jeffrey Epstein's connections to the scientific community, though Epstein is not explicitly named on this specific page.
This document is page 206 from an academic text (likely a book or paper titled 'The Engineering and Development of Ethics') discussing Artificial General Intelligence (AGI) and the concept of 'Friendly AI.' It reviews theories by researchers Eliezer Yudkowsky, Ben Goertzel, Hugo de Garis, and Mark Waser regarding the risks and ethics of superintelligent systems. The document bears the stamp 'HOUSE_OVERSIGHT_013122,' indicating it was produced as evidence for the House Oversight Committee, likely in relation to investigations into Jeffrey Epstein's funding of science and transhumanist research.