HOUSE_OVERSIGHT_013121.jpg


Extraction Summary

People: 3
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Book chapter / academic manuscript
File Size: 1.68 MB
Summary

This document is page 205 of a book or academic manuscript (Chapter 12), stamped as evidence by the House Oversight Committee. It discusses the engineering of ethics in Artificial General Intelligence (AGI), specifically within the 'CogPrime' architecture. The text argues that ethics cannot be an add-on module but must be integral to the design process, and outlines five key risks associated with AGI development, including systems going rogue and the moral implications of AGI 'slavery'. While Jeffrey Epstein is not named on this specific page, the document is likely part of the investigation into his funding of scientific research and AI projects.

People (3)

Stephan Vladimir Bugaj (Co-author): Listed as a co-author of Chapter 12 under the title.
Joel Pitt (Co-author): Listed as a co-author of Chapter 12 under the title.
Unnamed Primary Author (Author): Implied by the phrase 'Co-authored with...'; likely Ben Goertzel, based on the subject matter ('CogPrime').

Organizations (2)

CogPrime: An AGI (Artificial General Intelligence) project or architecture discussed in the text.
House Oversight Committee: Implied by the Bates stamp 'HOUSE_OVERSIGHT_013121' at the bottom right.

Relationships (1)

Stephan Vladimir Bugaj co-authored with Joel Pitt: Listed together as co-authors under the chapter title.

Key Quotes (4)

"In the CogPrime approach, ethics is not a particularly distinct topic, being richly interwoven with cognition and education and other aspects of the AGI project."
Source
HOUSE_OVERSIGHT_013121.jpg
Quote #1
"Risks posed by AGI systems with initially well-defined and sensible ethical systems eventually going rogue – an especially big risk if these systems are more generally intelligent than humans, and possess the capability to modify their own source code"
Source
HOUSE_OVERSIGHT_013121.jpg
Quote #2
"ethicalness is probably not something that one can meaningfully tack onto an AGI system at the end, after developing the rest – it is likely infeasible to architect an intelligent agent and then add on an 'ethics module.'"
Source
HOUSE_OVERSIGHT_013121.jpg
Quote #3
"AGI rights: in what circumstances does using an AGI as a tool or servant constitute 'slavery'"
Source
HOUSE_OVERSIGHT_013121.jpg
Quote #4
