HOUSE_OVERSIGHT_013121.jpg


Extraction Summary

People: 3
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Book chapter / academic manuscript
File Size: 1.68 MB
Summary

This document is page 205 of a book or academic manuscript (Chapter 12), stamped as evidence by the House Oversight Committee. It discusses the engineering of ethics in Artificial General Intelligence (AGI), specifically regarding the 'CogPrime' architecture. The text argues that ethics cannot be an add-on module but must be integral to the design process, and outlines five key risks associated with AGI development, including systems going rogue or the moral implications of AGI 'slavery'. While Jeffrey Epstein is not named on this specific page, the document is likely part of the investigation into his funding of scientific research and AI projects.

People (3)

Name | Role | Context
Stephan Vladimir Bugaj | Co-author | Listed as a co-author of Chapter 12 under the title.
Joel Pitt | Co-author | Listed as a co-author of Chapter 12 under the title.
Unnamed Primary Author | Author | Implied by the phrase 'Co-authored with...' (likely Ben Goertzel, based on the subject matter 'CogPrime').

Organizations (2)

Name | Type | Context
CogPrime | AGI architecture | An AGI (Artificial General Intelligence) project or architecture discussed in the text.
House Oversight Committee | U.S. congressional committee | Implied by the Bates stamp 'HOUSE_OVERSIGHT_013121' at the bottom right.

Relationships (1)

Stephan Vladimir Bugaj | Co-authors | Joel Pitt
Context: Listed together as co-authors under the chapter title.

Key Quotes (4)

Quote #1: "In the CogPrime approach, ethics is not a particularly distinct topic, being richly interwoven with cognition and education and other aspects of the AGI project."

Quote #2: "Risks posed by AGI systems with initially well-defined and sensible ethical systems eventually going rogue – an especially big risk if these systems are more generally intelligent than humans, and possess the capability to modify their own source code"

Quote #3: "ethicalness is probably not something that one can meaningfully tack onto an AGI system at the end, after developing the rest – it is likely infeasible to architect an intelligent agent and then add on an 'ethics module.'"

Quote #4: "AGI rights: in what circumstances does using an AGI as a tool or servant constitute 'slavery'"

Source (all quotes): HOUSE_OVERSIGHT_013121.jpg

Full Extracted Text

Complete text extracted from the document (2,572 characters)

Chapter 12
The Engineering and Development of Ethics
Co-authored with Stephan Vladimir Bugaj and Joel Pitt
12.1 Introduction
Most commonly, if a work on advanced AI mentions ethics at all, it occurs in a final summary chapter, discussing in broad terms some of the possible implications of the technical ideas presented beforehand. It's no coincidence that the order is reversed here: in the case of CogPrime, AGI-ethics considerations played a major role in the design process ... and thus the chapter on ethics occurs near the beginning rather than the end. In the CogPrime approach, ethics is not a particularly distinct topic, being richly interwoven with cognition and education and other aspects of the AGI project.
The ethics of advanced AGI is a complex issue with multiple aspects. Among the many issues are:
1. Risks posed by the possibility of human beings using AGI systems for evil ends
2. Risks posed by AGI systems created without well-defined ethical systems
3. Risks posed by AGI systems with initially well-defined and sensible ethical systems eventually going rogue – an especially big risk if these systems are more generally intelligent than humans, and possess the capability to modify their own source code
4. The ethics of experimenting on AGI systems when one doesn't understand the nature of their experience
5. AGI rights: in what circumstances does using an AGI as a tool or servant constitute "slavery"?
In this chapter we will focus mainly (though not exclusively) on the question of how to create an AGI with a rational and beneficial ethical system. After a somewhat wide-ranging discussion, we will conclude with eight general points that we believe should be followed in working toward "Friendly AGI" – most of which have to do, not with the internal design of the AGI, but with the way the AGI is taught and interfaced with the real world.
While most of the particulars discussed in this book have nothing to do with ethics, it's important for the reader to understand that AGI-ethics considerations have played a major role in many of our design decisions, underlying much of the technical contents of the book. As the materials in this chapter should make clear, ethicalness is probably not something that one can meaningfully tack onto an AGI system at the end, after developing the rest – it is likely infeasible to architect an intelligent agent and then add on an "ethics module." Rather, ethics is something that has to do with all the different memory systems and cognitive processes that
205
HOUSE_OVERSIGHT_013121
