HOUSE_OVERSIGHT_013234.jpg

Extraction Summary

People: 0
Organizations: 5
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic text / book page (evidence file)
File Size: 2.1 MB
Summary

This document is page 318 of a technical book, from Chapter 18, 'Advanced Self-Modification: A Possible Path to Superhuman AGI'. It discusses theoretical concepts of Artificial General Intelligence (AGI), focusing on the 'CogPrime' architecture, its 'MindAgents', and the ethics of self-modifying AI systems. The page bears the Bates stamp 'HOUSE_OVERSIGHT_013234', indicating it was produced as part of a House Oversight Committee investigation, likely related to Jeffrey Epstein's funding of scientific research and AI projects.

Organizations (5)

House Oversight Committee: document marked with the HOUSE_OVERSIGHT Bates stamp
CogPrime: subject of the text regarding AGI development
MindAgents: component of the CogPrime system mentioned in the text
MOSES: mentioned in the context of learning algorithms
AtomSpace: mentioned in the context of the cognitive architecture

Key Quotes (3)

Quote #1: "In a sense, all learning is self-modification: if it doesn't modify the system's knowledge, it isn't learning!" (Source: HOUSE_OVERSIGHT_013234.jpg)

Quote #2: "It takes a rather advanced AGI system to be able to use the capabilities described in this chapter, so this is not an ethical dilemma directly faced by current AGI researchers." (Source: HOUSE_OVERSIGHT_013234.jpg)

Quote #3: "CogPrime's MindAgents provide it with an initial set of cognitive tools, with which it can learn how to interact in the world." (Source: HOUSE_OVERSIGHT_013234.jpg)

Full Extracted Text

Complete text extracted from the document (3,343 characters)

318   18 Advanced Self-Modification: A Possible Path to Superhuman AGI

only after these have achieved something close to human-level general intelligence (even if not precisely humanlike general intelligence).

Another troublesome issue regarding self-modification is that the boundary between "self-modification" and learning is not terribly rigid. In a sense, all learning is self-modification: if it doesn't modify the system's knowledge, it isn't learning! In particular, the boundary between "learning of cognitive procedures" and "profound self-modification of cognitive dynamics and structure" isn't terribly clear. There is a continuum leading from, say,

1. learning to transform a certain kind of sentence into another kind for easier comprehension, or learning to grasp a certain kind of object, to
2. learning a new inference control heuristic, specifically valuable for controlling inference about (say) spatial relationships; or, learning a new Atom type, defined as a non-obvious, judiciously chosen combination of existing ones, perhaps to represent a particular kind of frequently-occurring mid-level perceptual knowledge (see the sketch after this list), to
3. learning a new learning algorithm to augment MOSES and hillclimbing as a procedure learning algorithm, to
4. learning a new cognitive architecture in which data and procedure are explicitly identical, and there is just one new active data structure in place of the distinction between AtomSpace and MindAgents.

Where on this continuum does the "mere learning" end and the "real self-modification" start?
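
Level 2 of the continuum above describes defining a new Atom type as a judicious combination of existing ones. A minimal Python sketch of that idea follows; the Atom class, the define_composite_type helper, and the 'ShapeAbove' type are invented for illustration and are not the actual OpenCog/AtomSpace API.

    # Purely illustrative sketch; all names are hypothetical stand-ins,
    # not the real OpenCog/AtomSpace API.
    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass(frozen=True)
    class Atom:
        """A toy Atom: a typed node in a hypothetical knowledge store."""
        atom_type: str
        name: str

    def define_composite_type(
        type_name: str, parts: Tuple[str, ...]
    ) -> Callable[..., Atom]:
        """Return a constructor for a new Atom type built from existing types."""
        def construct(*components: Atom) -> Atom:
            # The composite is only valid over the declared component types.
            assert tuple(a.atom_type for a in components) == parts
            return Atom(type_name, "+".join(a.name for a in components))
        return construct

    # Usage: a mid-level perceptual pattern bundling two existing types.
    ShapeAbove = define_composite_type("ShapeAbove", ("Shape", "SpatialRelation"))
    atom = ShapeAbove(Atom("Shape", "cube"), Atom("SpatialRelation", "above-table"))
    print(atom)  # Atom(atom_type='ShapeAbove', name='cube+above-table')

The point of the sketch is only that a composite type is cheap to define once its component types exist, which fits its placement low on the continuum.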
In this chapter we consider some mechanisms for "advanced self-modification" that we believe will be useful toward the more complex end of this continuum. These are mechanisms that we strongly suspect are not needed to get a CogPrime system to human-level general intelligence. However, we also suspect that, once a CogPrime system is roughly near human-level general intelligence, it will be able to use these mechanisms to rapidly increase aspects of its intelligence in very interesting ways.

Harking back to our discussion of AGI ethics and the risks of advanced AGI in Chapter 12, these are capabilities that one should enable in an AGI system only after very careful reflection on the potential consequences. It takes a rather advanced AGI system to be able to use the capabilities described in this chapter, so this is not an ethical dilemma directly faced by current AGI researchers. On the other hand, once one does have an AGI with near-human general intelligence and advanced formal-manipulation capabilities (such as an advanced CogPrime system), there will be the option to allow it sophisticated, non-human-like methods of self-modification such as the ones described here. And the choice of whether to take this option will need to be made based on a host of complex ethical considerations, some of which we reviewed above.
18.2 Cognitive Schema Learning

We begin with a relatively near-term, down-to-earth example of self-modification: cognitive schema learning.

CogPrime's MindAgents provide it with an initial set of cognitive tools, with which it can learn how to interact in the world. One of the jobs of this initial set of cognitive tools, however, is to create better cognitive tools. One form this sort of tool-building may take is cognitive

HOUSE_OVERSIGHT_013234
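
The extracted text names MOSES and hillclimbing as procedure-learning algorithms. A minimal sketch of greedy hillclimbing over a toy parameter vector follows, assuming an invented score function and mutation operator; nothing here is CogPrime's actual mechanism.

    # Purely illustrative: greedy hillclimbing of the generic kind the text
    # names alongside MOSES. The task and all names are hypothetical.
    import random

    def hillclimb(score, mutate, seed, steps=2000):
        """Keep a candidate; accept mutations that do not lower its score."""
        best, best_score = seed, score(seed)
        for _ in range(steps):
            candidate = mutate(best)
            candidate_score = score(candidate)
            if candidate_score >= best_score:
                best, best_score = candidate, candidate_score
        return best, best_score

    # Toy task: fit a weight vector (a stand-in for a schema's parameters)
    # to a hidden target; score is negated squared error, so 0 is perfect.
    TARGET = [0.2, -0.5, 0.9]

    def score(w):
        return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

    def mutate(w):
        return [x + random.gauss(0.0, 0.05) for x in w]

    learned, final_score = hillclimb(score, mutate, seed=[0.0, 0.0, 0.0])
    print(learned, final_score)  # approaches TARGET as the score climbs toward 0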
