HOUSE_OVERSIGHT_013069.jpg

Extraction Summary

People: 3
Organizations: 4
Locations: 0
Events: 0
Relationships: 1
Quotes: 2

Document Information

Type: Academic/technical book page (evidence item)
File Size: 2.12 MB
Summary

This document appears to be page 153 of a technical book on Artificial General Intelligence (AGI), specifically a discussion of 'cognitive synergy' within the CogPrime architecture; it details the MOSES and PLN algorithms. The page bears a 'HOUSE_OVERSIGHT_013069' stamp, indicating it was collected as evidence in a House Oversight Committee investigation, likely one concerning Jeffrey Epstein's funding of, or connections to, AI research and the broader scientific community.

People (3)

Douglas Hofstadter (Author/Researcher): Cited for suggesting the metaphor of 'knob twiddling' and 'knob creation' [Hof96].
Jack (Hypothetical Example): Used in an analogy about a dog reasoning about children playing.
Jill (Hypothetical Example): Used in an analogy about a dog reasoning about children playing.

Organizations (4)

CogPrime (AI architecture): The primary subject of the text, which discusses its design and components.
MOSES (algorithm): Meta-Optimizing Semantic Evolutionary Search; CogPrime's primary algorithm for learning procedural knowledge.
PLN (inference framework): Probabilistic Logic Networks; the uncertain inference framework mentioned as a component.
House Oversight Committee (congressional committee): Implied by the stamp 'HOUSE_OVERSIGHT_013069', indicating the document is part of a congressional investigation.

Relationships (1)

Douglas Hofstadter → MOSES (intellectual influence): MOSES's 'knob twiddling' and 'knob creation' learning follows a metaphor suggested by Hofstadter [Hof96].

Key Quotes (2)

"Following a metaphor suggested by Douglas Hofstadter [Hof96], MOSES learning covers both 'knob twiddling' (setting the values of knobs) and 'knob creation.'"
Source
HOUSE_OVERSIGHT_013069.jpg
Quote #1
"We now present a little more algorithmic detail regarding the operation and synergetic interaction of CogPrime’s two most sophisticated components: the MOSES procedure learning algorithm... and the PLN uncertain inference framework"
Source
HOUSE_OVERSIGHT_013069.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,271 characters)

creation can be useful indirectly in calculating these probability estimates, via providing new concepts that can be used to make useful inference trails more compact and hence easier to construct.
– Example: The dog may reason that because Jack likes to play, and Jack and Jill are both children, maybe Jill likes to play too. It can carry out this reasoning only if its concept creation process has invented the concept of “child” via analysis of observed data.
In these examples we have focused on cases where two terms in the cognitive schematic are fixed and the third must be filled in; but just as often, the situation is that only one of the terms is fixed. For instance, if we fix G, sometimes the best approach will be to collectively learn C and P. This requires either a procedure learning method that works interactively with a declarative-knowledge-focused concept learning or reasoning method; or a declarative learning method that works interactively with a procedure learning method. That is, it requires the sort of cognitive synergy built into the CogPrime design.
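
To make the cognitive schematic concrete, here is a minimal Python sketch of the "fill in the missing term" pattern described above. All names (Schematic, fill_procedure, estimate_probability) are illustrative assumptions, not CogPrime's actual interfaces.

    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class Schematic:
        """Cognitive schematic C & P -> G with a probability estimate.
        Any term may be unknown (None), to be filled in by a learning
        or inference process."""
        context: Optional[Any] = None    # C: observed context
        procedure: Optional[Any] = None  # P: procedure to execute
        goal: Optional[Any] = None       # G: system goal
        probability: float = 0.0         # estimated strength of C & P -> G

    def fill_procedure(s: Schematic, candidates,
                       estimate_probability: Callable) -> Schematic:
        """With C and G fixed, choose the P that maximizes the estimated
        probability of C & P -> G (a stand-in for procedure learning)."""
        s.procedure = max(candidates,
                          key=lambda p: estimate_probability(s.context, p, s.goal))
        s.probability = estimate_probability(s.context, s.procedure, s.goal)
        return s

The case the text emphasizes, where only G is fixed, would instead search jointly over (C, P) pairs, which is exactly where a concept learner and a procedure learner must cooperate.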
8.6 Cognitive Synergy for Procedural and Declarative Learning
We now present a little more algorithmic detail regarding the operation and synergetic interaction of CogPrime’s two most sophisticated components: the MOSES procedure learning algorithm (see Chapter 33), and the PLN uncertain inference framework (see Chapter 34). The treatment is necessarily quite compact, since we have not yet reviewed the details of either MOSES or PLN; but as well as illustrating the notion of cognitive synergy more concretely, perhaps the high-level discussion here will make clearer how MOSES and PLN fit into the big picture of CogPrime.
8.6.1 Cognitive Synergy in MOSES
MOSES, CogPrime’s primary algorithm for learning procedural knowledge, has been tested on a variety of application problems including standard GP test problems, virtual agent control, biological data analysis and text classification [Loo06]. It represents procedures internally as program trees. Each node in a MOSES program tree is supplied with a “knob,” comprising a set of values that may potentially be chosen to replace the data item or operator at that node. So for instance a node containing the number 7 may be supplied with a knob that can take on any integer value. A node containing a while loop may be supplied with a knob that can take on various possible control flow operators including conditionals or the identity. A node containing a procedure representing a particular robot movement, may be supplied with a knob that can take on values corresponding to multiple possible movements. Following a metaphor suggested by Douglas Hofstadter [Hof96], MOSES learning covers both “knob twiddling” (setting the values of knobs) and “knob creation.”
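
As a rough sketch of the knob representation just described (hypothetical names; not the actual MOSES code), a program-tree node and its knob might look like:

    import random
    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class Node:
        """A node in a MOSES-style program tree. Its 'knob' is the set of
        values that may replace the data item or operator at this node."""
        value: Any                                     # e.g. 7, or a while-loop operator
        knob: List[Any] = field(default_factory=list)  # candidate replacement values
        children: List["Node"] = field(default_factory=list)

    def twiddle(node: Node) -> None:
        """'Knob twiddling': set the node to one of its candidate values."""
        if node.knob:
            node.value = random.choice(node.knob)

    def create_knob(node: Node, candidates) -> None:
        """'Knob creation': attach a new set of candidate values to a node."""
        node.knob = list(candidates)

    # A numeric node whose knob ranges over small integers:
    n = Node(value=7)
    create_knob(n, range(-10, 11))
    twiddle(n)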
MOSES is invoked within CogPrime in a number of ways, but most commonly for finding a procedure P satisfying a probabilistic implication C&P → G as described above, where C is an observed context and G is a system goal. In this case the probability value of the implication provides the “scoring function” that MOSES uses to assess the quality of candidate procedures.
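
Tying these two paragraphs together, a toy search loop under the same assumptions (reusing Node and twiddle from the sketch above) would twiddle knobs and keep changes that raise the implication's probability score; the actual MOSES algorithm is considerably more sophisticated (see Chapter 33):

    def toy_moses(tree_nodes, score_fn, iterations=100):
        """Toy hill-climbing over knob settings: twiddle a random knob and
        keep the change only if the score (the estimated probability of
        C & P -> G) does not decrease."""
        best = score_fn()
        for _ in range(iterations):
            node = random.choice(tree_nodes)
            old = node.value
            twiddle(node)
            new = score_fn()
            if new >= best:
                best = new
            else:
                node.value = old  # revert the unhelpful twiddle
        return best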
HOUSE_OVERSIGHT_013069
