HOUSE_OVERSIGHT_013071.jpg


Extraction Summary

People: 0
Organizations: 3
Locations: 0
Events: 0
Relationships: 1
Quotes: 2

Document Information

Type: Academic/technical publication page (evidence item)
File Size: 2.15 MB
Summary

This document is page 155 of a technical publication on artificial intelligence, from the section 'Cognitive Synergy for Procedural and Declarative Learning.' It describes how the MOSES procedure-learning algorithm is integrated with Probabilistic Logic Networks (PLN) within the CogPrime architecture as part of a proposed path toward human-level AI. The page bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was likely produced to the US House Oversight Committee, possibly in connection with investigations into scientific funding or ties to Jeffrey Epstein, though no individuals are named on this page.

Relationships (1)

MOSES (algorithm) → Synergistic Integration → PLN (Probabilistic Logic Networks)
Evidence: "consulting PLN inference to help estimate which collections of knob settings will work best"

Key Quotes (2)

"MOSES is a powerful procedure learning algorithm, but used on its own it runs into scalability problems like any other such algorithm; the reason we feel it has potential to play a major role in a human-level AI system is its capacity for productive interoperation with other cognitive components."
Source: HOUSE_OVERSIGHT_013071.jpg (Quote #1)
"Cross-process and cross-memory-type integration make it tractable for MOSES to act as a 'transfer learning' algorithm, not just a task-specific machine-learning algorithm."
Source: HOUSE_OVERSIGHT_013071.jpg (Quote #2)

Full Extracted Text

Complete text extracted from the document (3,289 characters)

8.6 Cognitive Synergy for Procedural and Declarative Learning
a. Select some promising programs from the deme's existing sample to use for modeling, according to the scoring function.
b. Considering the promising programs as collections of knob settings, generate new collections of knob settings by applying some (competent) optimization algorithm. For best performance on difficult problems, it is important to use an optimization algorithm that makes use of the system's memory in its choices, consulting PLN inference to help estimate which collections of knob settings will work best.
c. Convert the new collections of knob settings into their corresponding programs, reduce the programs to normal form, evaluate their scores, and integrate them into the deme's sample, replacing less promising programs. In the case that scoring is expensive, score evaluation may be preceded by score estimation, which may use PLN inference, enaction of procedures in an internal simulation environment, and/or similarity matching against episodic memory.
3. For each new program that meets the criterion for creating a new deme, if any:
a. Construct a new set of knobs (a process called "representation-building") to define a region centered around the program (the deme’s exemplar), and use it to generate a new random sampling of programs, producing a new deme.
b. Integrate the new deme into the metapopulation, possibly displacing less promising demes.
4. Repeat from step 2. (A minimal toy sketch of this loop follows the pseudocode.)
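
The deme-optimization loop above lends itself to a compact illustration. The following is a minimal, hypothetical sketch in Python: every name (optimize_deme, spawn_deme, pln_estimate, and so on) is an illustrative stand-in rather than the actual OpenCog/MOSES API, the scoring function is a toy, and the "PLN inference" of step (b) is mocked as a simple similarity prior over previously successful knob settings.

```python
import random

KNOBS = 16  # each toy "program" is a vector of boolean knob settings


def score(knobs):
    # Toy scoring function; stands in for evaluating a real program.
    return sum(knobs)


def reduce_to_normal_form(knobs):
    # Real MOSES reduces programs to a canonical normal form; for this
    # toy representation the reduction is the identity.
    return tuple(knobs)


def pln_estimate(knobs, memory):
    # Mocked "PLN inference": estimate how promising a knob setting is by
    # its similarity to settings that scored well before (system memory).
    if not memory:
        return 0.0
    return max(sum(a == b for a, b in zip(knobs, m)) for m in memory)


def perturb(knobs, flips=2):
    new = list(knobs)
    for i in random.sample(range(KNOBS), flips):
        new[i] = 1 - new[i]
    return tuple(new)


def optimize_deme(deme, memory, n_promising=5, n_candidates=40, n_keep=20):
    # (a) Select promising programs from the deme's existing sample.
    promising = sorted(deme, key=score, reverse=True)[:n_promising]
    memory.extend(promising)

    # (b) Treat promising programs as knob settings and generate new ones,
    # consulting the (mocked) PLN estimate to pick candidates to evaluate.
    candidates = [perturb(random.choice(promising)) for _ in range(n_candidates)]
    candidates.sort(key=lambda k: pln_estimate(k, memory), reverse=True)

    # (c) Convert settings to programs, reduce to normal form, score, and
    # integrate them, replacing less promising programs.
    new_programs = [reduce_to_normal_form(k) for k in candidates[:n_keep]]
    return sorted(set(deme) | set(new_programs), key=score, reverse=True)[:len(deme)]


def spawn_deme(exemplar, size=20):
    # Step 3: "representation-building" around a promising exemplar,
    # generating a new random sampling of programs to form a new deme.
    return [perturb(exemplar, flips=3) for _ in range(size)] + [exemplar]


if __name__ == "__main__":
    random.seed(0)
    memory = []
    deme = spawn_deme(tuple(random.randint(0, 1) for _ in range(KNOBS)))
    for _ in range(10):  # step 4: repeat from step 2
        deme = optimize_deme(deme, memory)
    print("best score:", score(max(deme, key=score)))
```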
MOSES is a complex algorithm and each part plays its role; if any one part is removed the performance suffers significantly [Loo06]. However, the main point we want to highlight here is the role played by synergetic interactions between MOSES and other cognitive components such as PLN, simulation and episodic memory, as indicated in boldface in the above pseudocode. MOSES is a powerful procedure learning algorithm, but used on its own it runs into scalability problems like any other such algorithm; the reason we feel it has potential to play a major role in a human-level AI system is its capacity for productive interoperation with other cognitive components.
Continuing the "tag" example, the power of MOSES's integration with other cognitive processes would come into play if, before learning to play tag, the robot has already played simpler games involving chasing. If the robot already has experience chasing and being chased by other agents, then its episodic and declarative memory will contain knowledge about how to pursue and avoid other agents in the context of running around an environment full of objects, and this knowledge will be deployable within the appropriate parts of MOSES's Steps 1 and 2. Cross-process and cross-memory-type integration make it tractable for MOSES to act as a "transfer learning" algorithm, not just a task-specific machine-learning algorithm.
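
Under the same toy assumptions, the transfer-learning point can be made concrete. The snippet below reuses the hypothetical definitions from the previous sketch; the only change is that the first deme and the memory consulted by pln_estimate are seeded with knob settings retrieved from a prior, related task (the analogue of having already learned chasing before learning tag), rather than starting from scratch.

```python
import random

random.seed(1)

# Stand-in for knowledge retrieved from episodic/declarative memory:
# knob settings that scored well when learning an earlier chasing game.
prior_task_winners = [perturb(tuple(1 if i < 12 else 0 for i in range(KNOBS)))
                      for _ in range(3)]

memory = list(prior_task_winners)         # biases pln_estimate toward old wins
deme = spawn_deme(prior_task_winners[0])  # first deme centered on a prior solution

for _ in range(10):
    deme = optimize_deme(deme, memory)
print("best score with transfer:", score(max(deme, key=score)))
```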
8.6.2 Cognitive Synergy in PLN
While MOSES handles much of CogPrime's procedural learning, and OpenCogPrime's internal simulation engine handles most episodic knowledge, CogPrime's primary tool for handling declarative knowledge is an uncertain inference framework called Probabilistic Logic Networks (PLN). The complexities of PLN are the topic of a lengthy technical monograph [GMIH08], and
HOUSE_OVERSIGHT_013071
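
The page ends mid-sentence as it begins to introduce PLN. As a hedged illustration of the kind of uncertain inference PLN performs over declarative knowledge, the sketch below propagates (strength, confidence) truth values through a single deduction step. The strength formula is the independence-based deduction rule given in the PLN literature [GMIH08]; the confidence combination is a deliberate simplification for illustration, and all names here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TruthValue:
    strength: float    # probability-like degree of truth
    confidence: float  # weight of evidence behind the strength


def deduce(ab: TruthValue, bc: TruthValue,
           b: TruthValue, c: TruthValue) -> TruthValue:
    """Infer A->C from A->B and B->C plus term probabilities for B and C."""
    if b.strength >= 1.0:  # avoid division by zero in the degenerate case
        return TruthValue(c.strength, min(ab.confidence, bc.confidence))
    s_ac = (ab.strength * bc.strength
            + (1 - ab.strength) * (c.strength - b.strength * bc.strength)
            / (1 - b.strength))
    # Simplified confidence: discount through both premises.
    return TruthValue(min(max(s_ac, 0.0), 1.0), ab.confidence * bc.confidence)


# Example: "ravens are birds" and "birds fly" -> "ravens fly", with the
# uncertainty propagated rather than assumed away.
print(deduce(TruthValue(0.95, 0.9), TruthValue(0.8, 0.8),
             TruthValue(0.2, 0.9), TruthValue(0.3, 0.9)))
```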
