HOUSE_OVERSIGHT_013027.jpg


Extraction Summary

People: 1
Organizations: 3
Locations: 0
Events: 0
Relationships: 0
Quotes: 2

Document Information

Type: Technical document / academic book excerpt (evidence)
File Size: 2.4 MB
Summary

This document is page 111 of a technical academic text discussing Artificial General Intelligence (AGI), specifically the 'CogPrime' and 'OpenCog' architectures. It details memory types, cognitive processes, and 'cognitive synergy', including the role of the Probabilistic Logic Networks (PLN) framework in declarative learning. While technical in nature, the document bears the Bates stamp 'HOUSE_OVERSIGHT_013027', indicating it was collected as evidence during a US House Oversight Committee investigation, likely in connection with Jeffrey Epstein's funding of, and connections to, scientific research and AI projects.

People (1)

Name: Hutter
Role: Researcher
Context: Cited in the text regarding AGI efficiency theory.

Organizations (3)

Name: OpenCog
Context: AI project discussed in the text.

Name: CogPrime
Context: Specific AI architecture discussed in the text.

Name: House Oversight Committee
Context: Implied by the Bates stamp 'HOUSE_OVERSIGHT' at the bottom of the page.

Key Quotes (2)

Quote #1: "efficiency is not a side-issue but rather the essence of real-world AGI"
Source: HOUSE_OVERSIGHT_013027.jpg

Quote #2: "when a learning process concerned centrally with one type of memory encounters a situation where it learns very slowly, it can often resolve the issue by converting some of the relevant knowledge into a different type of memory: i.e. cognitive synergy"
Source: HOUSE_OVERSIGHT_013027.jpg

Full Extracted Text

Complete text extracted from the document (3,781 characters)

6.4 Memory Types and Associated Cognitive Processes in CogPrime 111
CogPrime's memory types are the declarative, procedural, sensory, and episodic memory types that are widely discussed in cognitive neuroscience [TC05], plus attentional memory for allocating system resources generically, and intentional memory for allocating system resources in a goal-directed way. Table 6.2 overviews these memory types, giving key references and indicating the corresponding cognitive processes, and also indicating which of the generic patternist cognitive dynamics each cognitive process corresponds to (pattern creation, association, etc.). Figure 6.7 illustrates the relationships between several of the key memory types in the context of a simple situation involving an OpenCogPrime-controlled agent in a virtual world.
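[Annotation, not part of the extracted text.] The six memory types listed in the paragraph above can be sketched as a simple enumeration paired with the specialized cognitive processes the text associates with them. The pairings for declarative, procedural, and episodic memory are stated later in this extract (PLN, MOSES, and the internal simulation engine, respectively); the remaining process names are assumptions for illustration, and none of this reflects OpenCog's actual API.

```python
from enum import Enum

# The six CogPrime memory types named in the extracted text.
# Class and member names here are illustrative, not OpenCog identifiers.
class MemoryType(Enum):
    DECLARATIVE = "declarative"
    PROCEDURAL = "procedural"
    SENSORY = "sensory"
    EPISODIC = "episodic"
    ATTENTIONAL = "attentional"   # generic allocation of system resources
    INTENTIONAL = "intentional"   # goal-directed allocation of system resources

# Each memory type is served by a specialized cognitive process.
# The first three pairings come from this extract; the last three are
# hypothetical placeholders added for completeness of the sketch.
ASSOCIATED_PROCESS = {
    MemoryType.DECLARATIVE: "PLN inference",
    MemoryType.PROCEDURAL: "MOSES program learning",
    MemoryType.EPISODIC: "internal simulation engine",
    MemoryType.SENSORY: "perception processing",        # assumption
    MemoryType.ATTENTIONAL: "attention allocation",     # assumption
    MemoryType.INTENTIONAL: "goal system",              # assumption
}
```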
In terms of patternist cognitive theory, the multiple types of memory in CogPrime should be considered as specialized ways of storing particular types of patterns, optimized for spacetime efficiency. The cognitive processes associated with a certain type of memory deal with creating and recognizing patterns of the type for which the memory is specialized. While in principle all the different sorts of pattern could be handled in a unified memory and processing architecture, the sort of specialization used in CogPrime is necessary in order to achieve acceptably efficient general intelligence using currently available computational resources. And as we have argued in detail in Chapter 7, efficiency is not a side-issue but rather the essence of real-world AGI (since as Hutter has shown, if one casts efficiency aside, arbitrary levels of general intelligence can be achieved via a trivially simple program).
The essence of the CogPrime design lies in the way the structures and processes associated with each type of memory are designed to work together in a closely coupled way, yielding cooperative intelligence going beyond what could be achieved by an architecture merely containing the same structures and processes in separate "black boxes."
The inter-cognitive-process interactions in OpenCog are designed so that
• conversion between different types of memory is possible, though sometimes computationally costly (e.g. an item of declarative knowledge may with some effort be interpreted procedurally or episodically, etc.)
• when a learning process concerned centrally with one type of memory encounters a situation where it learns very slowly, it can often resolve the issue by converting some of the relevant knowledge into a different type of memory: i.e. cognitive synergy
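[Annotation, not part of the extracted text.] The two bullet points above describe a control strategy: when learning against one memory type stalls, re-encode the knowledge into another memory type (a possibly costly conversion) and retry there. A minimal sketch of that loop, with entirely hypothetical function names and a made-up progress-rate interface, might look like:

```python
# Illustrative control loop (not actual OpenCog code) for cognitive synergy:
# a learner tied to one memory type that progresses too slowly converts the
# relevant knowledge into another memory type and retries learning there.
def learn_with_synergy(item, primary_learn, fallback_learners, convert,
                       stall_threshold=0.01):
    """primary_learn: item -> (result, progress_rate).
    fallback_learners: dict mapping memory-type name -> learning function.
    convert: (item, target_memory_type) -> re-represented item; this
    conversion may itself be computationally costly."""
    result, rate = primary_learn(item)
    if rate >= stall_threshold:
        return result  # the primary memory type is learning fast enough
    for mem_type, learn in fallback_learners.items():
        converted = convert(item, mem_type)  # re-encode the knowledge
        result, rate = learn(converted)
        if rate >= stall_threshold:
            return result  # another representation unblocked learning
    return result  # no representation learned quickly; return the last attempt
```

The design choice worth noting is that conversion is attempted only on a stall, reflecting the text's point that inter-memory conversion is possible but sometimes computationally costly.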
6.4.1 Cognitive Synergy in PLN
To put a little meat on the bones of the "cognitive synergy" idea, discussed repeatedly in prior chapters and more extensively in later chapters, we now elaborate a little on the role it plays in the interaction between procedural and declarative learning.
While MOSES handles much of CogPrime's procedural learning, and CogPrime's internal simulation engine handles most episodic knowledge, CogPrime's primary tool for handling declarative knowledge is an uncertain inference framework called Probabilistic Logic Networks (PLN). The complexities of PLN are the topic of a lengthy technical monograph [GMIH08], and are summarized in Chapter 34; here we will eschew most details and focus mainly on pointing out how PLN seeks to achieve efficient inference control via integration with other cognitive processes.
As a logic, PLN is broadly integrative: it combines certain term logic rules with more standard predicate logic rules, and utilizes both fuzzy truth values and a variant of imprecise probabilities called indefinite probabilities. PLN mathematics tells how these uncertain truth values propagate
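[Annotation, not part of the extracted text.] The paragraph above says PLN truth values pair a probability estimate with a measure of uncertainty, and that PLN mathematics governs how such values propagate through inference. The sketch below shows one deduction step (A→B, B→C ⇒ A→C) using the independence-based deduction formula found in the general PLN literature; the class, function, and minimum-confidence combination rule are simplified illustrations, not PLN's actual indefinite-probability machinery.

```python
from dataclasses import dataclass

# Simplified (strength, confidence) truth value in the spirit of PLN.
@dataclass
class TruthValue:
    strength: float    # estimated probability of the statement
    confidence: float  # weight of evidence behind that estimate, in [0, 1]

def deduce(tv_ab, tv_bc, s_b, s_c):
    """One deduction step under an independence assumption:
    P(C|A) = P(C|B) P(B|A) + P(C|~B) (1 - P(B|A)),
    with P(C|~B) estimated from the term probabilities s_b = P(B), s_c = P(C)."""
    conf = min(tv_ab.confidence, tv_bc.confidence)  # crude heuristic: a
    # conclusion is no more confident than its weakest premise (illustrative).
    if s_b >= 1.0:
        return TruthValue(tv_bc.strength, conf)  # degenerate case: B always holds
    s_c_given_not_b = (s_c - s_b * tv_bc.strength) / (1.0 - s_b)
    s_c_given_not_b = min(1.0, max(0.0, s_c_given_not_b))  # clamp to [0, 1]
    strength = (tv_ab.strength * tv_bc.strength
                + (1.0 - tv_ab.strength) * s_c_given_not_b)
    return TruthValue(strength, conf)
```

For example, with a certain A→B (strength 1.0) the conclusion's strength reduces to the strength of B→C, as the formula requires.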
HOUSE_OVERSIGHT_013027
