HOUSE_OVERSIGHT_012994.jpg

2.39 MB

Extraction Summary

People: 0
Organizations: 4
Locations: 0
Events: 0
Relationships: 2
Quotes: 2

Document Information

Type: Academic/scientific text (book page) included in a House Oversight document production
File Size: 2.39 MB
Summary

This document is page 78 of a scientific text on cognitive architectures and Artificial General Intelligence (AGI). It discusses 'Globalist versus Localist Representations' in AI memory systems, comparing specific systems such as OpenCogPrime, DeSTIN, Hopfield neural nets, and Cyc. The page bears a 'HOUSE_OVERSIGHT' stamp, indicating it was part of a document production for a congressional investigation, likely related to Jeffrey Epstein's funding of or interest in scientific research.

Organizations (4)

OpenCogPrime: Mentioned as a heavily symbolic AGI architecture being hybridized with neural nets.
DeSTIN: A hierarchical attractor neural-net-based architecture.
Cyc: Cited as an example of a 'localist' system with a high degree of memory localization.
House Oversight Committee: Implied by the footer stamp 'HOUSE_OVERSIGHT_012994'.

Relationships (2)

OpenCogPrime -> Hybridization -> DeSTIN
"exploring the potential for this sort of hybridization between the OpenCogPrime AGI architecture... and... DeSTIN."
Hopfield neural net -> Comparison -> Cyc
"a Hopfield neural net [Ami89] would be considered 'globalist'... whereas Cyc would be considered 'localist'"

Key Quotes (2)

"In Chapter 26 of Volume 2 we will give a more concrete idea of what a symmetric high-interaction hybrid neural-symbolic architecture might look like..."
Source
HOUSE_OVERSIGHT_012994.jpg
Quote #1
"CogPrime combines both symbolic and (loosely) neural representations, and also combines globalist and localist representations in a way that we will call 'glocal'..."
Source
HOUSE_OVERSIGHT_012994.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,855 characters)

[Page 78 running header: 4 Brief Survey of Cognitive Architectures]
interacts with other subsystems in the brain much in the manner that the symbolic and neural components of a symmetric high-interaction neural-symbolic system interact.
Neuroscience speculations aside, however, our key conjecture regarding neural-symbolic integration is that this sort of neural-symbolic system presents a promising direction for artificial general intelligence research. In Chapter 26 of Volume 2 we will give a more concrete idea of what a symmetric high-interaction hybrid neural-symbolic architecture might look like, exploring the potential for this sort of hybridization between the OpenCogPrime AGI architecture (which is heavily symbolic in nature) and hierarchical attractor neural net based architectures such as DeSTIN.
4.5 Globalist versus Localist Representations
Another interesting distinction, related to but different from “symbolic versus emergentist” and “neural versus symbolic”, may be drawn between cognitive systems (or subsystems) where memory is essentially global, and those where memory is essentially local. In this section we will pursue this distinction in various guises, along with the less familiar notion of glocal memory.
This globalist/localist distinction is most easily conceptualized by reference to memories corresponding to categories of entities or events in an external environment. In an AI system that has an internal notion of “activation” – i.e. in which some of its internal elements are more active than others, at any given point in time – one can define the internal image of an external event or entity as the fuzzy set of internal elements that tend to be active when that event or entity is presented to the system’s sensors. If one has a particular set S of external entities or events of interest, then, the degree of memory localization of such an AI system relative to S may be conceived as the percentage of the system’s internal elements that have a high degree of membership in the internal image of an average element of S.
Of course, this characterization of localization has its limitations, such as the possibility of ambiguity regarding what are the “system elements” of a given AI system; and the exclusive focus on internal images of external phenomena rather than representation of internal abstract concepts. However, our goal here is not to formulate an ultimate, rigorous and thorough ontology of memory systems, but only to pose a “rough and ready” categorization so as to properly frame our discussion of some specific AGI issues relevant to CogPrime. Clearly the ideas pursued here will benefit from further theoretical exploration and elaboration.
In this sense, a Hopfield neural net [Ami89] would be considered “globalist” since it has a low degree of memory localization (most internal images heavily involve a large number of system elements); whereas Cyc would be considered “localist” as it has a very high degree of memory localization (most internal images are heavily focused on a small set of system elements).
However, although Hopfield nets and Cyc form handy examples, the “globalist vs. localist” distinction as described above is not identical to the “neural vs. symbolic” distinction. For it is in principle quite possible to create localist systems using formal neurons, and also to create globalist systems using formal logic. And “globalist-localist” is not quite identical to “symbolic vs emergentist” either, because the latter is about coordinated system dynamics and behavior not just about knowledge representation. CogPrime combines both symbolic and (loosely) neural representations, and also combines globalist and localist representations in a way that we will call “glocal” and analyze more deeply in Chapter 13; but there are many other ways these various
HOUSE_OVERSIGHT_012994
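
Illustration: Degree of Memory Localization

The extracted text above characterizes the degree of memory localization of a system, relative to a set S of external entities, in terms of how many internal elements have a high degree of membership in the internal image of an average element of S. The following is a minimal numerical sketch of that quantity, not part of the source page; the activation arrays, the membership threshold theta, and the function names are illustrative assumptions.

import numpy as np

# Illustrative sketch only; array shapes, threshold, and toy data are assumptions.

def internal_image(activations):
    """Fuzzy membership of each internal element in the image of one external
    entity: here, the element's mean activation (in [0, 1]) over repeated
    presentations of that entity."""
    return activations.mean(axis=0)

def high_membership_fraction(images, theta=0.5):
    """Average, over the entity set S, of the fraction of internal elements
    whose membership in an entity's internal image exceeds theta."""
    return float(np.mean([(img > theta).mean() for img in images]))

# Toy comparison: 5 entities, 10 presentations each, 100 internal elements.
rng = np.random.default_rng(0)
globalist_acts = [rng.uniform(0.4, 1.0, size=(10, 100)) for _ in range(5)]
localist_acts = [(rng.random((10, 100)) < 0.03).astype(float) for _ in range(5)]

print(high_membership_fraction([internal_image(a) for a in globalist_acts]))  # near 1.0
print(high_membership_fraction([internal_image(a) for a in localist_acts]))   # near 0.0

Per the Hopfield/Cyc contrast drawn in the text, internal images spread across many elements (a large fraction here) sit at the globalist end of the spectrum, while images concentrated on a few elements sit at the localist end.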
