HOUSE_OVERSIGHT_013176.jpg

Extraction Summary

People: 1
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 3

Document Information

Type: Academic book page / scientific manuscript
File Size: 2.33 MB
Summary

This document is page 260 of a scientific text, likely a book or manuscript on artificial intelligence and neuroscience (specifically the 'CogPrime' architecture). It discusses 'Glocal Memory' and compares the CogPrime system to Gerald Edelman's theories of 'Neural Darwinism' and neuronal group selection. The page bears a House Oversight stamp, indicating that it was included in the committee's evidence files, likely because of Epstein's connections to the scientific community and his funding of AI research.

People (1)

Name: Edelman
Role: Scientist/Author
Context: Referenced for his work 'Neural Darwinism' and his theories on neuronal group selection.

Organizations (2)

Name: CogPrime
Context: An Artificial Intelligence architecture discussed in the text, compared to Edelman's biological theories.

Name: House Oversight Committee
Context: The identifier in the footer implies that this document is part of a congressional investigation.

Relationships (1)

Parties: Edelman / Author (Unspecified)
Type: Academic/Theoretical
Context: The author discusses and critiques Edelman's theories in relation to CogPrime.

Key Quotes (3)

"In Neural Darwinism and his other related books and papers, Edelman goes far beyond this crude sketch and presents neuronal group selection as a collection of precise biological hypotheses..."
Source
HOUSE_OVERSIGHT_013176.jpg
Quote #1
"CogPrime does not have simulated biological neurons and synapses, but it does have Nodes and Links that in some contexts play loosely similar roles."
Source
HOUSE_OVERSIGHT_013176.jpg
Quote #2
"A glocal memory is one that transcends the global/local dichotomy and incorporates both aspects in a tightly interconnected way."
Source
HOUSE_OVERSIGHT_013176.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,715 characters)

260 13 Local, Global and Glocal Knowledge Representation
cated acts of perception. In general-evolution language, what is posited here is that organisms like humans contain chemical signals that signify organism-level success of various types, and that these signals serve as a "fitness function" correlating with evolutionary fitness of neuronal maps.
In Neural Darwinism and his other related books and papers, Edelman goes far beyond this crude sketch and presents neuronal group selection as a collection of precise biological hypotheses, and presents evidence in favor of a number of these hypotheses. However, we consider that the basic concept of neuronal group selection is largely independent of the biological particularities in terms of which Edelman has phrased it. We suspect that the mutation and selection of "transformations" or "maps" is a necessary component of the dynamics of any intelligent system.
As we will see later on (e.g. in Chapter 42 of Part 2), this business of maps is extremely important to CogPrime. CogPrime does not have simulated biological neurons and synapses, but it does have Nodes and Links that in some contexts play loosely similar roles. We sometimes think of CogPrime Nodes and Links as being very roughly analogous to Edelman's neuronal clusters, and emergent intercluster links. And we have maps among CogPrime Nodes and Links, just as Edelman has maps among his neuronal clusters. Maps are not the sole bearers of meaning in CogPrime, but they are significant ones.
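The Node/Link/map relationship described in this passage can be pictured with a small data-structure sketch. The classes below (Node, Link, Map) are illustrative stand-ins introduced for this summary only; they are not CogPrime's actual types or API.

# Minimal sketch, assuming Nodes and Links are simple graph elements and a
# "map" is just a set of them that tends to be activated as a whole.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str                    # an atom of knowledge (concept, percept, ...)

@dataclass(frozen=True)
class Link:
    source: Node
    target: Node
    weight: float = 1.0          # loosely analogous to an emergent inter-cluster connection

@dataclass
class Map:
    """A set of Nodes and Links that tends to be activated together."""
    nodes: frozenset
    links: frozenset

# A tiny map binding two nodes through one link.
cat, animal = Node("cat"), Node("animal")
is_a = Link(cat, animal, weight=0.9)
cat_map = Map(nodes=frozenset({cat, animal}), links=frozenset({is_a}))
print(len(cat_map.nodes), len(cat_map.links))   # 2 1

In this toy picture, meaning can attach both to individual Nodes and Links and to the Map as a whole, echoing the analogy with Edelman's neuronal maps drawn above.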
There is a very natural connection between Edelman-style brain evolution and the ideas about cognitive evolution presented in Chapter 3. Edelman proposes a fairly clear mechanism via which patterns that survive a while in the brain are differentially likely to survive a long time: this is basic Hebbian learning, which in Edelman's picture plays a role between neuronal groups. And, less directly, Edelman's perspective also provides a mechanism by which intense patterns will be differentially selected in the brain: because on the level of neural maps, pattern intensity corresponds to the combination of compactness and functionality. Among a number of roughly equally useful maps serving the same function, the more compact one will be more likely to survive over time, because it is less likely to be disrupted by other brain processes (such as other neural maps seeking to absorb its component neuronal groups into themselves). Edelman's neuroscience remains speculative, since so much remains unknown about human neural structure and dynamics; but it does provide a tentative and plausible connection between evolutionary neurodynamics and the more abstract sort of evolution that patternist philosophy posits to occur in the realm of mind-patterns.
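The two mechanisms invoked in the paragraph above, Hebbian reinforcement between co-active groups and the preferential survival of more compact maps, can be sketched numerically. The functions below are a toy illustration under our own simplifying assumption of a fixed per-component disruption rate; they are not the model proposed in the text.

# Toy illustration of the selection dynamic sketched above.
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen a connection when two groups fire together; decay slowly otherwise."""
    if pre_active and post_active:
        return weight + rate * (1.0 - weight)
    return weight * (1.0 - 0.01 * rate)

def survival_probability(map_size, functionality, disruption_rate=0.02):
    """Among equally functional maps, a smaller (more compact) map has fewer
    components exposed to disruption, so it persists with higher probability."""
    return functionality * (1.0 - disruption_rate) ** map_size

# Two maps serving the same function: the compact one survives more often.
print(survival_probability(map_size=10, functionality=1.0))  # ~0.82
print(survival_probability(map_size=40, functionality=1.0))  # ~0.45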
13.6 Glocal Memory
A glocal memory is one that transcends the global/local dichotomy and incorporates both aspects in a tightly interconnected way. Here we make the glocal memory concept more precise, and describe its incarnation in the context of attractor neural nets (which is similar to its incarnation in CogPrime, to be elaborated in later chapters). Though our main interest here is in glocality in CogPrime, we also suggest that glocality may be a critical property to consider when analyzing human, animal and AI memory more broadly.
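One way to picture the attractor-net incarnation mentioned above is a Hopfield-style network in which each stored pattern is a globally distributed attractor while a dedicated "key" unit gives a local handle on the same memory. The sketch below is an assumption made purely for illustration here; it is not the construction the book develops in later chapters.

import numpy as np

# Hopfield-style sketch of a "glocal" memory, under our own illustrative
# assumption: each stored pattern is (a) a distributed attractor over the
# content units and (b) reachable through a single local "key" unit.
rng = np.random.default_rng(0)
n_content, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_content))

# Global aspect: Hebbian outer-product weights among content units.
W = sum(np.outer(p, p) for p in patterns) / n_content
np.fill_diagonal(W, 0)

# Local aspect: key unit k projects its whole pattern onto the content units.
key_gain = 2.0
K = key_gain * patterns                      # shape (n_patterns, n_content)

def recall(content_state, key_state, steps=10):
    """Iterate the content units with the key activations held fixed."""
    s = content_state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s + K.T @ key_state)
        s[s == 0] = 1.0
    return s

# Global route: a noisy copy of pattern 0 settles back into its attractor.
noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=n_content)
print((recall(noisy, np.zeros(n_patterns)) @ patterns[0]) / n_content)

# Local route: switching on key 0 alone retrieves the same distributed pattern.
print((recall(np.ones(n_content), np.array([1.0, 0.0, 0.0])) @ patterns[0]) / n_content)
# Both overlaps should come out close to 1.0.

In this toy setup the memory is "glocal" in the sense used above: the content lives in a distributed attractor, yet a single localized unit suffices to evoke it.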
The notion of glocal memory has implicitly occurred in a number of prior brain theories (without use of the neologism "glocal"), e.g. [Cal96] and [Goe01], but it has not previously been explicitly developed. However the concept has risen to the fore in our recent AI work and so we have chosen to flesh it out more fully in [HG08], [GPI+10] and the present section.
HOUSE_OVERSIGHT_013176
