13.6 Glocal Memory 269
The result of these modifications to the ordinary Hopfield net is a network that continually maintains a set of key neurons, each of which individually represents a particular attractor of the net.
Note that these key neurons – in spite of being “symbolic” in nature – are learned rather than preprogrammed, and are every bit as adaptive as the attractors they correspond to. Furthermore, if a key neuron is removed, the glocal Hopfield net algorithm will eventually relearn it, so the robustness properties of Hopfield nets are retained.
The results of experimenting with glocal Hopfield nets of this nature are summarized in [GPI+10]. We studied Hopfield nets with connectivity around 0.1, and in this context we found that glocality
• slightly increased memory capacity
• massively increased the rate of convergence to the attractor, i.e. the speed of recall
However, probably the most important consequence of glocality is a more qualitative one: it makes it far easier to link the Hopfield net into a larger system, as would occur if the Hopfield net were embedded in an integrative AGI architecture. This is because a neuron external to the Hopfield net can now link to a memory in the Hopfield net simply by linking to the corresponding key neuron.
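The mechanism described above can be illustrated with a minimal sketch. The class below is not the exact algorithm of [GPI+10]; it is an illustrative toy in which each stored attractor gets one “key” unit, strongly and bidirectionally coupled to its pattern, so that the key’s activation both reports which attractor is active and feeds back to accelerate convergence toward it. The `key_strength` parameter and the weighting scheme are assumptions for illustration.

```python
import numpy as np

class GlocalHopfieldNet:
    """Toy Hopfield net plus one 'key' neuron per stored attractor.
    Illustrative sketch only, not the algorithm of [GPI+10]."""

    def __init__(self, n, key_strength=2.0):
        self.n = n
        self.W = np.zeros((n, n))   # ordinary Hopfield weight matrix
        self.keys = []              # one weight vector per key neuron
        self.key_strength = key_strength

    def store(self, pattern):
        p = np.asarray(pattern, dtype=float)     # entries in {-1, +1}
        self.W += np.outer(p, p) / self.n        # Hebbian learning
        np.fill_diagonal(self.W, 0.0)
        self.keys.append(p.copy())               # key neuron tied to this attractor

    def recall(self, cue, steps=20):
        s = np.asarray(cue, dtype=float)
        for _ in range(steps):
            # key activations: overlap of current state with each attractor
            k = np.array([kv @ s / self.n for kv in self.keys])
            # feedback from key neurons biases the net toward its attractor
            bias = self.key_strength * sum(ki * kv for ki, kv in zip(k, self.keys))
            s = np.sign(self.W @ s + bias)
            s[s == 0] = 1.0
        # return retrieved state and the index of the winning key neuron
        return s, int(np.argmax(k))
```

An external module need not inspect the full network state: reading the winning key index returned by `recall` is exactly the “link to the key neuron” shortcut the text describes.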
13.6.4 Neural-Symbolic Glocality in CogPrime
In CogPrime, we have explicitly sought to span the symbolic/emergentist pseudo-dichotomy by creating an integrative knowledge representation that combines logic-based aspects with neural-net-like aspects. As reviewed in Chapter 6 above, these function not in the manner of multimodular systems, but rather via (probabilistic) truth values and (attractor-neural-net-like) attention values serving as weights on the nodes and links of the same (hyper)graph. The nodes and links in this hypergraph are typed, as in a standard semantic-network approach to knowledge representation, so they are able to handle all sorts of knowledge, from the most concrete perception- and actuation-related knowledge to the most abstract relationships. But they are also weighted with values similar to neural-net weights, and pass around quantities (importance values, discussed in Chapter 23 of Part 2) similar to neural-net activations, allowing emergent attractor/assembly-based knowledge representation similar to that of attractor neural nets.
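The dual weighting scheme can be sketched concretely. The snippet below mimics the flavor of this hypergraph representation with illustrative names only (the actual CogPrime data structures and API differ): each atom is typed, carries a probabilistic truth value for the logical aspect, and carries a short-term importance value that spreads like neural-net activation.

```python
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float       # probabilistic strength of the relationship
    confidence: float     # amount of evidence behind it

@dataclass
class Atom:
    """A typed node or link in the hypergraph (illustrative sketch)."""
    atom_type: str                        # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""
    outgoing: list = field(default_factory=list)   # targets, for links
    tv: TruthValue = field(default_factory=lambda: TruthValue(0.5, 0.0))
    sti: float = 0.0                      # short-term importance (attention value)

# Logical aspect: a typed, truth-valued link asserting "cat inherits from animal"
cat = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
inh = Atom("InheritanceLink", outgoing=[cat, animal], tv=TruthValue(0.9, 0.7))

# Neural-net-like aspect: importance spreads along the link like activation
cat.sti = 1.0
animal.sti += 0.2 * cat.sti
```

The same link thus participates in both dynamics: logical inference reads its truth value, while attention allocation reads and updates the importance values on its endpoints.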
The concept of glocality lies at the heart of this combination, in a way that spans the pseudo-dichotomy:
• Local knowledge is represented in abstract logical relationships stored in explicit logical form, and also in Hebbian-type associations between nodes and links.
• Global knowledge is represented in large-scale patterns of node and link weights, which lead to large-scale patterns of network activity, which often take the form of attractors qualitatively similar to Hopfield net attractors. These attractors are called maps.
The result of all this is that a concept like “cat” might be represented as a combination of:
• A small number of logical relationships and strong associations that constitute the “key” subnetwork for the “cat” concept.
• A large network of weak associations, binding together nodes and links of many types and many levels of abstraction, representing the “cat map”.