9.6.2.2 Implications for AGI
What lesson should the AGI developer draw from all this? The particularities of the human
mind/body should not be taken as general requirements for general intelligence. However, it
is worth remembering just how difficult the computational problem is of learning, based on
experiential feedback alone, the right way to achieve the complex goal of controlling a system
with general intelligence at the human level or beyond. Solving this problem without some sort
of strong inductive biasing may require massively more experience than young humans obtain.
Appropriate inductive bias may be embedded in an AGI system in many different ways.
Some AGI designers have sought to embed it very explicitly, e.g. with hand-coded declarative
knowledge as in Cyc, SOAR and other "GOFAI" type systems. On the other hand, the human
brain receives its inductive bias much more subtly and implicitly, both via the specifics of the
initial structure of the cognitive cortex, and via ongoing coupling of the cognitive cortex with
other systems possessing more focused types of intelligence and more specific structures and/or
dynamics.
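To make the distinction a bit more concrete, the following is a minimal Python sketch (not drawn from any particular AGI system; all class and attribute names are illustrative) contrasting the two broad routes just described: explicit bias via hand-coded declarative knowledge, and implicit bias via a structural constraint on the hypothesis space.

```python
# Illustrative sketch only: two hypothetical ways of giving a toy learner
# inductive bias, loosely analogous to the explicit (Cyc/SOAR-style) and
# implicit (structural) approaches discussed above.

from dataclasses import dataclass, field


@dataclass
class Learner:
    """A toy learner whose behavior is shaped by whatever bias it is given."""
    knowledge: set = field(default_factory=set)       # explicit declarative facts/rules
    hypothesis_filter: callable = lambda h: True      # implicit structural constraint

    def seed_declarative(self, rules):
        """Explicit bias: hand-coded knowledge inserted before any experience."""
        self.knowledge.update(rules)

    def learn(self, candidate_hypotheses, experience):
        """Keep only hypotheses permitted by the structural prior and
        consistent with the observed (input, output) experience."""
        return [
            h for h in candidate_hypotheses
            if self.hypothesis_filter(h)
            and all(h(x) == y for x, y in experience)
        ]


# Explicit bias: GOFAI-style seeding with declarative rules.
gofai_agent = Learner()
gofai_agent.seed_declarative({"birds_fly", "fire_is_hot"})

# Implicit bias: restrict the hypothesis space structurally (e.g. to functions
# flagged as monotone) before any experience arrives.
implicit_agent = Learner(hypothesis_filter=lambda h: getattr(h, "monotone", False))
```

The point of the sketch is only that both routes narrow the learner's search before experience does any work; which route (or mixture) is appropriate depends on the rest of the architecture.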
In building an AGI system, one has four choices, very broadly speaking:
1. Create a flexible mind-network, as unbiased as feasible, and attempt to have it learn how
to achieve its goals via experience
2. Closely emulate key aspects of the human body along with the human mind
3. Imitate the human mind-body, conceptually if not in detail, and create a number of struc-
turally and dynamically simpler intelligent systems closely and appropriately coupled to
the abstract cognitive mind-network, so as to provide useful inductive bias.
4. Find some other, creative way to guide and probabilistically constrain one's AGI system's
mind-network, providing inductive bias appropriate to the tasks at hand, without emulating
even conceptually the way the human mind-brain receives its inductive bias via coupling
with simpler intelligent systems.
Our suspicion is that the first option will not be viable. On the other hand, pursuing the second
option would require more knowledge of the human body than biology currently possesses. This
leaves the third and fourth options, both of which seem viable to us.
CogPrime incorporates a combination of the third and fourth options. CogPrime's generic
dynamic knowledge store, the Atomspace, is coupled with specialized hierarchical networks
(DeSTIN) for vision and audition, somewhat mirroring the human cortex. An artificial en-
docrine system for OpenCog is also under development, speculatively, as part of a project using
OpenCog to control humanoid robots. On the other hand, OpenCog has no gastrointestinal or
cardiac nervous system; the stress-response-based guidance provided to the human brain by the
heart, gut, immune system and other body systems is achieved in CogPrime in a more explicit
way, using the OpenPsi model of motivated cognition and its integration with the system's
attention allocation dynamics.
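To make the coupling described above somewhat more concrete, here is a minimal Python sketch. It is not the actual OpenCog API: the classes AtomStore, PerceptionHierarchy and MotivationSystem, and all of their methods, are hypothetical stand-ins for the Atomspace, a DeSTIN-like hierarchy and OpenPsi respectively, illustrating how perceptual salience and motivational urges might jointly drive attention allocation in a generic knowledge store.

```python
# Illustrative sketch only; not the OpenCog/CogPrime codebase. All names are
# hypothetical stand-ins for the components described in the text above.

class AtomStore:
    """Generic weighted knowledge store, loosely analogous to the Atomspace."""
    def __init__(self):
        self.attention = {}            # atom name -> short-term importance

    def stimulate(self, atom, amount):
        """Boost an atom's short-term importance."""
        self.attention[atom] = self.attention.get(atom, 0.0) + amount

    def decay(self, rate=0.1):
        """Let unattended atoms fade, freeing attentional resources."""
        for atom in self.attention:
            self.attention[atom] *= (1.0 - rate)


class PerceptionHierarchy:
    """Stand-in for a DeSTIN-like hierarchy: turns raw input into abstract percepts."""
    def perceive(self, raw_input):
        # A real hierarchy would run layers of spatiotemporal pattern nodes;
        # here we simply return labelled percepts with salience scores.
        return [("face_detected", 0.8), ("loud_noise", 0.3)]


class MotivationSystem:
    """OpenPsi-style urges that weight how strongly percepts grab attention."""
    def __init__(self, urges):
        self.urges = urges             # e.g. {"social": 0.9, "safety": 0.4}

    def relevance(self, percept):
        # Hypothetical mapping from percepts to the urges they serve.
        mapping = {"face_detected": "social", "loud_noise": "safety"}
        return self.urges.get(mapping.get(percept, ""), 0.1)


# One cognitive cycle: perception and motivation jointly shape attention.
store = AtomStore()
eyes = PerceptionHierarchy()
psi = MotivationSystem({"social": 0.9, "safety": 0.4})

for percept, salience in eyes.perceive(raw_input=None):
    store.stimulate(percept, salience * psi.relevance(percept))
store.decay()
print(store.attention)   # e.g. {'face_detected': 0.648, 'loud_noise': 0.108}
```

The design point the sketch is meant to convey is that the generic store itself stays unbiased; the bias enters through the specialized subsystems coupled to it, in the spirit of the third and fourth options above.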
Likely there is no single correct way to incorporate the lessons of intelligent human body-
system networks into AGI designs. But these are aspects of human cognition that all AGI
researchers should be aware of.