HOUSE_OVERSIGHT_013075.jpg

Extraction Summary

People: 0
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic literature / book page
File Size: 2.22 MB
Summary

This document is page 159 of an academic text on Artificial General Intelligence (AGI). It introduces the notion of 'tricky cognitive synergy,' arguing that components built for a tightly integrated, human-level AGI are harder to create than components for simpler, narrow AI systems, and that a lack of impressive intermediate results therefore does not necessarily indicate a wrong development path. The page discusses CogPrime, an AGI architecture, and bears the stamp 'HOUSE_OVERSIGHT_013075,' indicating it was among the documents reviewed by the House Oversight Committee, likely in relation to Jeffrey Epstein's funding of or interest in scientific research.

Organizations (2)

• CogPrime
• AGI community

Key Quotes (3)

"The tricky cognitive synergy hypothesis would be true if, for example... creating components to serve as parts of a synergetic AGI is harder than creating components intended to serve as parts of simpler AI systems without synergetic dynamics"
Source
HOUSE_OVERSIGHT_013075.jpg
Quote #1
"In a CogPrime context, these possibilities ring true, in the sense that tailoring an AI process for tight integration with other AI processes within CogPrime, tends to require more work than preparing a conceptually similar AI process for use on its own"
Source
HOUSE_OVERSIGHT_013075.jpg
Quote #2
"Lack of impressive intermediary results may not imply one is on a wrong development path; and comparison with narrow AI systems on specific tasks may be badly misleading as a gauge of incremental progress toward human-level AGI."
Source
HOUSE_OVERSIGHT_013075.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,591 characters)

8.7 Is Cognitive Synergy Tricky? (p. 159)
cognitive synergy hypothesis becomes complex, but here we're only aiming for a qualitative argument. So for illustrative purposes, we'll stick with the "10 components" example, just for communicative simplicity.

Next, let's suppose that for any given task, there are ways to achieve this task using a system that is much simpler than any subset of size 6 drawn from the set of 10 components needed for human-level AGI, but works much better for the task than this subset of 6 components (assuming the latter are used as a set of only 6 components, without the other 4 components).
Note that this supposition is a good bit stronger than mere cognitive synergy. For lack of a better name, we'll call it tricky cognitive synergy. The tricky cognitive synergy hypothesis would be true if, for example, the following possibilities were true:

• creating components to serve as parts of a synergetic AGI is harder than creating components intended to serve as parts of simpler AI systems without synergetic dynamics
• components capable of serving as parts of a synergetic AGI are necessarily more complicated than components intended to serve as parts of simpler AGI systems.
These certainly seem reasonable possibilities, since to serve as a component of a synergetic AGI system, a component must have the internal flexibility to usefully handle interactions with a lot of other components as well as to solve the problems that come its way. In a CogPrime context, these possibilities ring true, in the sense that tailoring an AI process for tight integration with other AI processes within CogPrime, tends to require more work than preparing a conceptually similar AI process for use on its own or in a more task-specific narrow AI system.
It seems fairly obvious that, if tricky cognitive synergy really holds up as a property of human-level general intelligence, the difficulty of formulating tests for intermediate progress toward human-level AGI follows as a consequence. Because, according to the tricky cognitive synergy hypothesis, any test is going to be more easily solved by some simpler narrow AI process than by a partially complete human-level AGI system.
8.7.3 Conclusion
We haven't proved anything here, only made some qualitative arguments. However, these arguments do seem to give a plausible explanation for the empirical observation that positing tests for intermediate progress toward human-level AGI is a very difficult prospect. If the theoretical notions sketched here are correct, then this difficulty is not due to incompetence or lack of imagination on the part of the AGI community, nor due to the primitive state of the AGI field, but is rather intrinsic to the subject matter. And if these notions are correct, then quite likely the future rigorous science of AGI will contain formal theorems echoing and improving the qualitative observations and conjectures we've made here.
If the ideas sketched here are true, then the practical consequence for AGI development is, very simply, that one shouldn't worry a lot about producing intermediary results that are compelling to skeptical observers. Just as 2/3 of a human brain may not be of much use, similarly, 2/3 of an AGI system may not be much use. Lack of impressive intermediary results may not imply one is on a wrong development path; and comparison with narrow AI systems on specific tasks may be badly misleading as a gauge of incremental progress toward human-level AGI.
HOUSE_OVERSIGHT_013075
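
Illustrative Sketch

The "10 components, subset of size 6" supposition in the extracted text can be made concrete with a minimal toy scoring model, assuming component contributions add up and a synergy bonus appears only once all ten components are present. Every name and number below (per-component score, synergy bonus, narrow-AI baseline) is a hypothetical assumption chosen to illustrate the qualitative argument; none of it comes from the source document.

# Toy model of the "tricky cognitive synergy" supposition (illustrative only).
# Assumption (not from the source): component contributions are additive, and a
# large synergy bonus appears only when ALL ten components are present.

N_COMPONENTS = 10          # the chapter's "10 components" example
PER_COMPONENT_SCORE = 1.0  # hypothetical contribution of each component
SYNERGY_BONUS = 20.0       # hypothetical payoff requiring the full set
NARROW_AI_SCORE = 8.0      # hypothetical score of a simple task-specific system

def partial_agi_score(subset_size: int) -> float:
    """Task score of a partial AGI built from `subset_size` of the 10 components."""
    score = subset_size * PER_COMPONENT_SCORE
    if subset_size == N_COMPONENTS:  # synergy only pays off with every part in place
        score += SYNERGY_BONUS
    return score

for k in (6, 9, 10):
    print(f"{k}/10 components: {partial_agi_score(k):.1f}")
print(f"narrow AI baseline: {NARROW_AI_SCORE:.1f}")

# Under these assumptions every partial system (even 9/10) scores below the
# narrow baseline, yet the complete system far exceeds it; this is why
# benchmark comparisons against narrow AI can mislead about AGI progress.

In this toy setting, a benchmark that ranks the 6-of-10 (or even 9-of-10) system against the narrow baseline would report no apparent progress at all, mirroring the chapter's point that such comparisons can be badly misleading as gauges of incremental progress toward human-level AGI.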
