HOUSE_OVERSIGHT_013074.jpg

Extraction Summary

People: 2
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic book/paper excerpt (evidence file)
File Size: 2.25 MB
Summary

A page from an academic text (page 158) discussing Artificial General Intelligence (AGI). It surveys candidate practical tests of progress toward AGI, such as the Wozniak 'coffee test', and discusses why objective measurement of intermediate progress is difficult. The text proposes the 'cognitive synergy' hypothesis as one reason such tests may be infeasible, suggesting that human-level AGI depends on the synergetic interaction of multiple components (citing CogPrime as an example architecture) rather than on isolated skills. The page bears a House Oversight footer stamp.

People (2)

Name | Role | Context
Wozniak | Tech figure (referenced) | Referenced in the context of the 'Wozniak coffee test' for AGI.
AGI Researchers | Group | Mentioned as having differing perspectives on AGI testing.

Organizations (2)

Name | Type | Context
CogPrime | AGI architecture | Mentioned as an example of the cognitive synergy hypothesis.
House Oversight Committee | Congressional committee | Implied by the footer stamp 'HOUSE_OVERSIGHT_013074'.

Key Quotes (3)

"The Wozniak "coffee test": go into an average American house and figure out how to make coffee..."
Source
HOUSE_OVERSIGHT_013074.jpg
Quote #1
"The cognitive synergy hypothesis, in its simplest form, states that human-level AGI intrinsically depends on the synergetic interaction of multiple components"
Source
HOUSE_OVERSIGHT_013074.jpg
Quote #2
"Why might a solid, objective empirical test for intermediate progress toward AGI be an infeasible notion? One possible reason, we suggest, is precisely cognitive synergy"
Source
HOUSE_OVERSIGHT_013074.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,547 characters)

158 8 Cognitive Synergy
• The Wozniak "coffee test": go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.
• Story understanding – reading a story, or watching it on video, and then answering questions about what happened (including questions at various levels of abstraction)
• Graduating (virtual-world or robotic) preschool
• Passing the elementary school reading curriculum (which involves reading and answering questions about some picture books as well as purely textual ones)
• Learning to play an arbitrary video game based on experience only, or based on experience plus reading instructions
One interesting point about tests like this is that each of them seems to some AGI researchers to encapsulate the crux of the AGI problem, and be unsolvable by any system not far along the path to human-level AGI – yet seems to other AGI researchers, with different conceptual perspectives, to be something probably game-able by narrow-AI methods. And of course, given the current state of science, there’s no way to tell which of these practical tests really can be solved via a narrow-AI approach, except by having a lot of people try really hard over a long period of time.
A question raised by these observations is whether there is some fundamental reason why it’s hard to make an objective, theory-independent measure of intermediate progress toward advanced AGI. Is it just that we haven’t been smart enough to figure out the right test – or is there some conceptual reason why the very notion of such a test is problematic?
We don’t claim to know for sure – but in the rest of this section we’ll outline one possible reason why the latter might be the case.
8.7.2 A Possible Answer: Cognitive Synergy is Tricky!
Why might a solid, objective empirical test for intermediate progress toward AGI be an infeasible notion? One possible reason, we suggest, is precisely cognitive synergy, as discussed above.
The cognitive synergy hypothesis, in its simplest form, states that human-level AGI intrinsically depends on the synergetic interaction of multiple components (for instance, as in CogPrime, multiple memory systems each supplied with its own learning process). In this hypothesis, for instance, it might be that there are 10 critical components required for a human-level AGI system. Having all 10 of them in place results in human-level AGI, but having only 8 of them in place results in having a dramatically impaired system – and maybe having only 6 or 7 of them in place results in a system that can hardly do anything at all.
Of course, the reality is almost surely not as strict as the simplified example in the above paragraph suggests. No AGI theorist has really posited a list of 10 crisply-defined subsystems and claimed them necessary and sufficient for AGI. We suspect there are many different routes to AGI, involving integration of different sorts of subsystems. However, if the cognitive synergy hypothesis is correct, then human-level AGI behaves roughly like the simplistic example in the prior paragraph suggests. Perhaps instead of using the 10 components, you could achieve human-level AGI with 7 components, but having only 5 of these 7 would yield drastically impaired functionality – etc. Or the point could be made without any decomposition into a finite set of components, using continuous probability distributions. To mathematically formalize the
HOUSE_OVERSIGHT_013074
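
Illustrative Note

The excerpt cuts off just as the authors begin to formalize the cognitive synergy hypothesis, so the following is only an illustrative sketch in our own notation, not the formalization from the source. One crude way to capture the "dramatically impaired" behavior the passage describes is to let c_i ∈ {0, 1} indicate whether critical component i of n is present and to model overall capability as a sharply nonlinear function of the fraction of components in place:

C = \left( \frac{1}{n} \sum_{i=1}^{n} c_i \right)^{k}, \qquad k \gg 1

For n = 10 and, say, k = 20, a system with all 10 components has C = 1, while a system with only 8 of them has C = 0.8^20 ≈ 0.01, roughly 1% of full capability, consistent with the text's claim that a system missing even a few critical components is drastically impaired.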
