HOUSE_OVERSIGHT_013233.jpg

Extraction Summary

People: 0
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Scientific/academic book chapter or paper
File Size: 1.87 MB
Summary

This document appears to be page 317 of a scientific book or academic work, the opening page of Chapter 18, 'Advanced Self-Modification: A Possible Path to Superhuman AGI'. The text discusses the theoretical development of Artificial General Intelligence (AGI) systems capable of modifying their own source code to surpass human intelligence, and it cites 'OpenCog' code as an example of complex AI software. The page bears the Bates stamp HOUSE_OVERSIGHT_013233, indicating it was part of a document production for a House Oversight Committee investigation (likely related to Epstein's funding of science and AI research).

Organizations (2)

OpenCog
Context: Cited in the text as an example of complex code: "Understanding OpenCog code has strained the minds of many intelligent humans"

House Oversight Committee
Context: Implied by the Bates stamp "HOUSE_OVERSIGHT_013233" at the bottom of the page

Key Quotes (3)

Quote #1: "Perhaps the clearest route toward the creation of superhuman AGI systems is self-modification: the creation of AGI systems that modify and improve themselves."
Source: HOUSE_OVERSIGHT_013233.jpg

Quote #2: "And once an AGI has even mildly superhuman intelligence, it may view our attempts at programming the way we view the computer programming of a clever third grader (... or an ape)."
Source: HOUSE_OVERSIGHT_013233.jpg

Quote #3: "Understanding OpenCog code has strained the minds of many intelligent humans"
Source: HOUSE_OVERSIGHT_013233.jpg

Full Extracted Text

Complete text extracted from the document (2,864 characters)

Chapter 18
Advanced Self-Modification: A Possible Path to Superhuman AGI
18.1 Introduction
In the previous chapter we presented a roadmap aimed at taking AGI systems to human-level intelligence. But we also emphasized that the human level is not necessarily the upper limit. Indeed, it would be surprising if human beings happened to represent the maximal level of general intelligence possible, even with respect to the environments in which humans evolved.
But it's worth asking how we, as mere humans, could be expected to create AGI systems with greater intelligence than we ourselves possess. This certainly isn't a clear impossibility – but it's a thorny matter, thornier than e.g. the creation of narrow-AI chess players that play better chess than any human. Perhaps the clearest route toward the creation of superhuman AGI systems is self-modification: the creation of AGI systems that modify and improve themselves. Potentially, we could build AGI systems with roughly human-level (but not necessarily closely human-like) intelligence and the capability to gradually self-modify, and then watch them eventually become our general intellectual superiors (and perhaps our superiors in other areas like ethics and creativity as well).
Of course there is nothing new in this notion; the idea of advanced AGI systems that increase their intelligence by modifying their own source code goes back to the early days of AI. And there is little doubt that, in the long run, this is the direction AI will go in. Once an AGI has humanlike general intelligence, then the odds are high that given its ability to carry out nonhumanlike feats of memory and calculation, it will be better at programming than humans are. And once an AGI has even mildly superhuman intelligence, it may view our attempts at programming the way we view the computer programming of a clever third grader (... or an ape). At this point, it seems extremely likely that an AGI will become unsatisfied with the way we have programmed it, and opt to either improve its source code or create an entirely new, better AGI from scratch.
But what about self-modification at an earlier stage in AGI development, before one has a strongly superhuman system? Some theorists have suggested that self-modification could be a way of bootstrapping an AI system from a modest level of intelligence up to human level intelligence, but we are moderately skeptical of this avenue. Understanding software code is hard, especially complex AI code. The hard problem isn't understanding the formal syntax of the code, or even the mathematical algorithms and structures underlying the code, but rather the contextual meaning of the code. Understanding OpenCog code has strained the minds of many intelligent humans, and we suspect that such code will be comprehensible to AGI systems
317
HOUSE_OVERSIGHT_013233
