HOUSE_OVERSIGHT_013157.jpg

2.3 MB

Extraction Summary

People: 1
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic paper / book excerpt (house oversight evidence)
File Size: 2.3 MB
Summary

This document appears to be a page from a technical paper or book on Artificial General Intelligence (AGI) ethics and development strategy. It argues for the measured co-advancement of AGI software and AGI ethics theory, and suggests that initiating serious AGI development sooner rather than later is the safer course, to prevent practical work from dangerously outpacing ethical theory. It references the 'CogPrime project' and bears a 'HOUSE_OVERSIGHT' footer, indicating it was produced as evidence in a congressional investigation (likely related to Jeffrey Epstein's funding of AI researchers).

People (1)

Name: The Authors
Role: Researchers/Writers
Context: Self-referenced in the text as working on the CogPrime project.

Organizations (2)

Name: CogPrime project
Type: AGI development project
Context: Mentioned as the authors' current work.

Name: House Oversight Committee
Type: Congressional committee
Context: Inferred from the footer stamp 'HOUSE_OVERSIGHT'.

Key Quotes (3)

"Somewhat ironically, it seems the best way to ensure that AGI development proceeds at a relatively measured pace is to initiate serious AGI development sooner rather than later."
Source
HOUSE_OVERSIGHT_013157.jpg
Quote #1
"Of course, the authors are doing their best in this direction via their work on the CogPrime project!"
Source
HOUSE_OVERSIGHT_013157.jpg
Quote #2
"We really want both deep-sharing GB technology and AGI technology to evolve relatively rapidly, compared to computing hardware and advanced CS algorithms"
Source
HOUSE_OVERSIGHT_013157.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,556 characters)

12.10 Conclusion: Eight Ways to Bias AGI Toward Friendliness
12.10.1 Encourage Measured Co-Advancement of AGI Software and AGI Ethics Theory
Everything involving AGI and Friendly AI (considered together or separately) currently involves significant uncertainty, and it seems likely that significant revision of current concepts will be valuable as progress on the path toward powerful AGI proceeds. However, whether there is time for such revision to occur before AGI at the human level or above is created depends on how fast our progress toward AGI is. What one wants is for progress to be slow enough that, at each stage of intelligence advance, concepts such as those discussed in this paper can be re-evaluated and re-analyzed in the light of the data gathered, and AGI designs and approaches can be revised accordingly as necessary.
However, due to the nature of modern technology development, it seems extremely unlikely that AGI development is going to be artificially slowed down in order to enable measured development of accompanying ethical tools, practices and understandings. For example, if one nation chose to enforce such a slowdown as a matter of policy (speaking about a future date at which substantial AGI progress has already been demonstrated, so that international AGI funding is dramatically increased from present levels), the odds seem very high that other nations would explicitly seek to accelerate their own progress on AGI, so as to reap the ensuing differential economic benefits (the example of stem cells arises again).
And this leads on to our next and final point regarding strategy for biasing AGI toward Friendliness....
12.10.2 Develop Advanced AGI Sooner Not Later
Somewhat ironically, it seems the best way to ensure that AGI development proceeds at a relatively measured pace is to initiate serious AGI development sooner rather than later. This is because the same AGI concepts will meet slower practical development today than 10 years from now, and slower 10 years from now than 20 years from now, etc. – due to the ongoing rapid advancement of various tools related to AGI development, such as computer hardware, programming languages, and computer science algorithms; and also the ongoing global advancement of education which makes it increasingly cost-effective to recruit suitably knowledgeable AI developers.
Currently the pace of AGI progress is sufficiently slow that practical work is in no danger of outpacing associated ethical theorizing. However, if we want to avoid the future occurrence of this sort of dangerous outpacing, our best practical choice is to make sure more substantial AGI development occurs in the phase before the development of tools that will make AGI development extraordinarily rapid. Of course, the authors are doing their best in this direction via their work on the CogPrime project!
Furthermore, this point bears connecting with the need, raised above, to foster the development of Global Brain technologies able to "Foster Deep, Consensus-Building Interactions Between People with Divergent Views." If this sort of technology is to be maximally valuable, it should be created quickly enough that we can use it to help shape the goal system content of the first highly powerful AGIs. So, to simplify just a bit: we really want both deep-sharing GB technology and AGI technology to evolve relatively rapidly compared to computing hardware and advanced CS algorithms (since the latter factors will be the main drivers behind the ac-
HOUSE_OVERSIGHT_013157
