HOUSE_OVERSIGHT_013149.jpg


Extraction Summary

People: 1
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 3

Document Information

Type: Academic text / book page (included in government oversight file)
File Size: 2.31 MB
Summary

This document is page 233 of an academic text regarding Artificial General Intelligence (AGI). It discusses the differences between 'Coherent Extrapolated Volition' (CEV) and 'Coherent Blended Volition,' arguing for human collaboration mediated by 'Global Brain technologies' over machine optimization. It also explores Stephen Omohundro's arguments regarding the safety benefits of creating a society of interacting AGIs rather than a single entity to mitigate risks, while acknowledging the dangers of a 'hard takeoff.' The page bears a House Oversight Bates stamp.

People (1)

Name: Stephen Omohundro
Role: Researcher / Author
Context: Cited in the text regarding game-theoretic dynamics and AGI populations.

Organizations (2)

Name: Singularity Institute
Context: Referenced in the footnote URL (singinst.org).

Name: House Oversight Committee
Context: Inferred from the Bates stamp 'HOUSE_OVERSIGHT'.

Relationships (1)

Subject: Author (Unspecified)
Type: Academic Citation
Object: Stephen Omohundro
Context: The author cites Omohundro's argument regarding interacting AGI systems.

Key Quotes (3)

"The core difference between the two approaches is that in the CEV vision, the extrapolation and coherentization are to be done by a highly intelligent, highly specialized software program, whereas in the approach suggested here, these are to be carried out by collective activity of humans as mediated by Global Brain technologies."
Source
HOUSE_OVERSIGHT_013149.jpg
Quote #1
"Roughly speaking, if one has a society of AGIs rather than a single AGI, and all the members of the society share roughly similar ethics, then if one AGI starts to go "off the rails", its compatriots will be in a position to correct its behavior."
Source
HOUSE_OVERSIGHT_013149.jpg
Quote #2
"Of course, a society of AGIs is no protection against a single member undergoing a "hard takeoff" and drastically accelerating its intelligence simultaneously with shifting its ethical principles."
Source
HOUSE_OVERSIGHT_013149.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,717 characters)

12.8 Possible Benefits of Creating Societies of AGIs 233
we wish that extrapolated, interpreted as we wish that interpreted.
While a moving humanistic vision, this seems to us rather difficult to implement in a computer algorithm in a compellingly "right" way. It seems that there would be many different ways of implementing it, and the choice between them would involve multiple, highly subtle and non-rigorous human judgment calls 1. However, if a deep collective process of interactive scenario analysis and sharing is carried out, in order to arrive at some sort of Coherent Blended Volition, this process may well involve many of the same kinds of extrapolation that are conceived to be part of Coherent Extrapolated Volition. The core difference between the two approaches is that in the CEV vision, the extrapolation and coherentization are to be done by a highly intelligent, highly specialized software program, whereas in the approach suggested here, these are to be carried out by collective activity of humans as mediated by Global Brain technologies. Our perspective is that the definition of collective human values is probably better carried out via a process of human collaboration, rather than delegated to a machine optimization process; and also that the creation of deep-sharing-oriented Internet technologies, while a difficult task, is significantly easier and more likely to be done in the near future than the creation of narrow AI technology capable of effectively performing CEV style extrapolations.
12.8 Possible Benefits of Creating Societies of AGIs
One potentially interesting quality of the emerging Global Brain is the possible presence within it of multiple interacting AGI systems. Stephen Omohundro [Omo09] has argued that this is an important aspect, and that game-theoretic dynamics related to populations of roughly equally powerful agents, may play a valuable role in mitigating the risks associated with advanced AGI systems. Roughly speaking, if one has a society of AGIs rather than a single AGI, and all the members of the society share roughly similar ethics, then if one AGI starts to go "off the rails", its compatriots will be in a position to correct its behavior.
One may argue that this is actually a hypothesis about which AGI designs are safest, because a "community of AGIs" may be considered a single AGI with an internally community-like design. But the matter is a little subtler than that, if one considers AGI systems embedded in the Global Brain and human society. Then there is some substance to the notion of a population of AGIs systematically presenting themselves to humans and non-AGI software processes as separate entities.
Of course, a society of AGIs is no protection against a single member undergoing a "hard takeoff" and drastically accelerating its intelligence simultaneously with shifting its ethical principles. In this sort of scenario, one could have a single AGI rapidly become much more powerful and very differently oriented than the others, who would be left impotent to act so as to preserve their values. But this merely defers the issue to the point to be considered below, regarding "takeoff speed."
The operation of an AGI society may depend somewhat sensitively on the architectures of the AGI systems in question. Things will work better if the AGIs have a relatively easy way to inspect and comprehend much of the contents of each others' minds. This introduces a bias toward AGIs that more heavily rely on more explicit forms of knowledge representation.
1 The reader is encouraged to look at the original CEV essay online (http://singinst.org/upload/CEV.html) and make their own assessment.
HOUSE_OVERSIGHT_013149
