HOUSE_OVERSIGHT_013126.jpg

1.93 MB

Extraction Summary

People: 0
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic paper / book chapter (House Oversight Committee production)
File Size: 1.93 MB
Summary

A page from an academic text (likely a book chapter titled 'The Engineering and Development of Ethics') discussing Artificial General Intelligence (AGI). The text outlines risks of human-like ethical systems in AGI and proposes an explicit goal system (referencing 'CogPrime' and 'Ubergoals'). It also discusses 'Ethical Synergy' relating to episodic, sensorimotor, and declarative memory. The document bears a House Oversight Bates stamp, suggesting it was collected during an investigation, likely related to Epstein's funding of scientific research or AGI projects.

Organizations (2)

CogPrime: Referenced as the system the authors are working on, utilizing 'Ubergoals'.
House Oversight Committee: Implied by the Bates stamp 'HOUSE_OVERSIGHT_013126'.

Key Quotes (3)

Source: HOUSE_OVERSIGHT_013126.jpg

Quote #1: "Humans are not all that consistently ethical, so that creating AGI systems potentially much more practically powerful than humans, but with closely humanlike ethical, motivational and goal systems, could in fact be quite dangerous"
Quote #2: "The course we tentatively recommend, and are following in our own work, is to develop AGI systems with explicit, hierarchically-dominated goal systems."
Quote #3: "One of the more novel ideas presented in this chapter is that different types of ethical intuition may be associated with different types of memory"

Full Extracted Text

Complete text extracted from the document (3,126 characters)

12 The Engineering and Development of Ethics (p. 210)
We realize this point may be somewhat contentious – a counter-argument would be that
the human brain is known to support at least moderately ethical behavior, according to human
ethical standards, whereas less brain-like AGI systems are much less well understood. However,
the obvious counter-counterpoints are that:
• Humans are not all that consistently ethical, so that creating AGI systems potentially much
more practically powerful than humans, but with closely humanlike ethical, motivational
and goal systems, could in fact be quite dangerous
• The effect on a human-like ethical/motivational/goal system of increasing the intelligence,
or changing the physical embodiment or cognitive capabilities, of the agent containing the
system, is unknown and difficult to predict given all the complexities involved
The course we tentatively recommend, and are following in our own work, is to develop AGI
systems with explicit, hierarchically-dominated goal systems. That is:
• create one or more "top goals" (we call them Ubergoals in CogPrime)
• have the system derive subgoals from these, using its own intelligence, potentially guided
by educational interaction or explicit programming
• have a significant percentage of the system’s activity governed by the explicit pursuit of
these goals
Note that the "significant percentage" need not be 100%; CogPrime, for example, combines
explicitly goal-directed activity with other "spontaneous" activity. Requiring that all activity
be explicitly goal-directed may be too strict a requirement to place on AGI architectures.
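The three-step scheme above (top goals, derived subgoals, partly goal-governed activity) can be sketched as a minimal data structure. This is an illustrative toy, not CogPrime's actual representation; the names, weights, and `derive_subgoal` helper are our assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in an explicit, hierarchically-dominated goal system.

    All names and weights here are illustrative assumptions."""
    name: str
    weight: float = 1.0  # share of system activity this goal governs
    subgoals: list["Goal"] = field(default_factory=list)

    def derive_subgoal(self, name: str, weight: float) -> "Goal":
        # In the scheme above, the system would derive subgoals itself,
        # guided by educational interaction or explicit programming.
        sub = Goal(name, weight)
        self.subgoals.append(sub)
        return sub

# One top goal (an "Ubergoal") dominating the hierarchy.
ubergoal = Goal("human-friendliness")
ubergoal.derive_subgoal("avoid causing harm", 0.5)
ubergoal.derive_subgoal("assist when asked", 0.25)

# The "significant percentage" of goal-directed activity need not be 100%;
# the remainder is left for "spontaneous" activity.
goal_directed = sum(g.weight for g in ubergoal.subgoals)
spontaneous = 1.0 - goal_directed
```

Keeping `goal_directed` below 1.0 mirrors the note above: requiring that all activity be explicitly goal-directed may be too strict.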
The next step, of course, is for the top-level goals to be chosen in accordance with the
principle of human-Friendliness. The next one of our eight points, about the Global Brain,
addresses one way of doing this. In our near-term work with CogPrime, we are using simplistic
approaches, with a view toward early-stage system testing.
12.4 Ethical Synergy
An explicit goal system provides an explicit way to ensure that ethical principles (as represented
in system goals) play a significant role in guiding an AGI system’s behavior. However, in an
integrative design like CogPrime the goal system is only a small part of the overall story,
and it’s important to also understand how ethics relates to the other aspects of the cognitive
architecture.
One of the more novel ideas presented in this chapter is that different types of ethical intuition
may be associated with different types of memory – and to possess mature ethics, a mind
must display ethical synergy between the ethical processes associated with its memory types.
Specifically, we suggest that:
• Episodic memory corresponds to the process of ethically assessing a situation based on
similar prior situations
• Sensorimotor memory corresponds to "mirror neuron" type ethics, where you feel another
person’s feelings via mirroring their physiological emotional responses and actions
• Declarative memory corresponds to rational ethical judgment
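The three correspondences above can be summarized as a small lookup, with "ethical synergy" read as the joint presence of all three processes. A toy sketch under our own simplifying labels (not the chapter's notation):

```python
# Hypothetical mapping from memory type to the ethical process the
# chapter associates with it; all labels are illustrative assumptions.
ETHICAL_PROCESSES = {
    "episodic": "assess a situation against similar prior situations",
    "sensorimotor": "mirror-neuron empathy via mirrored responses",
    "declarative": "rational ethical judgment",
}

def has_ethical_synergy(memory_types):
    """True only if the processes for all three memory types are present
    (a simplifying reading of 'ethical synergy')."""
    return set(ETHICAL_PROCESSES) <= set(memory_types)
```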
HOUSE_OVERSIGHT_013126
