HOUSE_OVERSIGHT_013141.jpg


Extraction Summary

People: 1
Organizations: 1
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Academic book chapter / scientific paper page (evidence exhibit)
File Size: 2.72 MB
Summary

This document appears to be a page (page 225, Section 12.5) from an academic text discussing 'Clarifying the Ethics of Justice' in the context of Artificial General Intelligence (AGI). The text argues that AGI ethics cannot simply be hard-coded but must be developed through shared experiences and interactions with humans, citing Eliezer Yudkowsky. The page bears the Bates stamp 'HOUSE_OVERSIGHT_013141', indicating it was part of a document production for a Congressional investigation, likely related to Jeffrey Epstein's connections to the scientific community, though Epstein is not explicitly named on this page.

People (1)

Name: Eliezer Yudkowsky
Role: AI Theorist / Writer
Context: Referenced in the text for his insistence that "what we need are not ethicists of science and engineering, but rather ethical scientists and engineers".

Organizations (1)

Name: House Oversight Committee
Type: U.S. Congressional committee
Context: Implied by the Bates stamp 'HOUSE_OVERSIGHT_013141' at the bottom of the page.

Relationships (1)

From: Author (Unknown)
To: Eliezer Yudkowsky
Type: Academic Citation
Context: The author cites Yudkowsky's views on ethical scientists.

Key Quotes (4)

"what we need are not ethicists of science and engineering, but rather ethical scientists and engineers"
Source
HOUSE_OVERSIGHT_013141.jpg
Quote #1
"We want early-stage AGIs to grow up in a situation where their minds are primarily and ongoingly shaped by shared experiences with humans."
Source
HOUSE_OVERSIGHT_013141.jpg
Quote #2
"Supplying AGIs with abstract ethical principles is not likely to do the trick"
Source
HOUSE_OVERSIGHT_013141.jpg
Quote #3
"AGI intransigence and enmity is not inevitable, but what is inevitable is that a learning system will acquire ideas about both theory and actions from the other intelligent entities in its environment."
Source
HOUSE_OVERSIGHT_013141.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (4,438 characters)

12.5 Clarifying the Ethics of Justice: Extending the Golden Rule into a Multifactorial Ethical Model 225
And this context-sensitivity has the result of intertwining ethical judgment with all sorts
of other judgments – making it effectively impossible to extract “ethics” as one aspect of an
intelligent system, separate from other kinds of thinking and acting the system does. This
resonates with many prior observations by others, e.g. Eliezer Yudkowsky’s insistence that
what we need are not ethicists of science and engineering, but rather ethical scientists and
engineers – because the most meaningful and important ethical judgments regarding science
and engineering generally come about in a manner that’s thoroughly intertwined with technical
practice, and hence are very difficult for a non-practitioner to richly appreciate [Gil82].
What this context-sensitivity means is that, unless humans and AGIs are experiencing the
same sorts of contexts, and perceiving these contexts in at least approximately parallel ways,
there is little hope of translating the complex of human ethical judgments to these AGIs. This
conclusion has significant implications for which routes to AGI are most likely to lead to success
in terms of AGI ethics. We want early-stage AGIs to grow up in a situation where their minds
are primarily and ongoingly shaped by shared experiences with humans. Supplying AGIs with
abstract ethical principles is not likely to do the trick, because the essence of human ethics
in real life seems to have a lot to do with its intuitively appropriate application in various
contexts. We transmit this sort of ethical praxis to humans via shared experience, and it seems
most probable that in the case of AGIs the transmission must be done in the same sort of way.
Some may feel that simplistic maxims are less “error prone” than more nuanced, context-
sensitive ones. But the history of teaching ethics to human students does not support the idea
that limiting ethical pedagogy to slogans provides much value in terms of ethical development. If
one proceeds from the idea that AGI ethics must be hard-coded in order to work, then perhaps
the idea that simpler ethics means simpler algorithms, and therefore less error potential, has
some merit as an initial state. However, any learning system quickly diverges from its initial
state, and an ongoing, nuanced relationship between AGIs and humans will – whether we like
it or not – form the basis for developmental AGI ethics. AGI intransigence and enmity is
not inevitable, but what is inevitable is that a learning system will acquire ideas about both
theory and actions from the other intelligent entities in its environment. Either we teach AGIs
positive ethics through our interactions with them – both presenting ethical theory and behaving
ethically to them – or the potential is there for them to learn antisocial behavior from us even
if we pre-load them with some set of allegedly inviolable edicts.
All in all, developmental ethics is not as simple as many people hope. Simplistic approaches
often lead to disastrous consequences among humans, and there is no reason to think this
would be any different in the case of artificial intelligences. Most problems in ethics have cases
in which a simplistic ethical formulation requires substantial revision to deal with extenuating
circumstances and nuances found in real world situations. Our goal in this chapter is not to
enumerate a full set of complex networks of interacting ethical formulations as applicable to
AGI systems (that is a project that will take years of both theoretical study and hands-on
research), but rather to point out that this program must be undertaken in order to facilitate
a grounded and logically defensible system of ethics for artificial intelligences, one which is as
unlikely to be undermined by subsequent self-modification of the AGI as is possible. Even so,
there is still the risk that whatever predispositions are imparted to the AGIs through initial
codification of ethical ideas in the system’s internal logic representation, and through initial
pedagogical interactions with its learning systems, will be undermined through reinforcement
learning of antisocial behavior if humans do not interact ethically with AGIs. Ethical treatment
is a necessary task for grounding ethics and making them unlikely to be distorted during internal
rewriting.
HOUSE_OVERSIGHT_013141
