HOUSE_OVERSIGHT_016850.jpg


Extraction Summary

People: 4
Organizations: 2
Locations: 2
Events: 1
Relationships: 2
Quotes: 4

Document Information

Type: Academic text / book page
File Size: 2.5 MB
Summary

The text argues against the creation of artificial conscious agents, suggesting that humanity has a surplus of natural conscious agents and only requires intelligent tools without rights or feelings. It explores the philosophical and legal difficulties of treating AI as morally responsible agents capable of signing binding contracts, noting that their lack of vulnerability and mortality makes credible commitment impossible. The author recounts a seminar challenge regarding robot autonomy and references works by Joanna J. Bryson.

People (4)

Organizations (2)

Tufts (university; site of the co-taught seminar)
John Benjamins (publisher of the cited Bryson chapter)

Timeline (1 event)

Seminar at Tufts on artificial agents and autonomy

Locations (2)

Relationships (2)

Key Quotes (4)

1. "We don’t need artificial conscious agents."
2. "Tools do not have rights, and should not have feelings that could be hurt"
3. "Give me the specs for a robot that could sign a binding contract with you—not as a surrogate for some human owner but on its own."
4. "The problem for robots who might want to attain such an exalted status is that, like Superman, they are too invulnerable to be able to make a credible promise."

Source: HOUSE_OVERSIGHT_016850.jpg

Full Extracted Text

Complete text extracted from the document (3,898 characters)

is an ugly talent, reeking of racism or species-ism. Many people would find the
cultivation of such a ruthlessly skeptical approach morally repugnant, and we can
anticipate that even the most proficient system-users would occasionally succumb to the
temptation to “befriend” their tools, if only to assuage their discomfort with the execution
of their duties. No matter how scrupulously the AI designers launder the phony “human”
touches out of their wares, we can expect novel habits of thought, conversational gambits
and ruses, traps and bluffs to arise in this novel setting for human action. The comically
long lists of known side effects of new drugs advertised on television will be dwarfed by
the obligatory revelations of the sorts of questions that cannot be responsibly answered
by particular systems, with heavy penalties for those who “overlook” flaws in their
products. It is widely noted that a considerable part of the growing economic inequality
in today’s world is due to the wealth accumulated by digital entrepreneurs; we should
enact legislation that puts their deep pockets in escrow for the public good. Some of the
deepest pockets are voluntarily out in front of these obligations to serve society first and
make money secondarily, but we shouldn’t rely on good will alone.
We don’t need artificial conscious agents. There is a surfeit of natural conscious
agents, enough to handle whatever tasks should be reserved for such special and
privileged entities. We need intelligent tools. Tools do not have rights, and should not
have feelings that could be hurt, or be able to respond with resentment to “abuses” rained
on them by inept users.12 One of the reasons for not making artificial conscious agents is
that however autonomous they might become (and in principle, they can be as
autonomous, as self-enhancing or self-creating, as any person), they would not—without
special provision, which might be waived—share with us natural conscious agents our
vulnerability or our mortality.
I once posed a challenge to students in a seminar at Tufts I co-taught with
Matthias Scheutz on artificial agents and autonomy: Give me the specs for a robot that
could sign a binding contract with you—not as a surrogate for some human owner but on
its own. This isn’t a question of getting it to understand the clauses or manipulate a pen
on a piece of paper but of having and deserving legal status as a morally responsible
agent. Small children can’t sign such contracts, nor can those disabled people whose
legal status requires them to be under the care and responsibility of guardians of one sort
or another. The problem for robots who might want to attain such an exalted status is
that, like Superman, they are too invulnerable to be able to make a credible promise. If
they were to renege, what would happen? What would be the penalty for promise-
breaking? Being locked in a cell or, more plausibly, dismantled? Being locked up is
barely an inconvenience for an AI unless we first install artificial wanderlust that cannot
be ignored or disabled by the AI on its own (and it would be systematically difficult to
make this a foolproof solution, given the presumed cunning and self-knowledge of the
AI); and dismantling an AI (either a robot or a bedridden agent like Watson) is not killing
it, if the information stored in its design and software is preserved. The very ease of
digital recording and transmitting—the breakthrough that permits software and data to be,
12 Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagement with Artificial Companions, Yorick
Wilks, ed., (Amsterdam, The Netherlands: John Benjamins, 2010), pp. 63-74;
http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html.
________, “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,”
https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf.
47
HOUSE_OVERSIGHT_016850
