HOUSE_OVERSIGHT_016891.jpg (2.25 MB)

Extraction Summary

People: 4
Organizations: 1
Locations: 0
Events: 0
Relationships: 1
Quotes: 2

Document Information

Type: Book excerpt / essay (included in House Oversight Committee investigation files)
File Size: 2.25 MB
Summary

This document appears to be page 88 of a philosophical essay or book chapter on artificial general intelligence (AGI). The text contrasts narrow AI (specifically chess engines) with AGI, arguing that true AGI implies creativity, the ability to refuse tasks, and moral agency. The author rebuts a claim from Daniel Dennett's essay in the same volume, arguing that an AGI can in fact be punished or held accountable (for example, through fines that restrict its access to physical resources) and that an AGI raised in a society of decent citizens is not destined to become an enemy of civilization. The document bears a House Oversight Committee Bates stamp, suggesting it was gathered as evidence, likely in connection with investigations into Epstein's ties to scientists and the Edge Foundation.

People (4)

Alan Turing (Historical Figure): Mentioned regarding his first design for a chess-playing AI in 1948.
Daniel Dennett (Philosopher / Author): Referenced as having an essay in the same volume; quoted regarding the impossibility of punishing an AGI.
Superman (Fictional Character): Used as an analogy by Daniel Dennett to describe the invulnerability of AGI.
William Blake (Poet): Quoted regarding "mind-forg’d manacles".

Organizations (1)

House Oversight Committee (U.S. congressional committee): Implied by the Bates stamp 'HOUSE_OVERSIGHT_016891', indicating this document was part of a congressional investigation.

Relationships (1)

Author (Unknown) and Daniel Dennett (Professional/Academic): The author references Dennett's essay "for this volume," suggesting they are contributors to the same book or collection.

Key Quotes (2)

"[L]ike Superman, they are too invulnerable to be able to make a credible promise. . . ."
Source
— Daniel Dennett (Quoted within the text arguing that AGI cannot be punished.)
HOUSE_OVERSIGHT_016891.jpg
Quote #1
"mind-forg’d manacles"
Source
— William Blake (Quoted by the author to describe self-imposed irrational limitations.)
HOUSE_OVERSIGHT_016891.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,431 characters)

search exhaustively. Every improvement in chess-playing AIs, between Alan Turing’s first design for one in 1948 and today’s, has been brought about by ingeniously confining the program’s attention (or making it confine its attention) ever more narrowly to branches likely to lead to that immutable goal. Then those branches are evaluated according to that goal.
That is a good approach to developing an AI with a fixed goal under fixed constraints. But if an AGI worked like that, the evaluation of each branch would have to constitute a prospective reward or threatened punishment. And that is diametrically the wrong approach if we’re seeking a better goal under unknown constraints—which is the capability of an AGI. An AGI is certainly capable of learning to win at chess—but also of choosing not to. Or deciding in mid-game to go for the most interesting continuation instead of a winning one. Or inventing a new game. A mere AI is incapable of having any such ideas, because the capacity for considering them has been designed out of its constitution. That disability is the very means by which it plays chess.
An AGI is capable of enjoying chess, and of improving at it because it enjoys playing. Or of trying to win by causing an amusing configuration of pieces, as grand masters occasionally do. Or of adapting notions from its other interests to chess. In other words, it learns and plays chess by thinking some of the very thoughts that are forbidden to chess-playing AIs.
An AGI is also capable of refusing to display any such capability. And then, if threatened with punishment, of complying, or rebelling. Daniel Dennett, in his essay for this volume, suggests that punishing an AGI is impossible:
[L]ike Superman, they are too invulnerable to be able to make a credible promise. . . . What would be the penalty for promise-breaking? Being locked in a cell or, more plausibly, dismantled? . . . The very ease of digital recording and transmitting—the breakthrough that permits software and data to be, in effect, immortal—removes robots from the world of the vulnerable. . . .
But this is not so. Digital immortality (which is on the horizon for humans, too, perhaps sooner than AGI) does not confer this sort of invulnerability. Making a (running) copy of oneself entails sharing one’s possessions with it somehow—including the hardware on which the copy runs—so making such a copy is very costly for the AGI. Similarly, courts could, for instance, impose fines on a criminal AGI which would diminish its access to physical resources, much as they do for humans. Making a backup copy to evade the consequences of one’s crimes is similar to what a gangster boss does when he sends minions to commit crimes and take the fall if caught: Society has developed legal mechanisms for coping with this.
But anyway, the idea that it is primarily for fear of punishment that we obey the law and keep promises effectively denies that we are moral agents. Our society could not work if that were so. No doubt there will be AGI criminals and enemies of civilization, just as there are human ones. But there is no reason to suppose that an AGI created in a society consisting primarily of decent citizens, and raised without what William Blake called “mind-forg’d manacles,” will in general impose such manacles on itself (i.e., become irrational) and/or choose to be an enemy of civilization.
88
HOUSE_OVERSIGHT_016891
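
Illustration: Fixed-Goal Search

The passage above describes how chess-playing AIs work: confine attention to promising branches of the game tree, then evaluate each branch against a single immutable goal. A minimal sketch of that scheme is depth-limited minimax with alpha-beta pruning. The code below is illustrative only, not from the document, and assumes a hypothetical Game interface (legal_moves, apply, is_terminal, evaluate):

    def alphabeta(state, depth, alpha, beta, maximizing, game):
        # Depth-limited minimax with alpha-beta pruning (a standard sketch;
        # `game` is a hypothetical interface, not the document's own code).
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state)  # the fixed, designed-in goal
        if maximizing:
            best = float("-inf")
            for move in game.legal_moves(state):
                value = alphabeta(game.apply(state, move), depth - 1,
                                  alpha, beta, False, game)
                best = max(best, value)
                alpha = max(alpha, best)
                if alpha >= beta:
                    break  # prune: branch provably cannot improve on the goal
            return best
        best = float("inf")
        for move in game.legal_moves(state):
            value = alphabeta(game.apply(state, move), depth - 1,
                              alpha, beta, True, game)
            best = min(best, value)
            beta = min(beta, best)
            if beta <= alpha:
                break  # prune the symmetric case for the minimizing player
        return best

Nothing in this loop can decide that winning is not worth pursuing: the objective is baked into evaluate, which is exactly the designed-in disability the author calls "the very means by which it plays chess."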
