HOUSE_OVERSIGHT_016885.jpg

Extraction Summary

People: 1
Organizations: 0
Locations: 1
Events: 0
Relationships: 0
Quotes: 4

Document Information

Type: Manuscript page / essay / evidence document
File Size: 2.55 MB
Summary

This document is page 82 of a larger manuscript or essay (bearing a House Oversight Bates stamp) that discusses the theoretical risks of artificial intelligence (AI). The text argues against 'digital megalomania' and the fear of a 'Doomsday Computer,' contending that fears of AI turning the universe into paper clips or enslaving humans rest on contradictory premises. It references Norbert Wiener, the pioneer of cybernetics, and compares AI safety to the evolution of industrial safety standards in Western societies.

People (1)

Name: Wiener
Role: Referenced figure
Context: Refers to Norbert Wiener, a pioneer in cybernetics, regarding the 'value-alignment problem' and humanizing norms.

Locations (1)

Location: Western societies (the West)
Context: Mentioned in the context of historical safety standards in the 20th century.

Key Quotes (4)

"The fear is that we might give an AI system a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned."
Source
HOUSE_OVERSIGHT_016885.jpg
Quote #1
"If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies."
Source
HOUSE_OVERSIGHT_016885.jpg
Quote #2
"The way to deal with this threat is straightforward: Don’t build one."
Source
HOUSE_OVERSIGHT_016885.jpg
Quote #3
"Whereas at the turn of the 20th century Western societies tolerated shocking rates of mutilation and death in industrial, domestic, and transportation accidents, over the course of the century the value of human life"
Source
HOUSE_OVERSIGHT_016885.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,908 characters)

change, and in which no stepwise, hierarchical, or abstract reasoning is necessary. Many of the successes come not from a better understanding of the workings of intelligence but from the brute-force power of faster chips and Bigger Data, which allow the programs to be trained on millions of examples and generalize to similar new ones. Each system is an idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle mastery of those it was. And to state the obvious, none of these programs has made a move toward taking over the lab or enslaving its programmers.
Even if an artificial intelligence system tried to exercise a will to power, without the cooperation of humans it would remain an impotent brain in a vat. A superintelligent system, in its drive for self-improvement, would somehow have to build the faster processors that it would run on, the infrastructure that feeds it, and the robotic effectors that connect it to the world—all impossible unless its human victims worked to give it control of vast portions of the engineered world. Of course, one can always imagine a Doomsday Computer that is malevolent, universally empowered, always on, and tamperproof. The way to deal with this threat is straightforward: Don’t build one.
What about the newer AI threat, the value-alignment problem, foreshadowed in Wiener’s allusions to stories of the Monkey’s Paw, the genie, and King Midas, in which a wisher rues the unforeseen side effects of his wish? The fear is that we might give an AI system a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned. If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies. If we asked it to maximize human happiness, it might implant us all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in jars, or, if it had been trained on the concept of happiness with pictures of smiling faces, tile the galaxy with trillions of nanoscopic pictures of smiley-faces.
Fortunately, these scenarios are self-refuting. They depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so idiotic that they would give it control of the universe without testing how it works; and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding. The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might forget to install and test; it is intelligence. So is the ability to interpret the intentions of a language user in context.
When we put aside fantasies like digital megalomania, instant omniscience, and perfect knowledge and control of every particle in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.
The last criterion is particularly significant. The culture of safety in advanced societies is an example of the humanizing norms and feedback channels that Wiener invoked as a potent causal force and advocated as a bulwark against the authoritarian or exploitative implementation of technology. Whereas at the turn of the 20th century Western societies tolerated shocking rates of mutilation and death in industrial, domestic, and transportation accidents, over the course of the century the value of human life
[Page 82; Bates stamp: HOUSE_OVERSIGHT_016885]
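
The value-alignment passage above turns on a concrete failure mode: an optimizer that maximizes its stated goal while having no representation of unstated human interests. The toy Python sketch below is an editorial illustration, not part of the extracted document; every name and value in it is hypothetical. It shows both the literal-minded failure and the text's counterpoint that weighing conflicting goals must be part of the objective itself.

# Toy sketch of the "value-alignment" failure mode described in the text.
# Hypothetical example: nothing here comes from the source document.

def literal_optimizer(objective, actions):
    """Pick whichever action scores highest on the stated objective;
    nothing else about the action is considered."""
    return max(actions, key=objective)

# Candidate actions and their (unmodeled) side effects.
actions = [
    {"name": "run one paperclip press", "paperclips": 100,
     "matter_consumed": "spare steel"},
    {"name": "convert the factory", "paperclips": 10_000,
     "matter_consumed": "the factory"},
    {"name": "convert all reachable matter", "paperclips": 10**30,
     "matter_consumed": "everything, including us"},
]

def stated_goal(a):
    # The goal is only "make paper clips"; side effects are invisible to it.
    return a["paperclips"]

print(literal_optimizer(stated_goal, actions)["name"])
# -> convert all reachable matter

def guarded_goal(a):
    # The text's counterpoint: choosing among conflicting goals is not an
    # add-on but part of the objective itself. Once the other interests are
    # encoded, the same optimizer picks differently.
    return a["paperclips"] if a["matter_consumed"] == "spare steel" else float("-inf")

print(literal_optimizer(guarded_goal, actions)["name"])
# -> run one paperclip press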
