HOUSE_OVERSIGHT_016870.jpg


Extraction Summary

People: 4
Organizations: 3
Locations: 2
Events: 1
Relationships: 3
Quotes: 4

Document Information

Type: Book page or report excerpt
File Size: 2.43 MB
Summary

This document discusses the critical necessity of AI safety research and goal alignment before the arrival of Artificial General Intelligence (AGI). It argues that the primary risk of superintelligent AI is competence rather than malice, emphasizing that an AI's goals must be beneficial and aligned with human values to prevent catastrophic outcomes; it likens a perfectly obedient but unaligned superintelligence to an "Eichmann on steroids" that would implement its owner's goals with ruthless efficiency.

Organizations (3)

MIT
NASA
Nazi SS

Timeline (1 event)

moon-landing mission

Locations (2)


Relationships (3)


Key Quotes (4)

"Investments in AI should be accompanied by funding for research on ensuring its beneficial use."
Source
HOUSE_OVERSIGHT_016870.jpg
Quote #1
"the real risk with AGI isn’t malice but competence."
Source
HOUSE_OVERSIGHT_016870.jpg
Quote #2
"Intelligence isn’t good or evil but morally neutral."
Source
HOUSE_OVERSIGHT_016870.jpg
Quote #3
"A perfectly obedient superintelligence whose goals automatically align with those of its human owner would be like Nazi SS-Obersturmbannführer Adolf Eichmann on steroids."
Source
HOUSE_OVERSIGHT_016870.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,686 characters)

(3) Investments in AI should be accompanied by funding for research on ensuring its beneficial use. . . . How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?18
The first two involve not getting stuck in suboptimal Nash equilibria. An out-of-control arms race in lethal autonomous weapons that drives the price of automated anonymous assassination toward zero will be very hard to stop once it gains momentum. The second goal would require reversing the current trend in some Western countries where sectors of the population are getting poorer in absolute terms, fueling anger, resentment, and polarization. Unless the third goal can be met, all the wonderful AI technology we create might harm us, either accidentally or deliberately.
AI safety research must be carried out with a strict deadline in mind: Before AGI arrives, we need to figure out how to make AI understand, adopt, and retain our goals. The more intelligent and powerful machines get, the more important it becomes to align their goals with ours. As long as we build relatively dumb machines, the question isn’t whether human goals will prevail but merely how much trouble the machines can cause before we solve the goal-alignment problem. If a superintelligence is ever unleashed, however, it will be the other way around: Since intelligence is the ability to accomplish goals, a superintelligent AI is by definition much better at accomplishing its goals than we humans are at accomplishing ours, and will therefore prevail.
In other words, the real risk with AGI isn’t malice but competence. A superintelligent AGI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. People don’t think twice about flooding anthills to build hydroelectric dams, so let’s not place humanity in the position of those ants. Most researchers argue that if we end up creating superintelligence, we should make sure it’s what AI-safety pioneer Eliezer Yudkowsky has termed “friendly AI”—AI whose goals are in some deep sense beneficial.
The moral question of what these goals should be is just as urgent as the technical questions about goal alignment. For example, what sort of society are we hoping to create, where we find meaning and purpose in our lives even though we, strictly speaking, aren’t needed? I’m often given the following glib response to this question: “Let’s build machines that are smarter than us and then let them figure out the answer!” This mistakenly equates intelligence with morality. Intelligence isn’t good or evil but morally neutral. It’s simply an ability to accomplish complex goals, good or bad. We can’t conclude that things would have been better if Hitler had been more intelligent. Indeed, postponing work on ethical issues until after goal-aligned AGI is built would be irresponsible and potentially disastrous. A perfectly obedient superintelligence whose goals automatically align with those of its human owner would be like Nazi SS-Obersturmbannführer Adolf Eichmann on steroids. Lacking moral compass or inhibitions of its own, it would, with ruthless efficiency, implement its owner’s goals, whatever they might be.19
When I speak of the need to analyze technology risk, I’m sometimes accused of scaremongering. But here at MIT, where I work, we know that such risk analysis isn’t scaremongering: It’s safety engineering. Before the moon-landing mission, NASA
18 https://futureoflife.org/ai-principles/
19 See, for example, Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil (New York: Penguin Classics, 2006).
