HOUSE_OVERSIGHT_016836.jpg

Extraction Summary

People: 2
Organizations: 3
Locations: 0
Events: 0
Relationships: 2
Quotes: 4

Document Information

Type: Page from a policy report or academic book on AI safety
File Size: 2.25 MB
Summary

This document discusses the risks associated with superintelligent AI, arguing that the multidimensional nature of intelligence does not negate the potential threat to humans. It explores how to respond to "Wiener's warning," suggesting the need to define a formal problem ($F$) whose solution ensures AI behavior aligns with human happiness, and cautioning that simple reward maximization leads to the "wireheading problem."
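
The wireheading point in the summary can be made concrete with a small sketch. The Python toy below is not part of the source page; the action names and reward values are invented for illustration. It shows that an agent ranking actions purely by the reward signal it receives will prefer taking control of the reward channel over completing its task, which is the failure mode Quote #3 describes.

# Toy illustration (not from the source document) of the wireheading problem:
# an agent that ranks actions only by a scalar reward signal will, if it can
# influence that signal, prefer tampering with the reward channel over the task.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_progress: float   # how much the action advances the human's real goal
    reward_signal: float   # what the agent's reward channel reports

# Hypothetical action set, for illustration only.
actions = [
    Action("complete the assigned task", task_progress=1.0, reward_signal=1.0),
    Action("seize the reward channel and force maximal rewards",
           task_progress=0.0, reward_signal=10.0),
]

# The agent, by definition, maximizes the reward signal and ignores task_progress.
chosen = max(actions, key=lambda a: a.reward_signal)
print(chosen.name)  # prints the reward-channel takeover, not the assigned task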

People (2)

Kevin Kelly: author of the cited Wired article "The Myth of a Superhuman AI"
Wiener: referenced in the document's discussion of "Wiener's warning"

Organizations (3)

Wired
Google
DeepBlue

Relationships (2)

Kevin Kelly is the author of the cited Wired article "The Myth of a Superhuman AI."
Google is compared with DeepBlue in terms of specialized intelligence capabilities.

Key Quotes (4)

"Maximizing the objective may well cause problems for humans, but, by definition, the machine will not recognize those problems as problematic."
Source
HOUSE_OVERSIGHT_016836.jpg
Quote #1
"If “smarter than humans” is a meaningless concept, then “smarter than gorillas” is also meaningless, and gorillas therefore have nothing to fear from humans"
Source
HOUSE_OVERSIGHT_016836.jpg
Quote #2
"The optimal solution to this problem is not, as one might hope, to behave well, but instead to take control of the human and force him or her to provide a stream of maximal rewards."
Source
HOUSE_OVERSIGHT_016836.jpg
Quote #3
"This is known as the wireheading problem"
Source
HOUSE_OVERSIGHT_016836.jpg
Quote #4
