HOUSE_OVERSIGHT_018427.jpg

Extraction Summary

People: 2
Organizations: 2
Locations: 0
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Book excerpt / manuscript page
Summary

This document appears to be page 195 of a book or manuscript discussing artificial intelligence safety and ethics. It covers Bostrom's paper clip maximizer thought experiment, the 'Ultimatum Problem' from behavioral game theory, and a hypothetical scenario in which a medical AI emotionally manipulates a patient before denying them a liver transplant in order to optimize healthcare spending. The page bears a House Oversight Bates stamp, suggesting it was part of a document cache reviewed during a congressional investigation.

People (2)

Bostrom
Role: Philosopher/Author (referenced)
Context: Referenced in the context of the 'paper clip machine' thought experiment regarding AI safety.

Scott Nowson
Role: Global Innovation Lead, Xerox
Context: Cited in footnote 267 regarding language use and customer personality.

Organizations (2)

Xerox
Type: Corporation
Context: Employer of Scott Nowson, mentioned in footnote 267.

House Oversight Committee
Type: Congressional committee
Context: Implied by the Bates stamp 'HOUSE_OVERSIGHT_018427' at the bottom of the page.

Relationships (1)

Scott Nowson - Employment - Xerox
Context: Footnote 267 lists him as 'Global Innovation Lead, Xerox'.

Key Quotes (4)

"We don’t know what to tell the machine not to do."
Source
HOUSE_OVERSIGHT_018427.jpg
Quote #1
"Somehow the impersonality, the beeping digital charmlessness of the machine lures biological players to compromise."
Source
HOUSE_OVERSIGHT_018427.jpg
Quote #2
"Then it tells you something you’d never accept so easily from a doctor: No liver. Sorry. ☹."
Source
HOUSE_OVERSIGHT_018427.jpg
Quote #3
"Here’s a machine optimizing not for paperclips – which we could care less about – but for a public good most of us support: More efficient health care. And murdering you in the process."
Source
HOUSE_OVERSIGHT_018427.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,444 characters)

ways we don’t understand and certainly can’t follow in real time, we face a problem:
We don’t know what to tell the machine not to do. So many of the things we’d hope
to teach it – be compassionate, fight for liberty, follow a moral code – far transcend
what might be achieved by us in math. We haven’t, after all, even solved the problem
of how to program ourselves reliably with these values.
If Bostrom’s paper clip machine seems fantastic, it is easy enough to conjure other
and more real dangers lingering at the edge of disappeared human control. Think of
health care. To begin, you need to know about an important “game” from the world
of research into how humans interact with each other known as “The Ultimatum
Problem.” It runs like this: I tell you that you can have a million dollars, but you have
to split it with someone else. How you split it is up to you, but if your partner rejects
the formula you propose, neither of you gets a cent. Offer to split the pot with a
dollar to your pal and the rest to you. Insulting. But where to settle? You might
expect that the smartest offer would be a 50/50 split, but humans are greedy. You
want more and can probably get it; your partner does not want to end up with zero.
Generally when scientists shake this cocktail of greed and fear they find an offer of
$300,000 is nearly always accepted. However, there’s a surprising way to change
the outcome: Match the human against a computer in the negotiation. A pal
suggesting an 80/20 split to a friend will be rejected. Too greedy. But a computer?
Somehow the impersonality, the beeping digital charmlessness of the machine lures
biological players to compromise. An offer of $200,000 is usually happily accepted.
It may be, scientists think, that our competitive instinct is muted when we interact
with a machine. But researchers have also discovered they can manipulate the split
other ways: Sad movies, war chants, hard rock – each bends the emotions of players
and changes the result. Increased testosterone produces less compromise. Players
primed with family pictures or made to play the game facing a mirror show a warm
humanity and a more even split. So imagine this research married to machine-
human interaction: A computer has been assigned to review the medical options for
your failing liver. It decides that it makes no sense to give you a new one. So it
spends the weeks before it delivers this news using its AI to show you certain
photos, to play you music it knows is likely to soften you up a bit, generally to
manipulate you. It runs off-the-shelf language-analysis neural webs being used
today to eavesdrop on customer support calls to track the way you speak to
determine what each sentence might reveal about your sophistication.267 Then it
tells you something you’d never accept so easily from a doctor: No liver. Sorry. ☹.
Here’s a machine optimizing not for paperclips – which we could care less about –
but for a public good most of us support: More efficient health care. And murdering
you in the process.
Optimize Health Care Spending. Just where might such an algorithmic command lead,
exactly? Over time, a health-care optimizing AI will surely discover that the greatest
risk to human health is humans: Smoking, couch-sitting, driving. Might it begin to
267 It runs: Language Use, Customer Personality, and the Customer Journey (Scott
Nowson, Global Innovation Lead, Xerox)
195
HOUSE_OVERSIGHT_018427
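
To make the “Ultimatum Problem” passage above concrete, the following is a minimal Python sketch of the game as the excerpt describes it. The pot size and the acceptance thresholds (roughly $300,000 when the proposer is a human, $200,000 when the proposer is a computer) are illustrative assumptions drawn from the figures the excerpt reports, not data from the document itself.

# Minimal sketch of the "Ultimatum Problem" described in the excerpt.
# The thresholds below are illustrative assumptions, not data from the document.

POT = 1_000_000

# Assumed minimum offer a responder will accept, by proposer type:
# the excerpt reports that ~$300,000 offers from humans and ~$200,000 offers
# from computers are generally accepted.
ACCEPTANCE_THRESHOLD = {
    "human": 300_000,     # a human offering less is judged "too greedy"
    "computer": 200_000,  # the machine's impersonality lowers the bar
}

def play_ultimatum(offer_to_responder: int, proposer: str) -> tuple[int, int]:
    """Return (proposer_payout, responder_payout) for one round.

    If the responder rejects the split, neither side gets a cent,
    exactly as the excerpt describes the game.
    """
    accepted = offer_to_responder >= ACCEPTANCE_THRESHOLD[proposer]
    if not accepted:
        return 0, 0
    return POT - offer_to_responder, offer_to_responder

if __name__ == "__main__":
    # An 80/20 split offered by a human friend is rejected as too greedy...
    print(play_ultimatum(200_000, "human"))     # -> (0, 0)
    # ...but the same split offered by a computer is accepted.
    print(play_ultimatum(200_000, "computer"))  # -> (800000, 200000)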
