HOUSE_OVERSIGHT_018429.jpg

Extraction Summary

People: 3
Organizations: 3
Locations: 0
Events: 1
Relationships: 1
Quotes: 5

Document Information

Type: Book excerpt / government exhibit
Summary

This document is page 197 of a book, apparently 'The Seventh Sense,' bearing a House Oversight stamp (018429). The text discusses the existential risks of Artificial Intelligence (AI): AI-enabled weapons systems, the manipulation of human cognition, and a coming struggle between those who possess the 'Seventh Sense' and those who do not. It cites a 2015 academic paper by Kaj Sotala and Roman V. Yampolskiy on catastrophic AGI risk.

People (3)

Kaj Sotala (Author/Researcher): Cited in footnote 270 regarding catastrophic AGI risk.
Roman V. Yampolskiy (Author/Researcher): Cited in footnote 270 regarding catastrophic AGI risk.
McGuyver (Fictional Character): Used as a metaphor for an improvised AI system.

Organizations (3)

Physica Scripta (Academic journal): Cited in footnote 270.
Coke (Brand): Mentioned in a hypothetical AI suggestion scenario.
House Oversight Committee (Congressional committee): Implied by the document stamp 'HOUSE_OVERSIGHT'.

Timeline (1 event)

2015: Publication of the cited paper "Responses to catastrophic AGI risk: a survey" in Physica Scripta.

Relationships (1)

Kaj Sotala co-authored the cited paper with Roman V. Yampolskiy (listed together in the footnote citation).

Key Quotes (5)

"In the end, the people without the Seventh Sense will lose, because people who fight the future always lose."
Source
HOUSE_OVERSIGHT_018429.jpg
Quote #1
"It seems likely to me that long before we’re playing pinochle with some smart box over the fate of our livers, an AI-enabled weapons system of sort will come ripping through our world."
Source
HOUSE_OVERSIGHT_018429.jpg
Quote #2
"Allowed: 'You should rehydrate.' Not allowed: 'You should have a Coke. It would make people like you.'"
Source
HOUSE_OVERSIGHT_018429.jpg
Quote #3
"The Boxers against the Box."
Source
HOUSE_OVERSIGHT_018429.jpg
Quote #4
"Can we protect ourelves?"
Source
HOUSE_OVERSIGHT_018429.jpg
Quote #5

Full Extracted Text

Complete text extracted from the document (3,677 characters)

It seems likely to me that long before we’re playing pinochle with some smart box over the fate of our livers, an AI-enabled weapons system of sorts will come ripping through our world. This need not be a fully-escaped McGuyver system making pipe-bombs from our cars; even existing technology tools, when salted with AI, can be slipped into an accidental gear – particularly when they begin interacting with one another. Such AI weapons systems will be trained to operate and move along the most invisible elements of our topologies, sometimes pulling violently at life-support cords for currency or logistics or trade but also – perhaps more dangerously – we will find them insinuated into cognition systems we will come to depend upon, whispering into our ears or tapping us on the shoulder “Look that way!” when in fact we should be gazing at some other gaping hole. Of course the problems of how AI-enabled machines are permitted to touch our commerce or our brains or our health have to be considered. Allowed: “You should rehydrate.” Not allowed: “You should have a Coke. It would make people like you.” But these “civilian” problems will be solved, somehow, I think. We haven’t yet figured out that the culmination of network attack and defense is racing at us and will emerge in the form of smartened weapons. The project of developing national security or arms control doctrines or treaty frames in these fields has not even begun. Really this means, since we’ve no hope of honestly controlling every AI that could possibly be written: How do we design the topologies on which AIs operate?[270] Can we protect ourselves? In the rooms where AI systems’ “values” are being carefully poked and limited, it’s vitally important that the lessons of history and war have a first place at the table. Such a conversation, informed by all the popping Seventh Sense warnings we’ve seen in this book and by a catalog of specifically sharp dangers of diplomacy and security, must happen in cold blood. It will be impossible to tackle these problems cleanly in the heat of an emergency. In our jack-filling enthusiasm for the new, we’d be wise to also gate ourselves and these AI-fired dangers as best we can. For as long as possible. Which, unfortunately, will not be forever.
At the start of this book, I explained how the future will unspool: First, there will be a struggle between those who have the Seventh Sense and those who don’t. This is playing out around us today. In the end, the people without the Seventh Sense will lose, because people who fight the future always lose. Then there will be a battle between different groups who have the Seventh Sense, each wired for different aims and instincts. Networks of terror taking on networks of bots. Gene-adjusting health protocols competing to become the platform of choice. This battle for the topological high ground, where unimaginable profit, power, and security linger, awaits us. If we’re lucky, it will unfold in a co-evolutionary way. Everyone will be better off. But then, finally, there will be a contest between the winners of final topological mastery and the system itself. The Boxers against the Box. The AI machines will have the Seventh Sense, too. Just as computers can see better than us, hear better, or remember longer, so the device webs of our future will own this new, essential sense with unimpeachable fidelity. They will glow with it, honed to a sensitive sharpness more acute than any human will ever achieve. What do we do then? We are already
270 Really this means: Kaj Sotala and Roman V. Yampolskiy, “Responses to catastrophic AGI risk: a survey,” Physica Scripta 90 (2015).
197
HOUSE_OVERSIGHT_018429
