HOUSE_OVERSIGHT_016878.jpg


Extraction Summary

People: 1
Organizations: 7
Locations: 2
Events: 2
Relationships: 2
Quotes: 3

Document Information

Type: Book excerpt / evidence page (House Oversight Committee)
File Size: 1.6 MB
Summary

This document appears to be a page (likely page 75, based on the footer) from a book about artificial intelligence safety and the future of humanity. The author expresses optimism about the growing awareness of 'AI risk' among researchers and global elites, citing organizations such as DeepMind, OpenAI, Google Brain, the IEEE, the World Economic Forum, and the OECD. The page bears a House Oversight Committee Bates stamp, suggesting it was produced as evidence in investigations into ties between scientific and academic figures and Jeffrey Epstein.

People (1)

Author (Writer/Researcher)
First-person narrator ('I') discussing AI safety and the future of humanity. (Contextually likely Max Tegmark, author...

Organizations (7)

DeepMind: Producing technical AI-safety papers.
OpenAI: Producing technical AI-safety papers.
Google Brain: Producing technical AI-safety papers.
Institute of Electrical and Electronics Engineers (IEEE): Covering AI safety in reports and presentations.
World Economic Forum: Covering AI safety.
Organization for Economic Cooperation and Development (OECD): Covering AI safety.
House Oversight Committee: Source of the document production (via Bates stamp).

Timeline (2 events)

2015 (Global; AI researchers): Survey data point noting the AI-risk message had reached and converted 40 percent of AI researchers.
July 2017 (China; Chinese government): Release of a Chinese AI manifesto containing dedicated sections on AI safety supervision.

Locations (2)

China: Mentioned regarding the 'Chinese AI manifesto'.
Earth: Used in comparison to the number of star systems in the galaxy.

Relationships (2)

DeepMind ↔ OpenAI (Collaborative)
Text describes a 'collaborative problem-solving spirit flourishing between the AI-safety research teams' despite their being 'otherwise very competitive organizations'.

Google Brain ↔ DeepMind (Collaborative)
Text describes a 'collaborative problem-solving spirit flourishing between the AI-safety research teams'.

Key Quotes (3)

"concept of 'Pareto-topia': the idea that AI, if done right, can bring about a future in which everyone 's lives are hugely improved, a future where there are no losers."
Source
HOUSE_OVERSIGHT_016878.jpg
Quote #1
"I’m cautiously optimistic that the AI-risk message can save humanity from extinction, just as the Soviet-occupation message ended up liberating hundreds of millions of people."
Source
HOUSE_OVERSIGHT_016878.jpg
Quote #2
"Here’s to our next hundred thousand years! And don’t hesitate to speak the truth, even if your voice trembles."
Source
HOUSE_OVERSIGHT_016878.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (2,343 characters)

concept of “Pareto-topia”: the idea that AI, if done right, can bring about a future in
which everyone’s lives are hugely improved, a future where there are no losers. A key
realization here is that what chiefly prevents humanity from achieving its full potential
might be our instinctive sense that we’re in a zero-sum game—a game in which players
are supposed to eke out small wins at the expense of others. Such an instinct is seriously
misguided and destructive in a “game” where everything is at stake and the payoff is
literally astronomical. There are many more star systems in our galaxy alone than there
are people on Earth.
Hope
As of this writing, I’m cautiously optimistic that the AI-risk message can save humanity
from extinction, just as the Soviet-occupation message ended up liberating hundreds of
millions of people. As of 2015, it had reached and converted 40 percent of AI
researchers. It wouldn’t surprise me if a new survey now would show that the majority
of AI researchers believe AI safety to be an important issue.
I’m delighted to see the first technical AI-safety papers coming out of DeepMind,
OpenAI, and Google Brain and the collaborative problem-solving spirit flourishing
between the AI-safety research teams in these otherwise very competitive organizations.
The world’s political and business elite are also slowly waking up: AI safety has
been covered in reports and presentations by the Institute of Electrical and Electronics
Engineers (IEEE), the World Economic Forum, and the Organization for Economic
Cooperation and Development (OECD). Even the recent (July 2017) Chinese AI
manifesto contained dedicated sections on “AI safety supervision” and “Develop[ing]
laws, regulations, and ethical norms” and establishing “an AI security and evaluation
system” to, among other things, “[e]nhance the awareness of risk.” I very much hope that
a new generation of leaders who understand the AI Control Problem and AI as the
ultimate environmental risk can rise above the usual tribal, zero-sum games and steer
humanity past these dangerous waters we are in—thereby opening our way to the stars
that have been waiting for us for billions of years.
Here’s to our next hundred thousand years! And don’t hesitate to speak the truth,
even if your voice trembles.
75
HOUSE_OVERSIGHT_016878
