HOUSE_OVERSIGHT_016356.jpg

Extraction Summary

People: 2
Organizations: 1
Locations: 0
Events: 0
Relationships: 0
Quotes: 5

Document Information

Type: Essay / transcript / government production

Summary

This document is page 136 of a larger production (Bates stamped HOUSE_OVERSIGHT_016356). It contains the text of an essay or transcript discussing the evolution of human-AI ecosystems, the risks of algorithmic tyranny, and the technical limitations of current machine learning (specifically regarding 'credit-assignment functions' and 'stupid neurons'). It contrasts modern AI implementation with Norbert Wiener's original notion of cybernetics. Although the page comes from an Epstein-related production, the text itself is purely academic and philosophical, with no direct mention of Epstein, his associates, or financial dealings.

People (2)

Name: Norbert Wiener
Role: Mathematician/Philosopher
Context: Referenced for his original notion of cybernetics, which, unlike current AI, was contextualized.

Name: Unidentified Author
Role: Speaker/Writer
Context: The person writing or speaking the text; the voice is first person ("we", with "I" implicit in the rhetorical structure).

Organizations (1)

Name: House Oversight Committee
Type: U.S. congressional committee
Context: Indicated by the Bates stamp 'HOUSE_OVERSIGHT_016356'.

Key Quotes (5)

"Development of human-AI ecosystems is perhaps inevitable for a social species such as ourselves."
Source
HOUSE_OVERSIGHT_016356.jpg
Quote #1
"But there are also risks of a “tyranny of algorithms,” where unelected data experts are running the world."
Source
HOUSE_OVERSIGHT_016356.jpg
Quote #2
"Think Skynet-size. But how would you make Skynet something that’s about the human fabric?"
Source
HOUSE_OVERSIGHT_016356.jpg
Quote #3
"The good magic is that it has something called the credit-assignment function."
Source
HOUSE_OVERSIGHT_016356.jpg
Quote #4
"In some ways, it’s as far from Norbert Wiener’s original notion of cybernetics as you can get, because it isn’t contextualized; it’s a little idiot savant."
Source
HOUSE_OVERSIGHT_016356.jpg
Quote #5

Full Extracted Text

Complete text extracted from the document (3,587 characters)

Development of human-AI ecosystems is perhaps inevitable for a social species such as ourselves. We became social early in our evolution, millions of years ago. We began exchanging information with one another to stay alive, to increase our fitness. We developed writing to share abstract and complex ideas, and most recently we’ve developed computers to enhance our communication abilities. Now we’re developing AI and machine-learning models of ecosystems and sharing the predictions of those models to jointly shape our world through new laws and international agreements.
We live in an unprecedented historic moment, in which the availability of vast amounts of human behavioral data and advances in machine learning enable us to tackle complex social problems through algorithmic decision making. The opportunities for such a human-AI ecology to have positive social impact through fairer and more transparent decisions are obvious. But there are also risks of a “tyranny of algorithms,” where unelected data experts are running the world. The choices we make now are perhaps even more momentous than those we faced in the 1950s, when AI and cybernetics were created. The issues look similar, but they’re not. We have moved down the road, and now the scope is larger. It’s not just AI robots versus individuals. It’s AI guiding entire ecologies.
~~~
How can we make a good human-artificial ecosystem, something that’s not a machine society but a cyberculture in which we can all live as humans—a culture with a human feel to it? We don’t want to think small—for example, to talk only of robots and self-driving cars. We want this to be a global ecology. Think Skynet-size. But how would you make Skynet something that’s about the human fabric?
The first thing to ask is: What’s the magic that makes the current AI work? Where is it wrong and where is it right?
The good magic is that it has something called the credit-assignment function. What that lets you do is take “stupid neurons”—little linear functions—and figure out, in a big network, which ones are doing the work and strengthen them. It’s a way of taking a random bunch of switches all hooked together in a network and making them smart by giving them feedback about what works and what doesn’t. This sounds simple, but there’s some complicated math around it. That’s the magic that makes current AI work.
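(Illustrative aside, not part of the extracted text: a minimal sketch of the credit-assignment idea just described, assuming gradient feedback as the credit signal. All names and constants, such as train_step and n_neurons, are hypothetical.)

import numpy as np

rng = np.random.default_rng(0)

# A "random bunch of switches": a layer of fixed random linear neurons.
n_inputs, n_neurons = 3, 64
W_random = rng.normal(size=(n_inputs, n_neurons))  # the "stupid neurons"
w_out = np.zeros(n_neurons)                        # strengths to be learned

def neuron_outputs(x):
    # Each neuron is a little linear function of the input.
    return x @ W_random

def predict(x):
    return neuron_outputs(x) @ w_out

def train_step(x, y_true, lr=0.01):
    # Credit assignment: the error gradient indicates which neurons are
    # doing the work, strengthening or weakening each one accordingly.
    global w_out
    activations = neuron_outputs(x)
    error = predict(x) - y_true
    w_out -= lr * activations.T @ error / len(x)

# Feedback about what works and what doesn't makes the network smart.
X = rng.normal(size=(256, n_inputs))
y = X[:, 0] - 2.0 * X[:, 1]  # an unknown relationship to be learned
for _ in range(500):
    train_step(X, y)
print("mean squared error:", np.mean((predict(X) - y) ** 2))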
The bad part of it is, because those little neurons are stupid, the things they learn don’t generalize very well. If an AI sees something it hasn’t seen before, or if the world changes a little bit, the AI is likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it’s as far from Norbert Wiener’s original notion of cybernetics as you can get, because it isn’t contextualized; it’s a little idiot savant.
But imagine that you took away those limitations: Imagine that instead of using dumb neurons, you used neurons in which real-world knowledge was embedded. Maybe instead of linear neurons, you used neurons that were functions in physics, and then you tried to fit physics data. Or maybe you put in a lot of knowledge about humans and how they interact with one another—the statistics and characteristics of humans.
When you add this background knowledge and surround it with a good credit-assignment function, then you can take observational data and use the credit-assignment function to reinforce the functions that are producing good answers. The result is an AI that works extremely well and can generalize. For instance, in solving physical
136
HOUSE_OVERSIGHT_016356
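
(Illustrative aside: the extracted text breaks off at the page boundary, but the knowledge-embedded-neuron idea it describes can be sketched. The example below is hypothetical, assuming a simple kinematics data set and reusing the same gradient-feedback credit assignment; the candidate functions and constants are not from the document.)

import numpy as np

rng = np.random.default_rng(1)

# Neurons with real-world knowledge embedded: candidate terms a
# physicist might propose for a projectile's height over time, plus
# one distractor term the data should rule out.
physics_neurons = {
    "t":       lambda t: t,               # constant-velocity term
    "t^2":     lambda t: t ** 2,          # constant-acceleration term
    "cos(3t)": lambda t: np.cos(3 * t),   # distractor
}

def features(t):
    return np.stack([f(t) for f in physics_neurons.values()], axis=1)

# Observational data: h = v0*t - (g/2)*t^2, with a little noise.
t = np.linspace(0.0, 2.0, 200)
h = 12.0 * t - 4.9 * t ** 2 + rng.normal(scale=0.05, size=t.shape)

# The same credit-assignment step as before: gradient feedback
# reinforces the functions that are producing good answers.
w = np.zeros(len(physics_neurons))
A = features(t)
for _ in range(2000):
    error = A @ w - h
    w -= 0.1 * A.T @ error / len(t)

print({name: round(float(wi), 2) for name, wi in zip(physics_neurons, w)})
# Should recover roughly 12 (v0) and -4.9 (-g/2), with the distractor's
# weight near zero because it earns little credit from the data.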
