HOUSE_OVERSIGHT_016940.jpg


Extraction Summary

People: 2
Organizations: 1
Locations: 0
Events: 0
Relationships: 0
Quotes: 4

Document Information

Type: Book excerpt / report page / interview transcript (House Oversight production)
File Size: 2.47 MB
Summary

This document appears to be page 137 of a longer text, likely an academic essay or interview transcript, produced during a House Oversight investigation (Bates stamp 016940). The text discusses theoretical concepts linking artificial intelligence, 'social physics,' and evolutionary biology, focusing on 'distributed Thompson sampling' and on the idea that human social networks function like AI credit-assignment algorithms. No individuals are named on the page, but the subject matter strongly suggests the work of Alex Pentland (MIT Media Lab), who popularized the term 'social physics' and is a known associate in the Epstein context.

People (2)

Name: Unnamed Speaker/Author
Role: Academic/Scientist
Context: The narrator discussing AI, social physics, and evolutionary biology. (Note: The terminology 'social physics' is stro...

Name: Students
Role: Researchers
Context: Mentioned by the speaker: 'My students and I are looking at how people make decisions'

Organizations (1)

Name: House Oversight Committee
Type: U.S. House congressional committee
Context: Indicated by the Bates stamp 'HOUSE_OVERSIGHT_016940'

Key Quotes (4)

"This 'social physics' works because human behavior is determined as much by the patterns of our culture as by rational, individual thinking."
Source
HOUSE_OVERSIGHT_016940.jpg
Quote #1
"What would happen if you had a network of people in which you could reinforce the connections that were helping and minimize the connections that weren’t?"
Source
HOUSE_OVERSIGHT_016940.jpg
Quote #2
"Culture is the result of this sort of human AI as applied to human problems; it is the process of building social structures by reinforcing the good connections and penalizing the bad."
Source
HOUSE_OVERSIGHT_016940.jpg
Quote #3
"It’s called 'distributed Thompson sampling,' a mathematical algorithm used in choosing, out of a set of possible actions with unknown payoffs, the action that maximizes the expected reward in respect to the actions."
Source
HOUSE_OVERSIGHT_016940.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,740 characters)

problems, it often takes only a couple of noisy data points to get something that’s a beautiful description of a phenomenon, because you’re putting in knowledge about how physics works. That’s in huge contrast to normal AI, which requires millions of training examples and is very sensitive to noise. By adding the appropriate background knowledge, you get much more intelligence.
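The claim about background knowledge can be made concrete with a toy fit. In the sketch below, the free-fall model, the noise level, and all numbers are illustrative assumptions rather than anything from the document; the point is only that a model with a known physical form has a single free parameter, so a couple of noisy observations are enough to estimate it.

```python
# A one-parameter physics model (free fall, y = 0.5 * g * t^2) fitted to
# just two noisy points. The model form encodes the "background knowledge";
# a generic many-parameter model would need far more data.
import numpy as np

rng = np.random.default_rng(0)
g_true = 9.81  # m/s^2

t = np.array([1.0, 2.0])                          # observation times (s)
y = 0.5 * g_true * t**2 + rng.normal(0, 0.2, 2)   # noisy fall distances (m)

# Least-squares estimate of the single unknown g:
# y = g * x with x = 0.5 * t^2  =>  g_hat = (x . y) / (x . x)
x = 0.5 * t**2
g_hat = (x @ y) / (x @ x)
print(f"estimated g = {g_hat:.2f} m/s^2 from two noisy points")
```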
Similar to the physical-systems case, if we make neurons that know a lot about how humans learn from each other, then we can detect human fads and predict human behavior trends in surprisingly accurate and efficient ways. This “social physics” works because human behavior is determined as much by the patterns of our culture as by rational, individual thinking. These patterns can be described mathematically and employed to make accurate predictions.
This idea of a credit-assignment function reinforcing connections between neurons that are doing the best work is the core of current AI. If you make those little neurons smarter, the AI gets smarter. So, what would happen if we replaced the neurons with people? People have lots of capabilities. They know lots of things about the world; they can perceive things in a broadly competent, human way. What would happen if you had a network of people in which you could reinforce the connections that were helping and minimize the connections that weren’t?
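The credit-assignment loop described here can be sketched as a multiplicative-weights update over a small network of units, where each unit could equally be a neuron or a person. Everything below (the skill levels, voting rule, and learning rate) is an assumed toy setup for illustration, not the method of the document's author.

```python
# Credit assignment over a network of units: strengthen connections to
# units whose contributions helped, weaken connections to those that
# didn't. A simple multiplicative-weights scheme.
import random

random.seed(0)
n_units, n_rounds = 5, 200
accuracy = [0.9, 0.7, 0.6, 0.5, 0.4]   # hidden skill of each unit (assumed)
weights = [1.0] * n_units              # connection strengths
eta = 0.3                              # credit-assignment learning rate
correct = 0

for _ in range(n_rounds):
    truth = random.random() < 0.5
    # Each unit reports an opinion, right with probability equal to its skill.
    votes = [truth if random.random() < accuracy[i] else (not truth)
             for i in range(n_units)]
    # Weighted group decision.
    yes = sum(w for w, v in zip(weights, votes) if v)
    decision = yes >= sum(weights) / 2
    correct += decision == truth
    # Reinforce the helpful connections, penalize the rest.
    for i, v in enumerate(votes):
        weights[i] *= (1 + eta) if v == truth else (1 - eta)

total = sum(weights)
print(f"group accuracy: {correct / n_rounds:.2f}")
print("normalized weights:", [round(w / total, 3) for w in weights])
```

Over time the weight mass concentrates on the most reliable units, which is the sense in which reinforcing good connections and minimizing bad ones makes the network as a whole smarter.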
That begins to sound like a society, or a company. We all live in a human social network. We’re reinforced for doing things that seem to help everybody and discouraged from doing things that are not appreciated. Culture is the result of this sort of human AI as applied to human problems; it is the process of building social structures by reinforcing the good connections and penalizing the bad. Once you’ve realized you can take this general AI framework and create a human AI, the question becomes, What’s the right way to do that? Is it a safe idea? Is it completely crazy?
My students and I are looking at how people make decisions, on huge databases of financial decisions, business decisions, and many other sorts of decisions. What we’ve found is that humans often make decisions in a way that mimics AI credit-assignment algorithms and works to make the community smarter. A particularly interesting feature of this work is that it addresses a classic problem in evolution known as the group-selection problem. The core of this problem is: How can we select for culture in evolution, when it’s the individuals that reproduce? What you need is something that selects for the best cultures and the best groups but also selects for the best individuals, because they’re the units that transmit the genes.
When you frame the question this way and go through the mathematical literature, you discover that there’s one generally best way to do this. It’s called “distributed Thompson sampling,” a mathematical algorithm used in choosing, out of a set of possible actions with unknown payoffs, the action that maximizes the expected reward in respect to the actions. The key is social sampling, a way of combining evidence, of exploring and exploiting at the same time. It has the unusual property of simultaneously being the best strategy both for the individual and for the group. If you use the group as the basis of selection, and then the group either gets wiped out or reinforced, you’re also selecting for successful individuals. If you select for individuals, and each individual does what’s good for him or her, then that’s automatically the best thing for the group. It’s an amazing alignment of interests and utilities, and it provides real insight into the question of how culture fits into natural selection.
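The algorithm named here can be sketched in its standard bandit form. The version below assumes Bernoulli payoffs and renders the "distributed" and "social sampling" aspects as several agents pooling their observed outcomes into shared Beta posteriors; the pooling rule and all parameters are assumptions for illustration, since the page does not give the speaker's exact formulation.

```python
# Distributed Thompson sampling, Beta-Bernoulli flavor: each agent samples
# a plausible payoff for every action from a shared posterior, plays the
# best sample (exploring and exploiting at once), and feeds the outcome
# back into the common evidence pool.
import random

random.seed(1)
n_actions, n_agents, n_rounds = 3, 4, 500
payoff = [0.2, 0.5, 0.8]        # true success rates, unknown to the agents

succ = [0] * n_actions          # pooled successes per action
fail = [0] * n_actions          # pooled failures per action

for _ in range(n_rounds):
    for _agent in range(n_agents):
        # Sample from Beta(successes + 1, failures + 1) for each action.
        samples = [random.betavariate(succ[a] + 1, fail[a] + 1)
                   for a in range(n_actions)]
        a = max(range(n_actions), key=samples.__getitem__)
        reward = random.random() < payoff[a]
        succ[a] += reward
        fail[a] += 1 - reward

for a in range(n_actions):
    n = succ[a] + fail[a]
    print(f"action {a}: plays={n}, estimated payoff={succ[a] / max(n, 1):.2f}")
```

Because every agent draws from the pooled posterior, the individually optimal draw-and-exploit step is also the step that sharpens the group's shared estimate, which is one way to read the text's claim that the strategy is simultaneously best for the individual and for the group.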
[Page footer: 137 / HOUSE_OVERSIGHT_016940]
