HOUSE_OVERSIGHT_016357.jpg

Extraction Summary

People: 2
Organizations: 0
Locations: 0
Events: 0
Relationships: 0
Quotes: 4

Document Information

Type: Scientific text / transcript / book excerpt (evidence file)

Summary

This document appears to be a page (labeled 137) from a scientific book, essay, or transcript included in House Oversight evidence regarding Jeffrey Epstein. The text, likely authored by a computer scientist or sociologist (the subject matter strongly suggests MIT's Alex Pentland), discusses 'social physics,' comparing human social networks to artificial neural networks. It explores concepts such as 'distributed Thompson sampling' and the group-selection problem in evolution to argue that human culture and collective decision-making work much like AI credit-assignment algorithms.

People (2)

Name: Unnamed Speaker/Author
Role: Researcher/Academic
Context: First-person narrator ('My students and I') discussing research on social physics and AI. Likely Alex Pentland based ...

Name: Students
Role: Researchers
Context: Mentioned by the author as assisting in analyzing databases of human decisions.

Key Quotes (4)

"This “social physics” works because human behavior is determined as much by the patterns of our culture as by rational, individual thinking."
Source
HOUSE_OVERSIGHT_016357.jpg
Quote #1
"So, what would happen if we replaced the neurons with people?"
Source
HOUSE_OVERSIGHT_016357.jpg
Quote #2
"Culture is the result of this sort of human AI as applied to human problems; it is the process of building social structures by reinforcing the good connections and penalizing the bad."
Source
HOUSE_OVERSIGHT_016357.jpg
Quote #3
"It’s called “distributed Thompson sampling,” a mathematical algorithm used in choosing, out of a set of possible actions with unknown payoffs, the action that maximizes the expected reward in respect to the actions."
Source
HOUSE_OVERSIGHT_016357.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,740 characters)

problems, it often takes only a couple of noisy data points to get something that’s a beautiful description of a phenomenon, because you’re putting in knowledge about how physics works. That’s in huge contrast to normal AI, which requires millions of training examples and is very sensitive to noise. By adding the appropriate background knowledge, you get much more intelligence.
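The contrast drawn here, a couple of noisy points plus background physics versus millions of training examples, can be made concrete. A minimal Python sketch, assuming a free-fall model as the physical system (the text names no specific one): with the form y = y0 + v0*t - (1/2)*g*t^2 and gravity g known, two noisy measurements pin down the two unknowns.

    # Minimal sketch (assumption: free-fall height measurements, g known,
    # initial height y0 and initial velocity v0 unknown).
    import numpy as np

    g = 9.81                      # background physics baked into the model
    t = np.array([0.5, 1.2])      # only two (noisy) data points
    y = np.array([11.0, 6.1])     # measured heights

    # y = y0 + v0*t - 0.5*g*t**2 is linear in (y0, v0), so least squares
    # recovers both unknowns from just these two observations.
    A = np.stack([np.ones_like(t), t], axis=1)
    b = y + 0.5 * g * t**2
    y0, v0 = np.linalg.lstsq(A, b, rcond=None)[0]
    print(f"y0 ~ {y0:.2f} m, v0 ~ {v0:.2f} m/s")

With the functional form fixed by physics, the problem reduces to two unknowns, so two observations suffice; a generic model with no such prior would need far more data to find the same curve.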
Similar to the physical-systems case, if we make neurons that know a lot about how humans learn from each other, then we can detect human fads and predict human behavior trends in surprisingly accurate and efficient ways. This “social physics” works because human behavior is determined as much by the patterns of our culture as by rational, individual thinking. These patterns can be described mathematically and employed to make accurate predictions.
This idea of a credit-assignment function reinforcing connections between neurons that are doing the best work is the core of current AI. If you make those little neurons smarter, the AI gets smarter. So, what would happen if we replaced the neurons with people? People have lots of capabilities. They know lots of things about the world; they can perceive things in a broadly competent, human way. What would happen if you had a network of people in which you could reinforce the connections that were helping and minimize the connections that weren’t?
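The credit-assignment loop described here can be sketched directly with people in place of neurons. A minimal Python sketch, assuming binary predictions and a multiplicative-weights update (the text specifies no concrete rule):

    # Minimal sketch (assumptions: binary questions, multiplicative-weights
    # credit assignment; the passage names no particular update scheme).
    import random

    random.seed(0)
    rounds, eta = 300, 0.1
    skill = [0.9, 0.7, 0.6, 0.55, 0.5]   # hidden accuracy of each "person"
    weights = [1.0] * len(skill)         # connection strengths
    correct = 0

    for _ in range(rounds):
        truth = random.random() < 0.5
        # Each person reports the truth with probability equal to their skill.
        votes = [truth if random.random() < s else (not truth) for s in skill]
        yes = sum(w for w, v in zip(weights, votes) if v)
        no = sum(w for w, v in zip(weights, votes) if not v)
        correct += ((yes >= no) == truth)
        # Credit assignment: reinforce connections that helped, damp the rest.
        weights = [w * ((1 + eta) if v == truth else (1 - eta))
                   for w, v in zip(weights, votes)]

    print(f"network accuracy over {rounds} rounds: {correct / rounds:.2f}")
    print([round(w, 2) for w in weights])   # reliable people end up weighted highest

After a few hundred rounds the weights concentrate on the most reliable people: the "reinforce the connections that were helping" loop in miniature.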
That begins to sound like a society, or a company. We all live in a human social network. We’re reinforced for doing things that seem to help everybody and discouraged from doing things that are not appreciated. Culture is the result of this sort of human AI as applied to human problems; it is the process of building social structures by reinforcing the good connections and penalizing the bad. Once you’ve realized you can take this general AI framework and create a human AI, the question becomes, What’s the right way to do that? Is it a safe idea? Is it completely crazy?
My students and I are looking at how people make decisions, on huge databases of financial decisions, business decisions, and many other sorts of decisions. What we’ve found is that humans often make decisions in a way that mimics AI credit-assignment algorithms and works to make the community smarter. A particularly interesting feature of this work is that it addresses a classic problem in evolution known as the group-selection problem. The core of this problem is: How can we select for culture in evolution, when it’s the individuals that reproduce? What you need is something that selects for the best cultures and the best groups but also selects for the best individuals, because they’re the units that transmit the genes.
When you frame the question this way and go through the mathematical literature, you discover that there’s one generally best way to do this. It’s called “distributed Thompson sampling,” a mathematical algorithm used in choosing, out of a set of possible actions with unknown payoffs, the action that maximizes the expected reward in respect to the actions. The key is social sampling, a way of combining evidence, of exploring and exploiting at the same time. It has the unusual property of simultaneously being the best strategy both for the individual and for the group. If you use the group as the basis of selection, and then the group either gets wiped out or reinforced, you’re also selecting for successful individuals. If you select for individuals, and each individual does what’s good for him or her, then that’s automatically the best thing for the group. It’s an amazing alignment of interests and utilities, and it provides real insight into the question of how culture fits into natural selection.
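Thompson sampling itself is compact enough to illustrate. A minimal Python sketch, assuming Bernoulli payoffs with Beta priors, and approximating the distributed, social-sampling flavor by letting several agents pool their observed outcomes (the text fixes no payoff model or sharing protocol):

    # Minimal sketch (assumptions: Bernoulli payoffs, Beta(1,1) priors,
    # agents sharing one evidence pool as a stand-in for social sampling).
    import random

    random.seed(1)
    true_payoff = [0.3, 0.5, 0.7]     # unknown to the agents
    wins = [1, 1, 1]                  # shared Beta posterior counts
    losses = [1, 1, 1]
    n_agents, rounds = 4, 500

    for _ in range(rounds):
        for _agent in range(n_agents):
            # Sample a plausible payoff per action from the shared posterior
            # and act greedily on that sample.
            draws = [random.betavariate(w, l) for w, l in zip(wins, losses)]
            a = draws.index(max(draws))
            if random.random() < true_payoff[a]:
                wins[a] += 1
            else:
                losses[a] += 1

    means = [w / (w + l) for w, l in zip(wins, losses)]
    print([round(m, 2) for m in means])   # evidence concentrates on the best action

Because each agent acts on a random draw from the shared posterior, it explores and exploits at the same time, which is the property the passage highlights.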
137
HOUSE_OVERSIGHT_016357
