HOUSE_OVERSIGHT_016896.jpg


Extraction Summary

People: 3
Organizations: 3
Locations: 0
Events: 3
Relationships: 1
Quotes: 3

Document Information

Type: Academic/scientific manuscript page (likely from a book or paper on artificial intelligence)
File Size: 2.49 MB
Summary

A single page (page 93) from a scientific text discussing Artificial Intelligence, specifically 'inverse reinforcement learning' and 'generative models' of human cognition. It provides historical context by referencing Norbert Wiener, Herbert Simon, and Allen Newell, and their contributions to early AI development. The document bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was part of a document production to the US House Committee on Oversight, likely related to investigations into Jeffrey Epstein's funding of scientific research or connections to academia.

People (3)

Name | Role | Context
Norbert Wiener | Scientist/Author | Mentioned as hinting at reinforcement-learning ideas in the 1950s; author of 'The Human Use of Human Beings'.
Herbert Simon | Scientist/Researcher | Of Carnegie Tech; co-developer of 'Logic Theorist', the first computational model of human cognition.
Allen Newell | Scientist/Researcher | Of the RAND Corporation; co-developer of 'Logic Theorist'.

Organizations (3)

Name | Type | Context
Carnegie Tech | University | Affiliation of Herbert Simon (now Carnegie Mellon University).
RAND Corporation | Research organization | Affiliation of Allen Newell.
House Oversight Committee | Government body | Implied by the Bates stamp 'HOUSE_OVERSIGHT_016896'.

Timeline (3 events)

Date | Event | Location
1950s | Norbert Wiener hinted at reinforcement-learning ideas. | N/A
Unknown (historical) | Publication of 'The Human Use of Human Beings'. | N/A
Unknown (historical) | Development of 'Logic Theorist', the first artificial-intelligence system. | Carnegie Tech / RAND Corporation

Relationships (1)

Herbert Simon and Allen Newell (professional collaborators): co-developed Logic Theorist.

Key Quotes (3)

"Inverse reinforcement learning turns this approach around: By observing the actions of an intelligent agent that has already learned effective strategies, we can infer the rewards that led to the development of those strategies."
Source
HOUSE_OVERSIGHT_016896.jpg
Quote #1
"Historically, the search for computational models of human cognition is intimately intertwined with the history of artificial intelligence itself."
Source
HOUSE_OVERSIGHT_016896.jpg
Quote #2
"Logic Theorist, the first computational model of human cognition and also the first artificial-intelligence system, was developed by Herbert Simon, of Carnegie Tech, and Allen Newell, of the RAND Corporation."
Source
HOUSE_OVERSIGHT_016896.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,702 characters)

learning system can be trained to follow strategies that produce those outcomes. Wiener hinted at this idea in the 1950s, but the intervening decades have developed it into a fine art. Modern machine-learning systems can find extremely effective strategies for playing computer games—from simple arcade games to complex real-time strategy games—by applying reinforcement-learning algorithms. Inverse reinforcement learning turns this approach around: By observing the actions of an intelligent agent that has already learned effective strategies, we can infer the rewards that led to the development of those strategies.
In its simplest form, inverse reinforcement learning is something people do all the time. It’s so common that we even do it unconsciously. When you see a co-worker go to a vending machine filled with potato chips and candy and buy a packet of unsalted nuts, you infer that your co-worker (1) was hungry and (2) prefers healthy food. When an acquaintance clearly sees you and then tries to avoid encountering you, you infer that there’s some reason they don’t want to talk to you. When an adult spends a lot of time and money in learning to play the cello, you infer that they must really like classical music—whereas inferring the motives of a teenage boy learning to play an electric guitar might be more of a challenge.
Inverse reinforcement learning is a statistical problem: We have some data—the behavior of an intelligent agent—and we want to evaluate various hypotheses about the rewards underlying that behavior. When faced with this question, a statistician thinks about the generative model behind the data: What data would we expect to be generated if the intelligent agent was motivated by a particular set of rewards? Equipped with the generative model, the statistician can then work backward: What rewards would likely have caused the agent to behave in that particular way?
If you’re trying to make inferences about the rewards that motivate human behavior, the generative model is really a theory of how people behave—how human minds work. Inferences about the hidden causes behind the behavior of other people reflect a sophisticated model of human nature that we all carry around in our heads. When that model is accurate, we make good inferences. When it’s not, we make mistakes. For example, a student might infer that his professor is indifferent to him if the professor doesn’t immediately respond to his email—a consequence of the student’s failure to realize just how many emails that professor receives.
Automated intelligent systems that will make good inferences about what people want must have good generative models for human behavior: that is, good models of human cognition expressed in terms that can be implemented on a computer. Historically, the search for computational models of human cognition is intimately intertwined with the history of artificial intelligence itself. Only a few years after Norbert Wiener published The Human Use of Human Beings, Logic Theorist, the first computational model of human cognition and also the first artificial-intelligence system, was developed by Herbert Simon, of Carnegie Tech, and Allen Newell, of the RAND Corporation. Logic Theorist automatically produced mathematical proofs by emulating the strategies used by human mathematicians.
The challenge in developing computational models of human cognition is making models that are both accurate and generalizable. An accurate model, of course, predicts human behavior with a minimum of errors. A generalizable model can make predictions across a wide range of circumstances, including circumstances unanticipated by its
93
HOUSE_OVERSIGHT_016896
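The excerpt frames inverse reinforcement learning as working backward through a generative model: posit a model of how rewards produce behavior, then infer which rewards best explain an observed action. A minimal Python sketch of that idea, using the text's vending-machine example; the softmax choice model, the two hypotheses, and all reward numbers are illustrative assumptions, not taken from the document:

```python
import math

# Toy Bayesian inverse reinforcement learning: infer an agent's hidden
# reward function from one observed choice at a vending machine.

OPTIONS = ["chips", "candy", "nuts"]

# Two hypothetical reward functions (utilities) the agent might have.
HYPOTHESES = {
    "prefers_healthy": {"chips": 0.0, "candy": 0.0, "nuts": 2.0},
    "prefers_junk":    {"chips": 2.0, "candy": 2.0, "nuts": 0.0},
}

def choice_likelihood(choice, rewards, beta=1.0):
    """Generative model: softmax (Boltzmann) choice given rewards.

    beta controls how reliably the agent picks the higher-reward option.
    """
    exps = {o: math.exp(beta * rewards[o]) for o in OPTIONS}
    return exps[choice] / sum(exps.values())

def posterior(observed_choice):
    """Work backward via Bayes' rule: P(hypothesis | observed choice)."""
    prior = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
    joint = {h: prior[h] * choice_likelihood(observed_choice, r)
             for h, r in HYPOTHESES.items()}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Seeing the agent buy the unsalted nuts shifts belief sharply toward
# the health-minded hypothesis.
post = posterior("nuts")
```

The same structure scales up: richer generative models of behavior (the "theory of how people behave" the passage describes) slot into `choice_likelihood`, while the Bayesian working-backward step stays the same.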
