HOUSE_OVERSIGHT_016828.jpg


Extraction Summary

People: 2
Organizations: 3
Locations: 1
Events: 0
Relationships: 1
Quotes: 5

Document Information

Type: Essay / scientific article / evidence document
File Size: 2.21 MB
Summary

This document is page 25 of a larger file released by the House Oversight Committee (Bates stamp HOUSE_OVERSIGHT_016828). It contains an essay titled 'The Limitations of Opaque Learning Machines' by Judea Pearl, a UCLA professor. The essay argues that the opacity of deep learning is a fundamental shortcoming and contrasts it with the transparency of causal reasoning. Although the page is part of an Epstein-related release, its content is purely academic; it was likely collected because Epstein cultivated relationships with prominent scientists and intellectuals.

People (2)

Judea Pearl (Author / Professor): Professor of computer science and director of the Cognitive Systems Laboratory at UCLA; author of the essay.
Dana Mackenzie (Co-author): Co-authored 'The Book of Why' with Judea Pearl.

Organizations (3)

UCLA (University): University where Judea Pearl is a professor.
Cognitive Systems Laboratory (Research laboratory): Laboratory at UCLA directed by Judea Pearl.
House Oversight Committee (Congressional committee): Source of the document (indicated by the HOUSE_OVERSIGHT Bates stamp).

Locations (1)

Academic institution location.

Relationships (1)

Judea Pearl co-authored a book with Dana Mackenzie: "His most recent book, co-authored with Dana Mackenzie, is The Book of Why..."

Key Quotes (5)

"We are losing this transparency now, with the deep-learning style of machine learning."
Source
HOUSE_OVERSIGHT_016828.jpg
Quote #1
"It is fundamentally a curve-fitting exercise that adjusts weights in intermediate layers of a long input-output chain."
Source
HOUSE_OVERSIGHT_016828.jpg
Quote #2
"I find many users who say that it 'works well and we don’t know why.'"
Source
HOUSE_OVERSIGHT_016828.jpg
Quote #3
"If our robots will all be as opaque as AlphaGo, we won’t be able to hold a meaningful conversation with them, and that would be unfortunate."
Source
HOUSE_OVERSIGHT_016828.jpg
Quote #4
"Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points."
Source
HOUSE_OVERSIGHT_016828.jpg
Quote #5

Full Extracted Text

Complete text extracted from the document

THE LIMITATIONS OF OPAQUE LEARNING MACHINES
Judea Pearl
Judea Pearl is a professor of computer science and director of the Cognitive Systems Laboratory at UCLA. His most recent book, co-authored with Dana Mackenzie, is The Book of Why: The New Science of Cause and Effect.
As a former physicist, I was extremely interested in cybernetics. Though it did not utilize the full power of Turing Machines, it was highly transparent, perhaps because it was founded on classical control theory and information theory. We are losing this transparency now, with the deep-learning style of machine learning. It is fundamentally a curve-fitting exercise that adjusts weights in intermediate layers of a long input-output chain.
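
To make this curve-fitting characterization concrete, here is a minimal sketch in plain NumPy (an editor's illustration, not part of Pearl's text): a small two-layer network is fit to a cloud of noisy points by nudging the weights of its intermediate layer with gradient descent.

```python
# Minimal sketch: deep learning as curve fitting (illustrative only).
# A two-layer network is fit to noisy samples of a target function by
# gradient descent on squared error -- adjusting "weights in intermediate
# layers of a long input-output chain," as the essay puts it.
import numpy as np

rng = np.random.default_rng(0)

# A cloud of data points: x in [-3, 3], y = sin(x) plus noise.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.standard_normal((200, 1))

# One hidden layer of 32 tanh units; the weights are the knobs being tuned.
W1 = rng.standard_normal((1, 32)) * 0.5
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # network output
    err = pred - y                      # residual of the curve fit
    loss = np.mean(err ** 2)

    # Backpropagation: gradients of the squared error w.r.t. each weight.
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1 - h ** 2)  # tanh derivative
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0)

    # Adjust the weights a little in the downhill direction.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final mean squared error: {loss:.4f}")
# The fit can be excellent, yet the trained weights say nothing about
# *why* y depends on x -- the opacity the essay is describing.
```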
I find many users who say that it “works well and we don’t know why.” Once you unleash it on large data, deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed. In particular, you do not know if the fault is in the program, in the method, or because things have changed in the environment. We should be aiming at a different kind of transparency.
Some argue that transparency is not really needed. We don’t understand the neural architecture of the human brain, yet it runs well, so we forgive our meager understanding and use human helpers to great advantage. In the same way, they argue, why not unleash deep-learning systems and create intelligence without understanding how they work? I buy this argument to some extent. I personally don’t like opacity, so I won’t spend my time on deep learning, but I know that it has a place in the makeup of intelligence. I know that non-transparent systems can do marvelous jobs, and our brain is proof of that marvel.
But this argument has its limitation. The reason we can forgive our meager understanding of how human brains work is because our brains work the same way, and that enables us to communicate with other humans, learn from them, instruct them, and motivate them in our own native language. If our robots will all be as opaque as AlphaGo, we won’t be able to hold a meaningful conversation with them, and that would be unfortunate. We will need to retrain them whenever we make a slight change in the task or in the operating environment.
So, rather than experimenting with opaque learning machines, I am trying to understand their theoretical limitations and examine how these limitations can be overcome. I do it in the context of causal-reasoning tasks, which govern much of how scientists think about the world and, at the same time, are rich in intuition and toy examples, so we can monitor the progress in our analysis. In this context, we’ve discovered that some basic barriers exist, and that unless they are breached we won’t get a real human kind of intelligence no matter what we do. I believe that charting these barriers may be no less important than banging our heads against them.
Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “what if?” questions and, therefore,
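
The page breaks off mid-sentence above. As an illustration of this final, "model-blind" point, here is a short simulation (an editor's sketch under an assumed causal structure, not part of Pearl's text) showing why a function fitted to observational data answers P(Y | X) but not the interventional question P(Y | do(X)).

```python
# Minimal sketch: "model-blind" curve fitting vs. a "what if" intervention.
# Illustrative only; the causal structure below is assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed ground truth: a hidden common cause Z drives both X and Y,
# while X itself has NO effect on Y.
z = rng.binomial(1, 0.5, n)            # hidden confounder
x = rng.binomial(1, 0.2 + 0.6 * z)     # Z makes X more likely
y = rng.binomial(1, 0.1 + 0.7 * z)     # Z makes Y more likely; X plays no role

# Model-blind view: condition on X in the observed cloud of points.
# This answers P(Y | X); the gap looks large (about +0.42) but is spurious.
obs_gap = y[x == 1].mean() - y[x == 0].mean()

# Interventional view: set X by fiat, severing the Z -> X arrow.
# Y's mechanism does not involve X, so the same y values still apply.
x_do = rng.binomial(1, 0.5, n)         # do(X): X no longer tracks Z
do_gap = y[x_do == 1].mean() - y[x_do == 0].mean()  # about 0

print(f"observational  gap E[Y|X=1] - E[Y|X=0]         = {obs_gap:+.3f}")
print(f"interventional gap E[Y|do(X=1)] - E[Y|do(X=0)] = {do_gap:+.3f}")
```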
