HOUSE_OVERSIGHT_016920.jpg


Extraction Summary

People: 2
Organizations: 0
Locations: 0
Events: 1
Relationships: 1
Quotes: 4

Document Information

Type: Book page / manuscript excerpt (House Oversight evidence)
File Size: 2.44 MB
Summary

This document appears to be page 117 of a book or essay discussing the philosophy and technical evolution of Artificial Intelligence (AI), specifically deep learning and neural networks. It covers concepts such as the 'curse of dimensionality,' the shift from imperative to generative design, and the 'black box' nature of AI decision-making. The page is stamped 'HOUSE_OVERSIGHT_016920', indicating it is part of a production of documents for a congressional investigation, likely related to Jeffrey Epstein's ties to the scientific community or academia.

People (2)

Name: Wiener
Role: Scientist/Mathematician
Context: Referenced regarding the role of feedback in machine learning.

Name: The Author ('I')
Role: Narrator/Manager
Context: Mentions managing a difficult research project pairing data scientists with AI pioneers.

Timeline (1 event)

Date: Unknown (past)
Event: A research project managed by the author that paired data scientists with AI pioneers.
Participants: The Author, Data Scientists, AI Pioneers

Relationships (1)

Parties: The Author and Data Scientists/AI Pioneers
Type: Professional/Managerial
Supporting quote: "One of the most difficult research projects I’ve managed paired what we’d now call data scientists with AI pioneers."

Key Quotes (4)

"The “deep” part of deep learning refers not to the (hoped-for) depth of insight but to the depth of the mathematical network layers used to make predictions."
Source
HOUSE_OVERSIGHT_016920.jpg
Quote #1
"This is called the curse of dimensionality."
Source
HOUSE_OVERSIGHT_016920.jpg
Quote #2
"What’s the value of a chess-playing computer if you can’t explain how it plays chess? The answer of course is that it can play chess."
Source
HOUSE_OVERSIGHT_016920.jpg
Quote #3
"We come to trust (or not) brains and computer chips alike based on experience that tests them rather than on explanations for how they work."
Source
HOUSE_OVERSIGHT_016920.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,706 characters)

reasoning without having to hire a programmer for each problem. Wiener recognized the role of feedback in machine learning, but he missed the key role of representation. It’s not possible to store all possible images in a self-driving car, or all possible sounds in a conversational computer; they have to be able to generalize from experience. The “deep” part of deep learning refers not to the (hoped-for) depth of insight but to the depth of the mathematical network layers used to make predictions. It turned out that a linear increase in network complexity led to an exponential increase in the expressive power of the network.
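The claim that a linear increase in network depth buys an exponential increase in expressive power can be sketched numerically. The construction below is not from the document; it is a standard illustration in which each layer is a two-ReLU "fold" of the unit interval, the grid size and the level 0.37 are arbitrary choices, and the count of level crossings stands in for the number of linear pieces the composed function can represent.

import numpy as np

def tent(x):
    # One layer made of two ReLU units: tent(x) = 2*relu(x) - 4*relu(x - 0.5).
    # On [0, 1] it is a 2-piece linear "fold" of the interval onto itself.
    return 2 * np.maximum(x, 0.0) - 4 * np.maximum(x - 0.5, 0.0)

x = np.linspace(0.0, 1.0, 200_001)
y = x.copy()
for depth in range(1, 9):
    y = tent(y)  # stack one more layer; parameter count grows only linearly with depth
    # Every linear piece of the composed map sweeps the full range [0, 1],
    # so counting crossings of an arbitrary level counts the pieces.
    below = np.signbit(y - 0.37)
    pieces = np.count_nonzero(below[1:] != below[:-1])
    print(f"depth {depth}: linear pieces = {pieces} (2**depth = {2**depth})")

Running this prints 2, 4, 8, ... 256 pieces as depth goes from 1 to 8: depth grows linearly, expressive power doubles each time.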
If you lose your keys in a room, you can search for them. If you’re not sure which room they’re in, you have to search all the rooms in a building. If you’re not sure which building they’re in, you have to search all the rooms in all the buildings in a city. If you’re not sure which city they’re in, you have to search all the rooms in all the buildings in all the cities. In AI, finding the keys corresponds to things like a car safely following the road, or a computer correctly interpreting a spoken command, and the rooms and buildings and cities correspond to all of the options that have to be considered. This is called the curse of dimensionality.
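The keys analogy maps directly onto multiplication of search-space sizes. The specific counts below (20 rooms per building, 50 buildings per city, 100 cities) are invented purely to make the growth concrete; the document gives no numbers.

rooms_per_building = 20   # invented counts for illustration only
buildings_per_city = 50
cities = 100

print("room known:    ", 1)
print("building known:", rooms_per_building)                                # 20
print("city known:    ", rooms_per_building * buildings_per_city)           # 1,000
print("nothing known: ", rooms_per_building * buildings_per_city * cities)  # 100,000
# Each additional unknown multiplies, rather than adds to, the number of places to search.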
The solution to the curse of dimensionality came in using information about the problem to constrain the search. The search algorithms themselves are not new. But when applied to a deep-learning network, they adaptively build up representations of where to search. The price of this is that it’s no longer possible to exactly solve for the best answer to a problem, but typically all that’s needed is an answer that’s good enough.
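The trade described here, giving up exact solutions for answers that are good enough, can be sketched with a simple local search that never scores every candidate. The objective function and step rule below are placeholders invented for illustration; they are not the document's algorithm.

import random

def score(x):
    # Stand-in objective: higher is better. In a real system this would be
    # "does the car stay on the road" or "was the command understood".
    return -(x - 3.7) ** 2

def hill_climb(start, steps=1000, step_size=0.1):
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        if score(candidate) > score(best):   # keep only improving moves
            best = candidate
    return best

random.seed(0)
answer = hill_climb(start=0.0)
print(round(answer, 2))  # close to the optimum at 3.7, found without exhaustive search

There is no guarantee this finds the exact best answer, only one that is good enough, which is the trade-off the paragraph describes.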
Taken together, it shouldn’t be surprising that these scaling laws have allowed machines to become effectively as capable as the corresponding stages of biological complexity. Neural networks started out with a goal of modeling how the brain works. That goal was abandoned as they evolved into mathematical abstractions unrelated to how neurons actually function. But now there’s a kind of convergence that can be thought of as forward- rather than reverse-engineering biology, as the results of deep learning echo brain layers and regions.
One of the most difficult research projects I’ve managed paired what we’d now call data scientists with AI pioneers. It was a miserable experience in moving goalposts. As the former progressed in solving long-standing problems posed by the latter, this was deemed to not count because it wasn’t accompanied by corresponding leaps in understanding the solutions. What’s the value of a chess-playing computer if you can’t explain how it plays chess?
The answer of course is that it can play chess. There is interesting emerging research that is applying AI to AI—that is, training networks to explain how they operate. But both brains and computer chips are hard to understand by watching their inner workings; they’re easily interpreted only by observing their external interfaces. We come to trust (or not) brains and computer chips alike based on experience that tests them rather than on explanations for how they work.
Many branches of engineering are making a transition from what’s called imperative to declarative or generative design. This means that instead of explicitly designing a system with tools like CAD files, circuit schematics, and computer code, you describe what you want the system to do and then an automated search is done for designs that satisfy your goals and restrictions. This approach becomes necessary as design complexity exceeds what can be understood by a human designer. While that
117
HOUSE_OVERSIGHT_016920
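A minimal sketch of the declarative/generative-design loop described in the final paragraph of the extracted text: state a goal and constraints, then let an automated search propose the design. The "design" (a rectangle), its two parameters, and the goal function are all invented for illustration.

import random

def evaluate(width, height):
    # Invented goal: an enclosure with area of at least 10, using as little
    # material (perimeter) as possible. Returns (feasible, cost).
    area = width * height
    perimeter = 2 * (width + height)
    return area >= 10.0, perimeter

random.seed(1)
best = None
for _ in range(10_000):                      # automated search over candidate designs
    w = random.uniform(0.5, 10.0)
    h = random.uniform(0.5, 10.0)
    feasible, cost = evaluate(w, h)
    if feasible and (best is None or cost < best[0]):
        best = (cost, w, h)

cost, w, h = best
print(f"found design: {w:.2f} x {h:.2f}, perimeter {cost:.2f}")
# The designer states what is wanted (area >= 10, minimal material);
# the search, not a person, produces the geometry.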
