HOUSE_OVERSIGHT_016919.jpg

Extraction Summary

People: 4
Organizations: 4
Locations: 1
Events: 3
Relationships: 2
Quotes: 4

Document Information

Type: Government document / evidence (page from a book or report included in House Oversight files)
File Size: 2.56 MB
Summary

This document is page 116 of a larger text, stamped with 'HOUSE_OVERSIGHT_016919', indicating it is part of a congressional investigation (likely related to Epstein's ties to scientific funding/institutions like MIT). The text itself is a historical narrative detailing the origins of digital computing and Artificial Intelligence (AI). It focuses on the work of Claude Shannon at MIT and Bell Labs, and John von Neumann at the Institute for Advanced Study, specifically covering the transition from analog to digital systems, error correction thresholds, and the exponential scaling of data processing.

People (4)

Claude Shannon (Scientist/Researcher): Graduate student at MIT; worked at Bell Labs; laid the foundations for the digital revolution and digital logic.
Vannevar Bush (Scientist/Supervisor): Supervised Shannon at MIT; associated with the Differential Analyzer.
John von Neumann (Scientist/Mathematician): Has a cameo in 'The Human Use of Human Beings'; played a seminal role in digitizing computation; met Shannon at the Institute for Advanced Study in Princeton.
Norbert Wiener (Author/Scientist): Referred to as 'Wiener'; implied author of 'The Human Use of Human Beings'.

Timeline (3 events)

1937 (MIT): Shannon wrote his master's thesis showing how electrical circuits could evaluate logical expressions.
1948 (Bell Labs): Shannon showed that communicating with symbols rather than continuous quantities changes system behavior, founding digital communications.
1952 (N/A): Von Neumann presented a result corresponding to Shannon's for computation, showing that reliable computing is possible with unreliable devices.

Locations (1)


Relationships (2)

Claude Shannon and Vannevar Bush (professional/academic): As a graduate student at MIT, Shannon worked for Bush on the Differential Analyzer.
Claude Shannon and John von Neumann (professional/academic): They had met at the Institute for Advanced Study in Princeton.

Key Quotes (4)

"He was laying the foundations for the digital revolution."
Source
HOUSE_OVERSIGHT_016919.jpg
Quote #1
"Shannon’s frustration with the difficulty of solving problems this way led him in 1937 to write what might be the best master’s thesis ever."
Source
HOUSE_OVERSIGHT_016919.jpg
Quote #2
"This exponential decrease in communication errors made possible an exponential increase in the capacity of communication networks."
Source
HOUSE_OVERSIGHT_016919.jpg
Quote #3
"That’s what makes it possible to have a billion transistors in a computer chip, with the last one as useful as the first one."
Source
HOUSE_OVERSIGHT_016919.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,875 characters)

something much more significant than speculating at the time: He was laying the foundations for the digital revolution. As a graduate student at MIT, he worked for Vannevar Bush on the Differential Analyzer. This was one of the last great analog computers, a room full of gears and shafts. Shannon’s frustration with the difficulty of solving problems this way led him in 1937 to write what might be the best master’s thesis ever. In it, he showed how electrical circuits could be designed to evaluate arbitrary logical expressions, introducing the basis for universal digital logic.
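The correspondence Shannon's thesis established is concrete enough to sketch: relay contacts wired in series conduct only when both are closed (AND), contacts wired in parallel conduct when either is closed (OR), and a normally-closed contact inverts (NOT). The following Python sketch is illustrative and not part of the scanned page; the example circuit is invented.

```python
# Shannon's mapping: series contacts act as AND, parallel contacts as OR,
# and a normally-closed relay contact as NOT. Any logical expression can
# then be wired up as a switch network. The circuit below is hypothetical.

def series(x, y):      # current flows only if both switches are closed
    return x and y

def parallel(x, y):    # current flows if either switch is closed
    return x or y

def inverted(x):       # normally-closed contact: conducts when the coil is off
    return not x

def circuit(a, b, c):  # a switch network realizing (a AND b) OR (NOT c)
    return parallel(series(a, b), inverted(c))

# The network agrees with the logical expression on every input combination.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert circuit(a, b, c) == ((a and b) or not c)
print("switch network matches the logical expression on all 8 inputs")
```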
After MIT, Shannon studied communications at Bell Labs. Analog telephone calls degraded with distance; the farther they traveled, the worse they sounded. Rather than continue to improve them incrementally, Shannon showed in 1948 that by communicating with symbols rather than continuous quantities, the behavior is very different. Converting speech waveforms to the binary values of 1 and 0 is an example, but many other sets of symbols can be (and are) used in digital communications. What matters is not the particular symbols but rather the ability to detect and correct errors. Shannon found that if the noise is above a threshold (which depends on the system design), then there are certain to be errors. But if the noise is below a threshold, then a linear increase in the physical resources representing the symbol results in an exponential decrease in the likelihood of making an error in correctly receiving the symbol. This relationship was the first of what we’d now call a threshold theorem.
Such scaling falls off so quickly that the probability of an error can be so small as to effectively never happen. Each symbol sent multiplies rather than adds to the certainty, so that the probability of a mistake can go from 0.1 to 0.01 to 0.001, and so forth. This exponential decrease in communication errors made possible an exponential increase in the capacity of communication networks. And that eventually solved the problem of where the knowledge in an AI system came from.
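The simplest code exhibiting the threshold behavior and the multiplicative error shrinkage described above is repetition with majority voting, a standard textbook illustration rather than anything the text names: send each bit n times and decode by majority vote. Below the threshold, a linear increase in n produces a roughly exponential decrease in decoding error, the 0.1 to 0.01 to 0.001 progression; above it, redundancy stops helping. A minimal sketch:

```python
from math import comb

def majority_error(p, n):
    """Exact probability that a majority vote over n independent copies
    decodes the wrong bit, when each copy is flipped with probability p
    (n is odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below the threshold (p < 0.5): linear growth in copies,
# roughly exponential shrinkage in error.
for n in (1, 3, 5, 7, 9):
    print(f"p=0.1, {n} copies: error = {majority_error(0.1, n):.1e}")

# Above the threshold (p > 0.5): more copies make decoding worse.
print(f"p=0.6, 9 copies: error = {majority_error(0.6, 9):.2f}")
```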
For many years, the fastest way to speed up a computation was to do nothing—just wait for computers to get faster. In the same way, there were years of AI projects that aimed to accumulate everyday knowledge by laboriously entering pieces of information. That didn’t scale; it could progress only as fast as the number of people doing the entering. But when phone calls, newspaper stories, and mail messages all moved onto the Internet, everyone doing any of those things became a data generator. The result was an exponential rather than a linear rate of knowledge accumulation.
John von Neumann also has a cameo in The Human Use of Human Beings, for game theory. What Wiener missed here was von Neumann’s seminal role in digitizing computation. Whereas analog communication degraded with distance, analog computing (like the Differential Analyzer) degraded with time, accumulating errors as it progressed. Von Neumann presented in 1952 a result corresponding to Shannon’s for computation (they had met at the Institute for Advanced Study, in Princeton), showing that it was possible to compute reliably with an unreliable computing device by using symbols rather than continuous quantities. This was, again, a scaling argument, with a linear increase in the physical resources representing the symbol resulting in an exponential reduction in the error rate as long as the noise was below a threshold. That’s what makes it possible to have a billion transistors in a computer chip, with the last one as useful as the first one. This relationship led to an exponential increase in computing performance, which solved a second problem in AI: how to process exponentially increasing amounts of data.
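Von Neumann's actual 1952 construction (multiplexed bundles of gates with restoring stages) is more elaborate than anything that fits here, but its core idea can be sketched under the same assumptions as the repetition example above: replicate an unreliable gate and majority-vote its outputs, so that a linear increase in hardware buys an exponential drop in the logical error rate, as long as each gate fails below a threshold. The gate model and failure rate below are illustrative.

```python
import random

def noisy_nand(a, b, p_fail):
    """An unreliable NAND gate: its output is flipped with probability p_fail."""
    out = 1 - (a & b)
    return out ^ (random.random() < p_fail)

def voted_nand(a, b, p_fail, n_gates):
    """Run n_gates copies of the unreliable gate and majority-vote the outputs
    (only the core of von Neumann's idea, not his full multiplexing scheme)."""
    votes = sum(noisy_nand(a, b, p_fail) for _ in range(n_gates))
    return int(votes > n_gates / 2)

def logical_error_rate(p_fail, n_gates, trials=100_000):
    # NAND(1, 1) should be 0; count how often the voted gate gets it wrong.
    return sum(voted_nand(1, 1, p_fail, n_gates) != 0
               for _ in range(trials)) / trials

# With per-gate failure below the threshold, each tripling of the hardware
# drives the logical error rate sharply toward zero.
for n in (1, 3, 9, 27):
    print(f"{n:>2} gates per logical gate: error = {logical_error_rate(0.05, n):.5f}")
```

As with the communication case, the relationship inverts above the threshold: when each gate fails more than half the time, adding gates makes the vote less reliable rather than more.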
The third problem that scaling solved for AI was coming up with the rules for
