HOUSE_OVERSIGHT_016987.jpg

2.37 MB

Extraction Summary

People: 5
Organizations: 2
Locations: 0
Events: 2
Relationships: 2
Quotes: 4

Document Information

Type: Article / book excerpt / interview transcript
File Size: 2.37 MB
Summary

This document appears to be a page from a book or interview transcript (page 184) included in House Oversight documents. It features a first-person narrative, likely by Stephen Wolfram, discussing the history of Artificial Intelligence, neural networks, and the development of his system, Wolfram|Alpha. The text reviews the history of AI from the perceptron to expert systems and details the narrator's shift in thinking regarding computational knowledge systems between 2002 and 2003. There is no direct mention of Jeffrey Epstein on this specific page.

People (5)

Stephen Wolfram (Narrator / Speaker, implied): Speaker discussing their work, history with AI, and creation of Wolfram|Alpha. Name not explicitly in text, but inferred from context.
von Neumann (Computer Scientist): Mentioned regarding the origins of computers and neural networks.
Frank Rosenblatt (Inventor): Invented the perceptron, a one-layer neural network.
Marvin Minsky (Author / Scientist): Co-wrote the book 'Perceptrons' in the late sixties.
Seymour Papert (Author / Scientist): Co-wrote the book 'Perceptrons' in the late sixties.

Organizations (2)

Wolfram|Alpha: Computational knowledge engine created by the narrator.
House Oversight Committee: Implied by the Bates stamp 'HOUSE_OVERSIGHT_016987'.

Timeline (2 events)

Late sixties: Marvin Minsky and Seymour Papert wrote the book 'Perceptrons'.
Mid-2002 to 2003: Narrator (Stephen Wolfram) revisited the question of how to make a computational knowledge system.

Relationships (2)

Marvin Minsky and Seymour Papert (co-authors): Together they wrote a book titled 'Perceptrons'.
Stephen Wolfram (Narrator) and Wolfram|Alpha (creator/creation): "This insight is what led to Wolfram|Alpha."

Key Quotes (4)

"This insight is what led to Wolfram|Alpha."
Source
HOUSE_OVERSIGHT_016987.jpg
Quote #1
"I had assumed that there was some magic mechanism that made us vastly more capable than anything that was just computational. But that assumption was wrong."
Source
HOUSE_OVERSIGHT_016987.jpg
Quote #2
"Frank Rosenblatt invented a learning device he called the perceptron, which was a one-layer neural network."
Source
HOUSE_OVERSIGHT_016987.jpg
Quote #3
"Marvin Minsky and Seymour Papert wrote a book titled Perceptrons, in which they basically proved that perceptrons couldn’t do anything interesting, which is correct."
Source
HOUSE_OVERSIGHT_016987.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,584 characters)

von Neumann and others on computers came directly not from Turing Machines but
through this bypath of neural networks.
But simple neural networks didn’t do much. Frank Rosenblatt invented a learning
device he called the perceptron, which was a one-layer neural network. In the late sixties,
Marvin Minsky and Seymour Papert wrote a book titled Perceptrons, in which they
basically proved that perceptrons couldn’t do anything interesting, which is correct.
Perceptrons could only make linear distinctions between things. So the idea was more or
less dropped. People said, “These guys have written a proof that neural networks can’t
do anything interesting, therefore no neural networks can do anything interesting, so let’s
forget about neural networks.” That attitude persisted for some time.
Meanwhile, there were a couple of other approaches to AI. One was based on
understanding, at a formal level, symbolically, how the world works; and the other was
based on doing statistics and probabilistic kinds of things. With regard to symbolic AI,
one of the test cases was, Can we teach a computer to do something like integrals? Can
we teach a computer to do calculus? There were tasks like machine translation, which
people thought would be a good example of what computers could do. The bottom line is
that by the early seventies, that approach had crashed.
Then there was a trend toward devices called expert systems, which arose in the
late seventies and early eighties. The idea was to have a machine learn the rules that an
expert uses and thereby figure out what to do. That petered out. After that, AI became
little more than a crazy pursuit.
~~~
I had been interested in how you make an AI-like machine since I was a kid. I was
interested particularly in how you take the knowledge we humans have accumulated in
our civilization and automate answering questions on the basis of that knowledge. I
thought about how you could do that symbolically, by building a system that could break
down questions into symbolic units and answer them. I worked on neural networks at
that time and didn’t make much progress, so I put it aside for a while.
Back in mid-2002 to 2003, I thought about that question again: What does it take
to make a computational knowledge system? The work I’d done by then pretty much
showed that my original belief about how to do this was completely wrong. My original
belief had been that in order to make a serious computational knowledge system, you first
had to build a brainlike device and then feed it knowledge—just as humans learn in
standard education. Now I realized that there wasn’t a bright line between what is
intelligent and what is simply computational.
I had assumed that there was some magic mechanism that made us vastly more
capable than anything that was just computational. But that assumption was wrong. This
insight is what led to Wolfram|Alpha. What I discovered is that you can take a large
collection of the world’s knowledge and automatically answer questions on the basis of
it, using what are essentially merely computational techniques. It was an alternative way
to do engineering—a way that’s much more analogous to what biology does in evolution.
In effect, what you normally do when you build a program is build it step-by-step.
But you can also explore the computational universe and mine technology from that
universe. Typically, the challenge is the same as in physical mining: That is, you find a
supply of, let’s say, iron, or cobalt, or gadolinium, with some special magnetic properties,
184
HOUSE_OVERSIGHT_016987
