HOUSE_OVERSIGHT_016404.jpg

Extraction Summary

People: 5
Organizations: 2
Locations: 0
Events: 3
Relationships: 2
Quotes: 4

Document Information

Type: Manuscript page / essay excerpt
Summary

This document appears to be page 184 of a manuscript or essay, likely written by Stephen Wolfram. It recounts the history of Artificial Intelligence: the stagnation that followed Minsky and Papert's work on perceptrons, the rise and fall of expert systems, and the narrator's return, between mid-2002 and 2003, to the idea of a computational knowledge system built on computation rather than brain simulation, the insight that led to Wolfram|Alpha. The document bears a House Oversight Bates stamp, indicating it was part of a document production, likely related to the investigation into Jeffrey Epstein's connections with scientists.

People (5)

Frank Rosenblatt (Inventor): Invented the perceptron, a one-layer neural network.
Marvin Minsky (Author/Researcher): Co-wrote the book 'Perceptrons' in the late sixties.
Seymour Papert (Author/Researcher): Co-wrote the book 'Perceptrons' in the late sixties.
von Neumann (Mathematician/Computer Scientist): Mentioned regarding the origins of computing and neural networks.
Narrator, implied to be Stephen Wolfram (Author/Creator): First-person narrator discussing the creation of Wolfram|Alpha and their history with AI research.

Organizations (2)

Wolfram|Alpha: Computational knowledge engine created by the narrator.
House Oversight Committee: Implied by the Bates stamp 'HOUSE_OVERSIGHT_016404'.

Timeline (3 events)

Late sixties: Publication of the book 'Perceptrons' by Minsky and Papert.
Early seventies: Failure/crash of the symbolic AI approach.
Mid-2002 to 2003: Narrator revisited the concept of a computational knowledge system, leading to a change in perspective. (Participant: Narrator)

Relationships (2)

Marvin Minsky and Seymour Papert (co-authors): Together wrote the book titled 'Perceptrons'.
Narrator and Wolfram|Alpha (creator/invention): "This insight is what led to Wolfram|Alpha."

Key Quotes (4)

Quote #1: "This insight is what led to Wolfram|Alpha."
Quote #2: "I had assumed that there was some magic mechanism that made us vastly more capable than anything that was just computational. But that assumption was wrong."
Quote #3: "People said, 'These guys have written a proof that neural networks can’t do anything interesting, therefore no neural networks can do anything interesting, so let’s forget about neural networks.'"
Quote #4: "Now I realized that there wasn’t a bright line between what is intelligent and what is simply computational."

Source (all quotes): HOUSE_OVERSIGHT_016404.jpg

Full Extracted Text

Complete text extracted from the document (3,624 characters)

von Neumann and others on computers came directly not from Turing Machines but
through this bypath of neural networks.
But simple neural networks didn’t do much. Frank Rosenblatt invented a learning
device he called the perceptron, which was a one-layer neural network. In the late sixties,
Marvin Minsky and Seymour Papert wrote a book titled Perceptrons, in which they
basically proved that perceptrons couldn’t do anything interesting, which is correct.
Perceptrons could only make linear distinctions between things. So the idea was more or
less dropped. People said, “These guys have written a proof that neural networks can’t
do anything interesting, therefore no neural networks can do anything interesting, so let’s
forget about neural networks.” That attitude persisted for some time.
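[Editorial note: the linear-separability limit described above can be made concrete with a short sketch. This code is an illustrative addition, not part of the extracted document; the helper `train_perceptron` and its parameters are invented for the example. A one-layer perceptron learns AND, which a single line can separate, but can never learn XOR, which no single line can.]

```python
# Minimal Rosenblatt-style perceptron (illustrative sketch, not from the
# document). Integer inputs, weights, and learning rate keep the
# arithmetic exact.

def train_perceptron(samples, epochs=100, lr=1):
    """Train on samples of ((x1, x2), target) with 0/1 targets."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y  # Rosenblatt's rule: nudge weights by the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND is linearly separable, so training converges to a correct classifier.
f_and = train_perceptron([(x, int(x[0] and x[1])) for x in inputs])
print([f_and(*x) for x in inputs])  # [0, 0, 0, 1]

# XOR is not linearly separable: the weights thrash forever and the
# learned classifier can never output the target pattern [0, 1, 1, 0].
f_xor = train_perceptron([(x, int(x[0] != x[1])) for x in inputs])
print([f_xor(*x) for x in inputs])
```

Stacking layers removes the limitation, but the learning rule above only adjusts one layer, which is exactly the gap Minsky and Papert's critique exposed.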
Meanwhile, there were a couple of other approaches to AI. One was based on
understanding, at a formal level, symbolically, how the world works; and the other was
based on doing statistics and probabilistic kinds of things. With regard to symbolic AI,
one of the test cases was, Can we teach a computer to do calculus? Can we teach a
computer to do something like integrals? There were tasks like machine translation, which
people thought would be a good example of what computers could do. The bottom line is
that by the early seventies, that approach had crashed.
Then there was a trend toward devices called expert systems, which arose in the
late seventies and early eighties. The idea was to have a machine learn the rules that an
expert uses and thereby figure out what to do. That petered out. After that, AI became
little more than a crazy pursuit.
~~~
I had been interested in how you make an AI-like machine since I was a kid. I was
interested particularly in how you take the knowledge we humans have accumulated in
our civilization and automate answering questions on the basis of that knowledge. I
thought about how you could do that symbolically, by building a system that could break
down questions into symbolic units and answer them. I worked on neural networks at
that time and didn’t make much progress, so I put it aside for a while.
Back in mid-2002 to 2003, I thought about that question again: What does it take
to make a computational knowledge system? The work I’d done by then pretty much
showed that my original belief about how to do this was completely wrong. My original
belief had been that in order to make a serious computational knowledge system, you first
had to build a brainlike device and then feed it knowledge—just as humans learn in
standard education. Now I realized that there wasn’t a bright line between what is
intelligent and what is simply computational.
I had assumed that there was some magic mechanism that made us vastly more
capable than anything that was just computational. But that assumption was wrong. This
insight is what led to Wolfram|Alpha. What I discovered is that you can take a large
collection of the world’s knowledge and automatically answer questions on the basis of
it, using what are essentially merely computational techniques. It was an alternative way
to do engineering—a way that’s much more analogous to what biology does in evolution.
In effect, what you normally do when you build a program is build it step-by-step.
But you can also explore the computational universe and mine technology from that
universe. Typically, the challenge is the same as in physical mining: That is, you find a
supply of, let’s say, iron, or cobalt, or gadolinium, with some special magnetic properties,
184
HOUSE_OVERSIGHT_016404
