HOUSE_OVERSIGHT_016405.jpg

Extraction Summary

People: 2
Organizations: 3
Locations: 0
Events: 2
Relationships: 2
Quotes: 3

Document Information

Type: Testimony or transcript page
Summary

The text discusses the evolution of artificial intelligence and computer languages, emphasizing the shift from low-level machine instructions to knowledge-based languages that align with human thinking. The speaker highlights a limitation of the Turing Test: bots connected to systems like Wolfram|Alpha lose every time because they answer sophisticated questions too well to pass as human. The text also touches on advances in visual object identification built on neural-network technology whose concepts date to 1943 and were developed further in the 1980s.

People (2)

Name        Role                      Context
McCulloch   Neural-network pioneer    Credited, with Pitts, with the neural-network model the speaker cites (1943)
Pitts       Neural-network pioneer    Credited, with McCulloch, with the same 1943 neural-network model

Organizations (3)

Timeline (2 events)

Neural-network model conceived by McCulloch and Pitts (1943)
Successful OCR (optical character recognition) of alphabet letters (1980s)

Relationships (2)

Key Quotes (3)

"My approach was to make a language that panders not to the computers but to the humans"
Source
HOUSE_OVERSIGHT_016405.jpg
Quote #1
"In that sense, we’ve already achieved good AI, at that level."
Source
HOUSE_OVERSIGHT_016405.jpg
Quote #2
"people who’ve tried connecting, for example, Wolfram|Alpha to their Turing Test bots find that the bots lose every time."
Source
HOUSE_OVERSIGHT_016405.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,769 characters)

and you turn that special capability to a human purpose, to something you want
technology to do. In the case of magnetic materials, there are plenty of ways to do that.
In terms of programs, it’s the same story. There are all kinds of programs out there, even
tiny programs that do complicated things. Could we entrain them for some useful human
purpose?
And how do you get AIs to execute your goals? One answer is to just talk to
them, in the natural language of human utterances. It works pretty well when you’re
talking to Siri. But when you want to say something longer and more complicated, it
doesn’t work well. You need a computer language that can represent sophisticated
concepts in a way that can be progressively built up and isn’t possible in natural
language. What my company spent a lot of time doing was building a knowledge-based
language that incorporates the knowledge of the world directly into the language. The
traditional approach to creating a computer language is to make a language that
represents operations that computers intrinsically know how to do: allocating memory,
setting values of variables, iterating things, changing program counters, and so on.
Fundamentally, you’re telling computers to do things in your own terms. My approach
was to make a language that panders not to the computers but to the humans, to take
whatever a human thinks of and convert it into some form that the computer can
understand. Could we encapsulate the knowledge we’d accumulated, both in science and
in data collection, into a language we could use to communicate with computers? That’s
the big achievement of my last thirty years or so—being able to do that.
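The contrast drawn here can be sketched in ordinary Python rather than the speaker's own knowledge-based language: the first version spells out the machine's own terms (set a variable, iterate, accumulate), while the second states the intent and leans on a hypothetical knowledge_query built-in that already carries real-world data. Both the helper name and the example query are assumptions for illustration only.

```python
# A minimal sketch of the two styles of language described above.
# knowledge_query() is hypothetical; it stands in for a built-in that
# embeds knowledge of the world directly in the language.

# 1. The traditional style: instructions in the computer's own terms.
values = [3, 1, 4, 1, 5, 9, 2, 6]
total = 0                     # set a variable...
for v in values:              # ...iterate...
    total += v                # ...accumulate
print(total)

# 2. The knowledge-based style: state what you mean; the language supplies
#    both the operation and the underlying data (hypothetical call).
# print(knowledge_query("total population of the G7 countries"))
```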
Back in the 1960s, people would say things like, “When we can do such-and-
such, we’ll know we have AI. When we can do an integral from a calculus course, we’ll
know we have AI. When we can have a conversation with a computer and make it seem
human…,” et cetera. The difficulty was, “Well, gosh, the computer just doesn’t know
enough about the world.” You’d ask the computer what day of the week it was, and it
might be able to answer that. You’d ask it who the President was, and it probably
couldn’t tell you. At that point, you’d know you were talking to a computer and not a
person. But now when it comes to these Turing Tests, people who’ve tried connecting,
for example, Wolfram|Alpha to their Turing Test bots find that the bots lose every time.
Because all you have to do is start asking the machine sophisticated questions and it will
answer them! No human can do that. By the time you’ve asked it a few disparate
questions, there will be no human who knows all those things, yet the system will know
them. In that sense, we’ve already achieved good AI, at that level.
Then there are certain kinds of tasks easy for humans but traditionally very hard
for machines. The standard one is visual object identification: What is this object?
Humans can recognize it and give some simple description of it, but a computer was just
hopeless at that. A couple of years ago, though, we brought out a little image-
identification system, and many other companies have done something similar—ours
happens to be somewhat better than the rest. You show it an image, and for about ten
thousand kinds of things, it will tell you what it is. It’s fun to show it an abstract painting
and see what it says. But it does a pretty good job.
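As a rough sketch of what such an image-identification system does (not the speaker's product), a generic pretrained convolutional network can be queried in a few lines; the file name below is a placeholder and the model choice is arbitrary.

```python
# A minimal sketch of modern image identification using a generic pretrained
# network (not the system described in the text). Assumes torch/torchvision
# are installed; "example.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)        # add a batch dimension

with torch.no_grad():
    logits = model(batch)
best = logits.argmax(dim=1).item()
print(weights.meta["categories"][best])       # human-readable class label
```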
It works using the same neural-network technology that McCulloch and Pitts
imagined in 1943 and lots of us worked on in the early eighties. Back in the 1980s,
people successfully did OCR—optical character recognition. They took the twenty-six
letters of the alphabet and said, “OK, is that an A? Is that a B? Is that a C?” and so on.
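The threshold unit McCulloch and Pitts described in 1943 is simple enough to write out directly. The sketch below, with illustrative weights and thresholds, shows a single unit computing logical AND and OR over binary inputs; it is the building block that later OCR and image-identification networks stacked in layers and trained at scale.

```python
# A minimal sketch of the 1943 McCulloch-Pitts neuron: a unit that fires
# (outputs 1) when the weighted sum of its binary inputs reaches a threshold.
# Weights and thresholds here are illustrative.

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Classic demonstrations: logical AND and OR as single threshold units.
for a in (0, 1):
    for b in (0, 1):
        and_out = mcculloch_pitts_unit([a, b], weights=[1, 1], threshold=2)
        or_out = mcculloch_pitts_unit([a, b], weights=[1, 1], threshold=1)
        print(f"a={a} b={b}  AND={and_out}  OR={or_out}")
```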
