HOUSE_OVERSIGHT_016376.jpg

Extraction Summary

People: 4
Organizations: 2
Locations: 0
Events: 0
Relationships: 2
Quotes: 3

Document Information

Type: Academic/scientific text (book or article page) included in a House Oversight production
Summary

This document appears to be page 156 of a book or academic paper on artificial intelligence, comparing bottom-up and top-down machine learning approaches and contrasting both with human cognitive development in children. It describes "blicket detector" experiments and cites a 2015 paper by A. Gopnik, T. Griffiths, and C. Lucas. Although the document bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was part of a government production (possibly related to Epstein's scientific funding or associations), the text itself is purely academic and contains no direct references to Jeffrey Epstein or his associates.

People (4)

Lake et al. (Researchers): Gave a program a general model of how to draw a character.
A. Gopnik (Author/Researcher): Cited in footnote 38; likely the author of the main text, which refers to "our lab".
T. Griffiths (Researcher): Cited in footnote 38 as co-author.
C. Lucas (Researcher): Cited in footnote 38 as co-author.

Organizations (2)

Google (Company): Mentioned in the context of 'Google Translate'.
House Oversight Committee (Government body): Implied by the Bates stamp 'HOUSE_OVERSIGHT'.

Relationships (2)

A. Gopnik and T. Griffiths (co-authors): Cited together in footnote 38.
A. Gopnik and C. Lucas (co-authors): Cited together in footnote 38.

Key Quotes (3)

"The recent success of AI is partly the result of extensions of those old ideas."
Source
HOUSE_OVERSIGHT_016376.jpg
Quote #1
"But the truly remarkable thing about human children is that they somehow combine the best features of each approach and then go way beyond them."
Source
HOUSE_OVERSIGHT_016376.jpg
Quote #2
"Google Translate works because it takes advantage of millions of human translations and generalizes them to a new piece of text, rather than genuinely understanding the sentences themselves."
Source
HOUSE_OVERSIGHT_016376.jpg
Quote #3

Full Extracted Text

Complete text extracted from the document (3,632 characters)

The bottom-up method for recognizing handwritten characters is to give the computer thousands of examples of each one and let it pull out the salient features. Instead, Lake et al. gave the program a general model of how you draw a character: A stroke goes either right or left; after you finish one, you start another; and so on. When the program saw a particular character, it could infer the sequence of strokes that were most likely to have led to it—just as I inferred that the spam process led to my dubious email. Then it could judge whether a new character was likely to result from that sequence or from a different one, and it could produce a similar set of strokes itself. The program worked much better than a deep-learning program applied to exactly the same data, and it closely mirrored the performance of human beings.
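A toy version of this stroke-program idea can be sketched as follows. This is my own illustration, not Lake et al.'s actual model: the stroke alphabet, the two characters, and the noise rate are all invented for the example. A new drawing is classified by asking which character's canonical stroke program was most likely to have produced it.

```python
# Toy sketch (invented, not Lake et al.'s model): each character is a
# short "program" of strokes; a drawing is classified by the program
# most likely to have produced it, allowing for noisy strokes.

NOISE = 0.1  # chance any single stroke is drawn differently than intended

# Canonical stroke programs: R = stroke right, L = stroke left, U = up.
programs = {
    "char_A": ["R", "L", "U"],
    "char_B": ["R", "R", "L"],
}

def likelihood(drawing, program):
    """P(drawing | program): each stroke matches with prob 1-NOISE."""
    if len(drawing) != len(program):
        return 0.0
    p = 1.0
    for drawn, intended in zip(drawing, program):
        p *= (1 - NOISE) if drawn == intended else NOISE
    return p

def classify(drawing):
    """Pick the character whose stroke program best explains the drawing."""
    return max(programs, key=lambda c: likelihood(drawing, programs[c]))

# A drawing one noisy stroke away from char_A's program.
print(classify(["R", "L", "R"]))
```

Because the model is generative, the same programs could also be sampled from to produce new, similar drawings, mirroring the "produce a similar set of strokes itself" ability described above.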
These two approaches to machine learning have complementary strengths and weaknesses. In the bottom-up approach, the program doesn’t need much knowledge to begin with, but it needs a great deal of data, and it can generalize only in a limited way. In the top-down approach, the program can learn from just a few examples and make much broader and more varied generalizations, but you need to build much more into it to begin with. A number of investigators are currently trying to combine the two approaches, using deep learning to implement Bayesian inference.
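The top-down ability to learn from just a few examples can be illustrated with a small Bayesian sketch. This is my own toy example, not anything from the text: the candidate rules, the uniform prior, and the "size principle" likelihood (smaller rules explain their examples more strongly) are all assumptions made for illustration.

```python
# Toy sketch of top-down learning: Bayesian inference over a handful of
# candidate rules, updated from one or two positive examples.
from fractions import Fraction

# Candidate rules for which numbers an imagined detector accepts.
hypotheses = {
    "even numbers":   {2, 4, 6, 8, 10},
    "powers of 2":    {2, 4, 8},
    "multiples of 5": {5, 10},
}

def posterior(examples):
    """Posterior over rules given positive examples: uniform prior,
    and 'size principle' likelihood 1/|rule| per example."""
    scores = {}
    for name, members in hypotheses.items():
        if all(x in members for x in examples):
            scores[name] = Fraction(1, len(members)) ** len(examples)
        else:
            scores[name] = Fraction(0)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# One example leaves several rules plausible; a second example
# concentrates belief on the smallest rule consistent with both.
print(posterior([2]))
print(posterior([2, 8]))
```

The point of the sketch is the data efficiency: two examples suffice to shift belief sharply, which is the behavior the text attributes to top-down systems (and to four-year-olds).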
The recent success of AI is partly the result of extensions of those old ideas. But it has more to do with the fact that, thanks to the Internet, we have much more data, and thanks to Moore’s Law we have much more computational power to apply to that data. Moreover, an unappreciated fact is that the data we do have has already been sorted and processed by human beings. The cat pictures posted to the Web are canonical cat pictures—pictures that humans have already chosen as “good” pictures. Google Translate works because it takes advantage of millions of human translations and generalizes them to a new piece of text, rather than genuinely understanding the sentences themselves.
But the truly remarkable thing about human children is that they somehow combine the best features of each approach and then go way beyond them. Over the past fifteen years, developmentalists have been exploring the way children learn structure from data. Four-year-olds can learn by taking just one or two examples of data, as a top-down system does, and generalizing to very different concepts. But they can also learn new concepts and models from the data itself, as a bottom-up system does.
For example, in our lab we give young children a “blicket detector”—a new machine to figure out, one they’ve never seen before. It’s a box that lights up and plays music when you put certain objects on it but not others. We give children just one or two examples of how the machine works, showing them that, say, two red blocks make it go, while a green-and-yellow combination doesn’t. Even eighteen-month-olds immediately figure out the general principle that the two objects have to be the same to make it go, and they generalize that principle to new examples: For instance, they will choose two objects that have the same shape to make the machine work. In other experiments, we’ve shown that children can even figure out that some hidden invisible property makes the machine go, or that the machine works on some abstract logical principle.38
38 A. Gopnik, T. Griffiths & C. Lucas, “When younger learners can be better (or at least more open-minded) than older ones,” Curr. Dir. Psychol. Sci. 24(2): 87–92 (2015).
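The blicket-detector inference can also be sketched as rule comparison. This is my own toy, not the lab's actual model: the two candidate rules, the noise level, and the extra generalization trial are all assumptions made for the example.

```python
# Toy sketch (invented, not the lab's model): comparing two candidate
# rules for the blicket detector against the demonstrations described
# in the text, allowing a small chance of machine error.

NOISE = 0.05  # small chance the machine misbehaves on any one trial

def activates(pair, rule):
    """Would the machine go for this pair of objects under this rule?"""
    a, b = pair
    if rule == "same":          # the two objects must match
        return a == b
    if rule == "contains red":  # at least one object must be red
        return "red" in (a, b)
    raise ValueError(rule)

def likelihood(rule, trials):
    """P(observed outcomes | rule), allowing a small error rate."""
    p = 1.0
    for pair, went in trials:
        p *= (1 - NOISE) if activates(pair, rule) == went else NOISE
    return p

# The two demonstrations from the text: both rules fit them equally.
demos = [(("red", "red"), True), (("green", "yellow"), False)]
for rule in ("same", "contains red"):
    print(rule, likelihood(rule, demos))

# A generalization trial like the children's own choices (two matching
# blue objects make it go) would separate the rules, favoring "same".
probe = demos + [(("blue", "blue"), True)]
for rule in ("same", "contains red"):
    print(rule, likelihood(rule, probe))
```

The sketch shows why generalization choices are informative: the demonstrations alone cannot distinguish the rules, but the children's same-shape choices are exactly the trials that favor the relational "same" rule.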
156
HOUSE_OVERSIGHT_016376
