doing because we designed the algorithms at their heart. So when our computers
generate a result, we feel that we intellectually grasp it.
The new machine-learning programs are different. Having recognized patterns
via deep neural networks, they come up with conclusions, and we have no idea exactly
how. When they uncover relationships, we don’t understand them in the same way we would if we had deduced those relationships ourselves using an underlying theoretical framework. As
data sets become larger, we won’t be able to analyze them ourselves even with the help
of computers; rather, we will rely entirely on computers to do the analysis for us. So if
someone asks us how we know something, we will simply say it is because the machine
analyzed the data and produced the conclusion.
One day a computer may well come up with an entirely new result—e.g., a
mathematical theorem whose proof, or even whose statement, no human can understand.
That is philosophically different from the way we have been doing science, or at least the way we thought we had been doing it; some might argue that we don’t know how our own brains reach
conclusions either, and that these new methods are a way of mimicking learning by the
human brain. Nevertheless, I find this potential loss of understanding disturbing.
Despite the remarkable advances in computing, the hype about AGI—a general-
intelligence machine that will think like a human and possibly develop consciousness—
smacks of science fiction to me, partly because we don’t understand the brain at that level
of detail. Not only do we not understand what consciousness is; we don’t even understand a seemingly simple problem like how we remember a phone number. In just
that one question, there are all sorts of things to consider. How do we know it is a
number? How do we associate it with a person, a name, a face, and other characteristics?
Even such seemingly trivial questions involve everything from high-level cognition and
memory to how a cell stores information and how neurons interact.
Moreover, that’s just one task among many that the brain does effortlessly.
While machines will no doubt do ever more amazing things, they’re unlikely to be a replacement for human thought, creativity, and vision. Eric Schmidt, former
chairman of Google’s parent company, said in a recent interview at the London Science
Museum that even designing a robot that would clear the table, wash the dishes, and put
them away was a huge challenge. The calculations involved in figuring out all the movements the body has to make to throw a ball accurately or ski a slalom course are prodigious. The brain can do all of these things and also do mathematics and music, and invent
games like chess and Go, not just play them. We tend to underestimate the complexity
and creativity of the human brain and how amazingly general it is.
If AI is to become more humanlike in its abilities, the machine-learning and
neuroscience communities need to interact closely, something that is happening already.
Some of today’s greatest exponents of machine learning—such as Geoffrey Hinton,
Zoubin Ghahramani, and Demis Hassabis—have backgrounds in cognitive neuroscience,
and their success has been at least in part due to attempts to model brainlike behavior in
their algorithms. At the same time, neurobiology has also flourished. All sorts of tools have been developed to watch which neurons are firing, to manipulate them genetically, and to see in real time how they respond to inputs. Several countries have launched
moon-shot neuroscience initiatives to see if we can crack the workings of the brain.
Advances in AI and neuroscience seem to go hand in hand; each field can propel the
other.