HOUSE_OVERSIGHT_025958.jpg

Extraction Summary

People: 2
Organizations: 4
Locations: 0
Events: 1
Relationships: 1
Quotes: 4

Document Information

Type: Email correspondence
File Size: 2.54 MB
Summary

This document is a page from an email written by 'Joscha' (likely the cognitive scientist Joscha Bach) discussing the state of artificial intelligence and machine learning. The text covers universal cortical learning, Google's image recognition errors, DeepMind's reinforcement learning on Atari games, and criticism of Noam Chomsky's linguistic theories. The author also references their own presentation on 'request-confirmation networks' at a NIPS workshop and makes a controversial claim correlating race, motor development, and IQ.

People (2)

Joscha
Role: Sender
Context: Cognitive scientist/AI researcher explaining machine learning concepts; likely Joscha Bach based on the name and subj...

Noam
Role: Subject of discussion
Context: Referenced regarding his criticism of machine translation (likely Noam Chomsky).

Organizations (4)

Google
Context: Mentioned regarding image recognition apps, Latent Semantic Analysis models, and the acquisition of DeepMind.

DeepMind
Context: Mentioned as having been acquired by Google for 500M on the strength of its Atari game-learning feat.

NIPS
Context: Conference where Joscha introduced 'request-confirmation networks'.

WSJ
Context: Wall Street Journal, referenced in a URL.

Timeline (1 event)

December (year prior to email)
Event: NIPS workshop
Details: Unknown

Relationships (1)

Joscha and Noam (Chomsky): professional/academic relationship.
Joscha analyzes and pushes back on Noam's criticism of machine translation.

Key Quotes (4)

"In humans, it is reflected for instance by the fact that races with faster motor development have lower IQ."
Source
HOUSE_OVERSIGHT_025958.jpg
Quote #1
"Google has built automatic image recognition into their current photo app"
Source
HOUSE_OVERSIGHT_025958.jpg
Quote #2
"Noam's criticism of machine translation mostly applies to the Latent Semantic Analysis models"
Source
HOUSE_OVERSIGHT_025958.jpg
Quote #3
"The 'request-confirmation networks' that I have introduced at a NIPS workshop in last the December are an attempt at modeling how the higher layers might self-organize into cognitive programs."
Source
HOUSE_OVERSIGHT_025958.jpg
Quote #4

Full Extracted Text

Complete text extracted from the document (3,293 characters)

understood, and everything else is something the universal cortical learning figures out on its own.
This is a hypothesis that is shared by a growing number of people these days. In humans, it is reflected for instance by the fact that races with faster motor development have lower IQ. (In individuals of the same group, slower development often indicates defects, of course.)
Another support comes from machine learning: we find that the same learning functions can learn visual and auditory pattern recognition, and even end-to-end learning. Google has built automatic image recognition into their current photo app:
http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/
The state of the art in research can do better than that: it can begin to "imagine" things. That is, when the experimenter asks the system to "dream" what a certain object looks like, the system can produce a somewhat compelling image, which indicates that it is indeed learning visual structure. This stuff is something nobody could do a few months ago:
http://www.creativeai.net/posts/Mv4WG6rdzAerZF7ch/synthesizing-preferred-inputs-via-deep-generator-networks
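To make the "dreaming" idea concrete, here is a minimal sketch of activation maximization, the general technique behind synthesizing preferred inputs: start from noise and run gradient ascent on the input itself to maximize one output unit. The toy model below is a randomly initialized linear layer, invented purely for illustration; the systems in the link above use trained deep networks and deep generator priors.

import numpy as np

# Toy "network": one linear layer producing 10 class scores from a
# 64-dimensional "image". Weights are random, purely illustrative;
# a real system would use a trained deep net and backpropagation.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))

def synthesize_preferred_input(cls, steps=200, lr=0.1, decay=1e-2):
    """Gradient ascent on the input to maximize one class score."""
    x = rng.normal(scale=0.01, size=64)   # start from small noise
    for _ in range(steps):
        grad = W[cls]                     # d(score_cls)/dx for a linear model
        x += lr * (grad - decay * x)      # ascend, with L2 regularization
    return x

preferred = synthesize_preferred_input(cls=3)
print("score for class 3:", (W @ preferred)[3])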
A machine learning program that can learn how to play an Atari game without any human supervision or hand-crafted engineering (the feat that gave DeepMind 500M from Google) now just takes about 130 lines of Python code.
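For a sense of what those lines do, here is a hedged, much smaller illustration of the underlying idea: tabular Q-learning on an invented five-state chain world, learning an action policy from reward alone, with no hand-crafted features. This is not DeepMind's DQN (which learns from raw pixels with a neural network), only the core reinforcement-learning update it builds on.

import random

# Invented toy environment: states 0..4 on a chain, actions 0 (left)
# and 1 (right); reward 1.0 only on reaching state 4.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(300):                               # episodes
    s = 0
    for _ in range(100):                           # cap episode length
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)                # explore / break ties
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1      # exploit
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

print("learned policy:", ["right" if Q[s][1] > Q[s][0] else "left"
                          for s in range(GOAL)])

After a few hundred episodes the greedy policy is "right" in every non-terminal state; the same update rule, with a deep network standing in for the table, is what the Atari result scaled up.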
These models do not have interesting motivational systems, and they have a relatively simple architecture. They currently seem to mimic some of the stuff that goes on in the first few layers of the cortex. They learn object features, visual styles, lighting and rotation in 3D, and simple action policies. Almost everything else is missing. But there is a lot of enthusiasm that the field might be on the right track, and that we can learn motor simulations and intuitive physics soon. (The majority of the people in AI do not work on this, however; they try to improve performance on the current benchmarks.)
Noam's criticism of machine translation mostly applies to the Latent Semantic Analysis models that Google and others have been using for many years. These models map linguistic symbols to concepts, and relate concepts to each other, but they do not relate the concepts to "proper" mental representations of what objects and processes look like and how they interact. Concepts are probably one of the top layers of the learning hierarchy, i.e. they are acquired *after* we learn to simulate a mental world, not before. Classical linguists ignored the simulation of a mental world entirely.
It seems miraculous that purely conceptual machine translation works at all, but that is because concepts are shared between speakers, so the structure of the conceptual space can be inferred from the statistics of language use. But the statistics of language use have too little information to infer what objects look like and how they interact.
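A minimal sketch of the Latent Semantic Analysis idea referred to above: factor a word-by-document count matrix with a truncated SVD so that words used in similar contexts land near each other in a low-dimensional "concept" space. The four-document corpus and the choice of two dimensions are invented for illustration; production systems use far larger corpora and weighting schemes.

import numpy as np

# Invented toy corpus: each "document" is a bag of words.
docs = [
    "cat dog pet animal",
    "dog pet leash walk",
    "stock market trade price",
    "market price trade invest",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Word-by-document count matrix.
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[idx[w], j] += 1

# Truncated SVD: keep k latent "concept" dimensions.
U, S, _ = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]              # word coordinates in concept space

def sim(w1, w2):
    a, b = word_vecs[idx[w1]], word_vecs[idx[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

print(sim("dog", "pet"))     # high: the words share contexts
print(sim("dog", "price"))   # low: different contexts

Note that nothing here encodes what a dog looks like; the similarity structure comes entirely from co-occurrence statistics, which is exactly the limitation the email describes.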
My own original ideas concern a few parts of the emerging understanding of what the brain does. The "request-confirmation networks" that I have introduced at a NIPS workshop last December are an attempt at modeling how the higher layers might self-organize into cognitive programs.
Cheers!
Joscha
HOUSE_OVERSIGHT_025958
