
HOUSE_OVERSIGHT_026397.jpg


Extraction Summary

People: 3
Organizations: 4
Locations: 0
Events: 1
Relationships: 2
Quotes: 3

Document Information

Type: Email
File Size: 2.02 MB
Summary

This document is an email from an individual named Joscha to an unnamed recipient, discussing advanced topics in artificial intelligence and machine learning, referencing Google, DeepMind, and Noam Chomsky. A legal disclaimer at the bottom indicates the communication is the property of 'JEE' (likely Jeffrey E. Epstein) and provides a contact email, 'jeevacation@gmail.com'. The footer 'HOUSE_OVERSIGHT_026397' suggests the document is from a collection related to a U.S. House of Representatives investigation.

People (3)

Joscha (Sender): Author of the email discussing machine learning and AI concepts; mentions introducing "request-confirmation networks" at a NIPS workshop.
Noam (Referenced Individual): Likely Noam Chomsky; his criticism of machine translation models is mentioned in the email.
JEE (Owner of Communication): An acronym, likely for Jeffrey E. Epstein, stated in the legal disclaimer as the owner of the communication.

Organizations (4)

Google: Mentioned as having acquired DeepMind for 500M and as using Latent Semantic Analysis models for machine translation.
DeepMind: Mentioned for creating a machine learning program that learned to play an Atari game, a feat that led to its acquisition by Google.
NIPS: The workshop venue where Joscha introduced his ideas on "request-confirmation networks". NIPS stands for Neural Information Processing Systems.
House Oversight Committee: Implied by the Bates number 'HOUSE_OVERSIGHT_026397' in the footer, indicating the document is part of a collection from a U.S. House of Representatives investigation.

Timeline (1 event)

Last December (relative to the email's date): Joscha introduced 'request-confirmation networks' at a NIPS workshop.

Relationships (2)

Joscha → JEE (communicated on behalf of, or via infrastructure owned by, JEE): Joscha sent an email containing a legal disclaimer stating it is the 'property of JEE' and providing a 'JEE'-related contact email.
Google → DeepMind (corporate acquisition): The text states that Google gave DeepMind 500M, referring to the acquisition.

Key Quotes (3)

Quote #1 (Source: HOUSE_OVERSIGHT_026397.jpg):
"A machine learning program that can learn how to play an Atari game without any human supervision or hand-crafted engineering (the feat that gave DeepMind 500M from Google) now just takes about 130 lines of Python code."

Quote #2 (Source: HOUSE_OVERSIGHT_026397.jpg):
"The information contained in this communication is confidential... It is the property of JEE"

Quote #3 (Source: HOUSE_OVERSIGHT_026397.jpg):
"If you have received this communication in error, please notify us immediately by return e-mail or by e-mail to jeevacation@gmail.com..."

Full Extracted Text

Complete text extracted from the document (2,705 characters)

networks
A machine learning program that can learn how to play an Atari game without any human supervision or hand-crafted engineering (the feat that gave DeepMind 500M from Google) now just takes about 130 lines of Python code.
These models do not have interesting motivational systems, and a relatively simple architecture. They currently seem to mimic some of the stuff that goes on in the first few layers of the cortex. They learn object features, visual styles, lighting and rotation in 3d, and simple action policies. Almost everything else is missing. But there is a lot of enthusiasm that the field might be on the right track, and that we can learn motor simulations and intuitive physics soon. (The majority of the people in AI do not work on this, however. They try to improve the performance for the current benchmarks.)
Noam's criticism of machine translation mostly applies to the Latent Semantic Analysis models that Google and others have been using for many years. These models map linguistic symbols to concepts, and relate concepts to each other, but they do not relate the concepts to "proper" mental representations of what objects and processes look like and how they interact. Concepts are probably one of the top layers of the learning hierarchy, i.e. they are acquired *after* we learn to simulate a mental world, not before. Classical linguists ignored the simulation of a mental world entirely.
It seems miraculous that purely conceptual machine translation works at all, but that is because concepts are shared between speakers, so the structure of the conceptual space can be inferred from the statistics of language use. But the statistics of language use have too little information to infer what objects look like and how they interact.
My own original ideas concern a few parts of the emerging understanding of what the brain does. The "request-confirmation networks" that I have introduced at a NIPS workshop last December are an attempt at modeling how the higher layers might self-organize into cognitive programs.
Cheers!
Joscha
--
please note
The information contained in this communication is confidential, may be attorney-client privileged, may constitute inside information, and is intended only for the use of the addressee. It is the property of JEE. Unauthorized use, disclosure or copying of this communication or any part thereof is strictly prohibited and may be unlawful. If you have received this communication in error, please notify us immediately by return e-mail or by e-mail to jeevacation@gmail.com, and destroy this communication and all copies thereof, including all attachments. copyright - all rights reserved
HOUSE_OVERSIGHT_026397
