HOUSE_OVERSIGHT_013068.jpg


Extraction Summary

People: 2
Organizations: 1
Locations: 0
Events: 0
Relationships: 1
Quotes: 2

Document Information

Type: Scientific/academic book page (evidence document)
File Size: 1.99 MB
Summary

This document is page 152 of a technical book, from a chapter headed 'Cognitive Synergy' (the running header reads '152 8 Cognitive Synergy', suggesting Chapter 8). The text discusses artificial intelligence concepts, specifically PLN (Probabilistic Logic Networks) and the 'cognitive schematic' Context ∧ Procedure → Goal, using hypothetical examples of a virtual dog interacting with humans named Bob and Jim. The page bears the Bates stamp 'HOUSE_OVERSIGHT_013068', indicating it was part of a document production for a congressional investigation, likely one concerning Jeffrey Epstein's funding of, or interest in, science and AI research.

People (2)

Bob (hypothetical example subject): Used in an AI logic example regarding a virtual dog asking for food or toys.
Jim (hypothetical example subject): Used in an AI logic example to demonstrate inference based on similarity to Bob.

Organizations (1)

House Oversight Committee (congressional committee): Implied by the Bates stamp 'HOUSE_OVERSIGHT_013068' at the bottom of the page.

Relationships (1)

Bob and Jim (hypothetical similarity): Used in a logic example: 'if Bob and Jim have a lot of features in common...'

Key Quotes (2)

"PLN-based goal refinement is used to create new subgoals G to sit on the right hand side of instances of the cognitive schematic."
Source
HOUSE_OVERSIGHT_013068.jpg
Quote #1
"Example: if Bob and Jim have a lot of features in common, and Bob often responds positively when asked for food, then maybe Jim will too."
Source
HOUSE_OVERSIGHT_013068.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,101 characters)

152 8 Cognitive Synergy
- Example: A virtual dog wants to achieve the goal G of getting food, and it knows that the procedure P of begging has been successful at this before, so it seeks a context C where begging can be expected to get it food. Probably this will be a context involving a friendly person.
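For readers skimming the extract, the "cognitive schematic" discussed throughout this page is the implication Context ∧ Procedure → Goal. A minimal Python sketch of it as a data structure follows; the class and field names are illustrative assumptions, not OpenCog's actual API, and the probability is invented.

from dataclasses import dataclass

@dataclass
class CognitiveSchematic:
    context: str        # C: the situation, e.g. "with a friendly person"
    procedure: str      # P: the action, e.g. "beg"
    goal: str           # G: the desired outcome, e.g. "get food"
    probability: float  # estimated chance that P in C achieves G

# The virtual dog's learned schematic from the example above:
begging = CognitiveSchematic(
    context="with a friendly person",
    procedure="beg",
    goal="get food",
    probability=0.7,  # hypothetical estimate
)

def seek_context(schematics, goal):
    """Given a goal G, find contexts C where some known procedure P
    is expected to achieve G (the search described in the example)."""
    return [(s.context, s.procedure, s.probability)
            for s in schematics if s.goal == goal]

print(seek_context([begging], "get food"))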
• PLN-based goal refinement is used to create new subgoals G to sit on the right hand side of instances of the cognitive schematic.
- Example: Given that a virtual dog has a goal of finding food, it may learn a subgoal of following other dogs, due to observing that other dogs are often heading toward their food.
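A minimal sketch of that goal-refinement step, assuming subgoals are adopted when an observed implication strength clears a threshold; the real system uses PLN inference, and the numbers below are hypothetical.

# Hypothetical observed implication strengths: chance of G given G'.
implications = {
    ("near other dogs' food", "get food"): 0.6,
    ("follow other dogs", "near other dogs' food"): 0.8,
}

def refine(goal, threshold=0.5):
    """Return candidate subgoals G' whose achievement makes the
    parent goal likely, i.e. P(goal | G') >= threshold."""
    return [g_sub for (g_sub, g), p in implications.items()
            if g == goal and p >= threshold]

# The dog refines "get food" into "near other dogs' food",
# and that in turn into "follow other dogs":
print(refine("get food"))
print(refine("near other dogs' food"))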
• Concept formation heuristics are used for choosing G and for fueling goal refinement, but especially for choosing C (via providing new candidates for C). They are also used for choosing P, via a process called "predicate schematization" that turns logical predicates (declarative knowledge) into procedures.
- Example: At first a virtual dog may have a hard time predicting which other dogs are going to be mean to it. But it may eventually observe common features among a number of mean dogs, and thus form its own concept of "pit bull," without anyone ever teaching it this concept explicitly.
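The concept-formation example can be sketched as simple feature clustering; the features and support threshold below are invented for illustration, and the book's heuristics are richer than this.

# Remembered feature sets of dogs that were mean to the virtual dog.
mean_dogs = [
    {"stocky", "short-haired", "broad-jawed", "barks-low"},
    {"stocky", "short-haired", "broad-jawed"},
    {"stocky", "broad-jawed", "brindle"},
]

def form_concept(instances, min_support=0.66):
    """Features shared by most observed instances become a new
    concept, playing the role of the self-invented 'pit bull'."""
    all_feats = set().union(*instances)
    n = len(instances)
    return {f for f in all_feats
            if sum(f in inst for inst in instances) / n >= min_support}

print(form_concept(mean_dogs))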
Where analysis is concerned:
• PLN inference, acting on declarative knowledge, is used for estimating the probability of the implication in the cognitive schematic, given fixed C, P and G. Episodic knowledge is also used in this regard, by enabling estimation of the probability via simple similarity matching against past experience. Simulation is also used: multiple simulations may be run, and statistics may be captured therefrom.
- Example: To estimate the degree to which asking Bob for food (the procedure P is "asking for food", the context C is "being with Bob") will achieve the goal G of getting food, the virtual dog may study its memory to see what happened on previous occasions where it or other dogs asked Bob for food or other things, and then integrate the evidence from these occasions.
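A minimal sketch of that similarity-matching estimate, assuming a crude word-overlap similarity and a handful of invented episodes; it stands in for the PLN and episodic machinery the text describes.

episodes = [
    # (context, procedure, goal_achieved)
    ("with Bob", "ask for food", True),
    ("with Bob", "ask for food", True),
    ("with Bob", "ask for toy", True),
    ("with Bob", "ask for food", False),
]

def similarity(a, b):
    """Crude word-overlap (Jaccard) similarity between descriptions."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def estimate(context, procedure, episodes):
    """Weight each remembered episode by how closely its (C, P)
    matches, then return the weighted success rate."""
    weights = [similarity(context, c) * similarity(procedure, p)
               for c, p, _ in episodes]
    if sum(weights) == 0:
        return None
    return sum(w * ok for w, (_, _, ok) in zip(weights, episodes)) / sum(weights)

print(estimate("with Bob", "ask for food", episodes))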
• Procedural knowledge, mapped into declarative knowledge and then acted on by PLN inference, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P₁ → G is known for some P₁ related to P.
- Example: knowledge of the internal similarity between the procedure of asking for food and the procedure of asking for toys, allows the virtual dog to reason that if asking Bob for toys has been successful, maybe asking Bob for food will be successful too.
• Inference, acting on declarative or sensory knowledge, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C₁ ∧ P → G is known for some C₁ related to C.
- Example: if Bob and Jim have a lot of features in common, and Bob often responds positively when asked for food, then maybe Jim will too.
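The last two bullets (and the truncated one that follows) describe the same analogical move: transfer a known implication probability across a similar procedure P₁, context C₁, or goal G₁. A minimal sketch, with an invented similarity-discount rule in place of the book's PLN formula:

known = {
    # (context, procedure, goal): estimated implication probability
    ("with Bob", "ask for food", "get food"): 0.8,
}

# Hypothetical similarity judgments between contexts and procedures.
similar = {
    ("with Jim", "with Bob"): 0.9,       # Jim and Bob share many features
    ("ask for food", "ask for toy"): 0.7,
}

def transfer(context, procedure, goal):
    """Estimate the probability of C ∧ P → G from the nearest known
    schematic, discounting by the C- and P-similarities."""
    best = 0.0
    for (c1, p1, g1), prob in known.items():
        if g1 != goal:
            continue
        cs = 1.0 if c1 == context else similar.get((context, c1), 0.0)
        ps = 1.0 if p1 == procedure else similar.get((procedure, p1), 0.0)
        best = max(best, prob * cs * ps)
    return best

# Bob responds well to food requests, and Jim resembles Bob:
print(transfer("with Jim", "ask for food", "get food"))  # ~0.72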
• Inference can be used similarly for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P → G₁ is known for some G₁ related to G. Concept
HOUSE_OVERSIGHT_013068
