HOUSE_OVERSIGHT_013031.jpg

Extraction Summary

People: 4
Organizations: 2
Locations: 0
Events: 0
Relationships: 0
Quotes: 2

Document Information

Type: Scientific paper / academic book excerpt
File Size: 1.92 MB
Summary

This document is page 115 of a technical scientific text describing the 'CogPrime' Artificial Intelligence architecture. It details how PLN (Probabilistic Logic Networks) inference uses declarative, episodic, and procedural knowledge to estimate probabilities within a cognitive schematic. The text uses hypothetical examples involving a 'virtual dog' and characters named Bob, Jim, Jack, and Jill to illustrate the logical implication C ∧ P → G (context and procedure imply goal). The document bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was part of a document production for a Congressional investigation, likely related to Jeffrey Epstein's funding of or interest in AI research.

People (4)

Bob (Hypothetical Example): Used in an AI logic example regarding a 'virtual dog' asking for food.
Jim (Hypothetical Example): Used in an AI logic example regarding feature similarity.
Jack (Hypothetical Example): Used in an AI logic example regarding concept creation (children playing).
Jill (Hypothetical Example): Used in an AI logic example regarding concept creation (children playing).

Organizations (2)

CogPrime: The AI architecture/design discussed in the text.
House Oversight Committee: Implied by the footer stamp 'HOUSE_OVERSIGHT'.

Key Quotes (2)

"PLN inference, acting on declarative knowledge, is used for estimating the probability of the implication in the cognitive schematic, given fixed C, P and G."
Source
HOUSE_OVERSIGHT_013031.jpg
Quote #1
"That is, it requires the sort of cognitive synergy built into the CogPrime design."
Source
HOUSE_OVERSIGHT_013031.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,020 characters)

6.6 Analysis and Synthesis Processes in CogPrime
Where analysis is concerned:
• PLN inference, acting on declarative knowledge, is used for estimating the probability of the implication in the cognitive schematic, given fixed C, P and G. Episodic knowledge is also used in this regard, enabling estimation of the probability through simple similarity matching against past experience. Simulation is also used: multiple simulations may be run, and statistics may be captured from them. (A toy sketch of these estimation strategies follows this list.)
– Example: To estimate the degree to which asking Bob for food (the procedure P is “asking for food”, the context C is “being with Bob”) will achieve the goal G of getting food, the virtual dog may study its memory to see what happened on previous occasions where it or other dogs asked Bob for food or other things, and then integrate the evidence from these occasions.
• Procedural knowledge, mapped into declarative knowledge and then acted on by PLN inference, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P₁ → G is known for some P₁ related to P.
– Example: knowledge of the internal similarity between the procedure of asking for food and the procedure of asking for toys, allows the virtual dog to reason that if asking Bob for toys has been successful, maybe asking Bob for food will be successful too.
• Inference, acting on declarative or sensory knowledge, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C₁ ∧ P → G is known for some C₁ related to C.
– Example: if Bob and Jim have a lot of features in common, and Bob often responds positively when asked for food, then maybe Jim will too.
• Inference can be used similarly for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P → G₁ is known for some G₁ related to G. Concept creation can be useful indirectly in calculating these probability estimates, via providing new concepts that can be used to make useful inference trails more compact and hence easier to construct.
– Example: The dog may reason that because Jack likes to play, and Jack and Jill are both children, maybe Jill likes to play too. It can carry out this reasoning only if its concept creation process has invented the concept of “child” via analysis of observed data.
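A minimal Python sketch of the estimation strategies in the list above; every name in it (Episode, similarity, estimate_implication) is a hypothetical stand-in, not CogPrime's actual code. A similarity-weighted match against episodic memory estimates the implication's probability, and a nonzero similarity between related procedures ("asking for food" vs. "asking for toys") gives the analogical transfer described in the second bullet.

from dataclasses import dataclass

@dataclass
class Episode:
    # One remembered occasion: a context, a procedure, and whether the goal was met.
    context: str
    procedure: str
    goal_achieved: bool

def similarity(a: str, b: str) -> float:
    # Toy similarity: 1.0 for identical terms, 0.5 for the one related pair we
    # hard-code, else 0.0. A real system would compare structured representations.
    if a == b:
        return 1.0
    related = {frozenset(("ask for food", "ask for toys"))}
    return 0.5 if frozenset((a, b)) in related else 0.0

def estimate_implication(memory, context, procedure):
    # Similarity-weighted success rate over past episodes: an estimate of the
    # probability of C ∧ P → G, per the episodic-knowledge bullet above.
    weights = successes = 0.0
    for ep in memory:
        w = similarity(ep.context, context) * similarity(ep.procedure, procedure)
        weights += w
        successes += w * ep.goal_achieved
    return successes / weights if weights > 0 else 0.5  # vague prior when no evidence

# The virtual dog has asked Bob for toys before, but never for food:
memory = [
    Episode("with Bob", "ask for toys", True),
    Episode("with Bob", "ask for toys", True),
    Episode("with Bob", "ask for toys", False),
]
# Analogical transfer: toy-asking episodes count as partial evidence for food-asking.
print(estimate_implication(memory, "with Bob", "ask for food"))  # ~0.67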
In these examples we have focused on cases where two terms in the cognitive schematic are fixed and the third must be filled in; but just as often, the situation is that only one of the terms is fixed. For instance, if we fix G, sometimes the best approach will be to collectively learn C and P. This requires either a procedure learning method that works interactively with a declarative-knowledge-focused concept learning or reasoning method; or a declarative learning method that works interactively with a procedure learning method. That is, it requires the sort of cognitive synergy built into the CogPrime design.
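A minimal sketch of the single-fixed-term case this paragraph describes: with G held fixed, candidate (C, P) pairs are searched jointly and scored with the estimate_implication function and memory from the previous sketch. The exhaustive search over hand-listed candidates is a hypothetical stand-in for the interactive procedure-learning and declarative-learning loop the text calls for.

def learn_context_and_procedure(memory, contexts, procedures, threshold=0.6):
    # With the goal G held fixed, score every candidate (C, P) pair with the
    # episodic estimator and keep those that clear the threshold. A real system
    # would generate these candidates via procedure learning and concept
    # creation working interactively, rather than enumerating them by hand.
    scored = [(estimate_implication(memory, c, p), c, p)
              for c in contexts for p in procedures]
    return [t for t in sorted(scored, reverse=True) if t[0] >= threshold]

# G is fixed as "get food"; search jointly over contexts and procedures:
for score, c, p in learn_context_and_procedure(
        memory,
        contexts=["with Bob", "with Jim"],
        procedures=["ask for food", "ask for toys"]):
    print(f"P({c} and {p} -> get food) ~ {score:.2f}")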
HOUSE_OVERSIGHT_013031
