HOUSE_OVERSIGHT_013072.jpg (2.35 MB)

Extraction Summary

People: 1
Organizations: 3
Locations: 1
Events: 0
Relationships: 0
Quotes: 2

Document Information

Type: Scientific/academic publication (evidence exhibit)
File Size: 2.35 MB
Summary

This document is page 156 of a technical academic text on artificial intelligence, specifically 'Cognitive Synergy' and PLN (Probabilistic Logic Networks). The text details logical inference processes, backward chaining, and the interaction between AI components such as MOSES and CogPrime, using a hypothetical example of a robot in a preschool. The document bears a 'HOUSE_OVERSIGHT' Bates stamp, indicating it was gathered as evidence in a congressional investigation, likely related to Epstein's funding of scientific research or AI projects.

People (1)

Name: Bob
Role: Hypothetical subject
Context: Used as a hypothetical example of a 'new playmate' in a robotic logic scenario.

Organizations (3)

Name: House Oversight Committee
Context: The document bears the 'HOUSE_OVERSIGHT' Bates stamp.

Name: CogPrime
Context: Mentioned as the system containing the simulation engine and cognitive processes.

Name: MOSES (Meta-Optimizing Semantic Evolutionary Search)
Context: Mentioned as a cognitive process invoked for procedure learning.

Locations (1)

Location: Preschool
Context: Hypothetical setting for the robot example.

Key Quotes (2)

"PLN seeks to achieve efficient inference control via integration with other cognitive processes."
Source
HOUSE_OVERSIGHT_013072.jpg
Quote #1
"The combinatorial explosion of inference control is combatted by the capability to defer to other cognitive processes"
Source
HOUSE_OVERSIGHT_013072.jpg
Quote #2

Full Extracted Text

Complete text extracted from the document (3,809 characters)

[Page 156, Chapter 8: Cognitive Synergy]
here we will eschew most details and focus mainly on pointing out how PLN seeks to achieve efficient inference control via integration with other cognitive processes.
As a logic, PLN is broadly integrative: it combines certain term logic rules with more standard predicate logic rules, and utilizes both fuzzy truth values and a variant of imprecise probabilities called indefinite probabilities. PLN mathematics tells how these uncertain truth values propagate through its logic rules, so that uncertain premises give rise to conclusions with reasonably accurately estimated uncertainty values. This careful management of uncertainty is critical for the application of logical inference in the robotics context, where most knowledge is abstracted from experience and is hence highly uncertain.
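
As a concrete illustration of how such uncertain truth values might propagate, the sketch below applies the standard PLN deduction strength formula to simple (strength, confidence) truth values. The TruthValue class and the confidence-discounting heuristic are assumptions made here for illustration; PLN's actual indefinite probabilities carry more structure than a single confidence number.

from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # estimated probability, in [0, 1]
    confidence: float  # weight of evidence behind the estimate, in [0, 1]

def deduce(ab: TruthValue, bc: TruthValue,
           b: TruthValue, c: TruthValue) -> TruthValue:
    """Propagate uncertainty through the deduction A -> B, B -> C |- A -> C,
    using the standard PLN deduction strength formula."""
    if b.strength >= 1.0:
        s_ac = c.strength  # degenerate case: avoid division by zero
    else:
        s_ac = (ab.strength * bc.strength
                + (1.0 - ab.strength)
                * (c.strength - b.strength * bc.strength)
                / (1.0 - b.strength))
    # Assumed heuristic: the conclusion is no more confident than its
    # weakest premise, discounted for the indirection of the inference.
    c_ac = 0.9 * min(ab.confidence, bc.confidence)
    return TruthValue(min(1.0, max(0.0, s_ac)), c_ac)

For instance, with premises A -> B at (0.8, 0.9) and B -> C at (0.7, 0.8), and term strengths 0.4 for B and 0.5 for C, this yields A -> C at roughly (0.63, 0.72): an uncertain conclusion whose uncertainty is itself estimated from the premises, as the paragraph above describes.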
PLN can be used in either forward or backward chaining mode; in the language introduced above, it can be used for either analysis or synthesis. As an example, we will consider backward-chaining analysis, exemplified by the problem of a robot preschool student trying to determine whether a new playmate "Bob" is likely to be a regular visitor to its preschool (evaluating the truth value of the implication Bob -> regular_visitor). The basic backward chaining process for PLN analysis looks like this (a code sketch of the loop follows the list):
1. Given an implication L = A -> B whose truth value must be estimated (for instance L = C&P -> G as discussed above), create a list (A1,..., An) of (inference rule, stored knowledge) pairs that might be used to produce L
2. Using analogical reasoning to prior inferences, assign each Ai a probability of success
• If some of the Ai are estimated to have reasonable probability of success at generating reasonably confident estimates of L's truth value, then invoke Step 1 with Ai in place of L (at this point the inference process becomes recursive)
• If none of the Ai looks sufficiently likely to succeed, then inference has "gotten stuck" and another cognitive process should be invoked, e.g.
– Concept creation may be used to infer new concepts related to A and B, and then Step 1 may be revisited, in the hope of finding a new, more promising Ai involving one of the new concepts
– MOSES may be invoked with one of several special goals, e.g. the goal of finding a procedure P so that P(X) predicts whether X -> B. If MOSES finds such a procedure P then this can be converted to declarative knowledge understandable by PLN and Step 1 may be revisited....
– Simulations may be run in CogPrime's internal simulation engine, so as to observe the truth value of A -> B in the simulations; and then Step 1 may be revisited....
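
Read as a control loop, steps 1 and 2 above, together with the fallback options, can be sketched roughly as follows. This is a minimal sketch of the recursive structure only, assuming a simple dictionary knowledge base; the estimate, apply_rule, and fallbacks arguments are hypothetical stand-ins for the analogy-based scoring, the inference rules, and the cognitive processes (concept creation, MOSES, simulation) named in the text, not real CogPrime APIs.

PROMISING = 0.5  # assumed cutoff for a "reasonable probability of success"

def backward_chain(target, knowledge, estimate, apply_rule,
                   fallbacks, depth=0, max_depth=8):
    """Estimate the truth value of an implication `target` (e.g. A -> B),
    or return None if no sufficiently promising inference path is found."""
    if depth > max_depth:
        return None
    # Step 1: list the (inference rule, stored knowledge) pairs that
    # might be used to produce the target implication.
    candidates = [(estimate(rule, premise), rule, premise)
                  for rule, premise in knowledge.get(target, [])]
    # Step 2: recurse into candidates judged likely enough to succeed,
    # most promising first.
    for score, rule, premise in sorted(candidates, key=lambda t: t[0],
                                       reverse=True):
        if score < PROMISING:
            break  # remaining candidates score even lower
        tv = backward_chain(premise, knowledge, estimate, apply_rule,
                            fallbacks, depth + 1, max_depth)
        if tv is not None:
            return apply_rule(rule, tv)
    # Inference has "gotten stuck": defer to other cognitive processes.
    # Each may add knowledge; if one does, revisit Step 1 with the
    # enriched knowledge base.
    for process in fallbacks:
        if process(target, knowledge):
            return backward_chain(target, knowledge, estimate, apply_rule,
                                  fallbacks, depth + 1, max_depth)
    return None

The design point the passage emphasizes is the final loop: rather than letting the search explode combinatorially, a stuck chainer hands the problem to a different cognitive process, which may contribute knowledge that makes Step 1 worth revisiting.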
The combinatorial explosion of inference control is combatted by the capability to defer to other cognitive processes when the inference control procedure is unable to make a sufficiently confident choice of which inference steps to take next. Note that just as MOSES may rely on PLN to model its evolving populations of procedures, PLN may rely on MOSES to create complex knowledge about the terms in its logical implications. This is just one example of the multiple ways in which the different cognitive processes in CogPrime interact synergetically; a more thorough treatment of these interactions is given in Chapter 49.
In the "new playmate" example, the interesting case is where the robot initially seems not to know enough about Bob to make a solid inferential judgment (so that none of the Ai seem particularly promising). For instance, it might carry out a number of possible inferences and not come to any reasonably confident conclusion, so that the reason none of the Ai seem promising is that all the decent-looking ones have tried already. So it might then recourse to MOSES, simulation or concept creation.
HOUSE_OVERSIGHT_013072
