- Example: A virtual dog wants to achieve the goal G of getting food, and it knows that the procedure P of begging has been successful at this before, so it seeks a context C where begging can be expected to get it food. Probably this will be a context involving a friendly person.
• PLN-based goal refinement is used to create new subgoals G to sit on the right-hand side of instances of the cognitive schematic.
- Example: Given that a virtual dog has a goal of finding food, it may learn a subgoal of following other dogs, due to observing that other dogs are often heading toward their food.
• Concept formation heuristics are used for choosing G and for fueling goal refinement, but especially for choosing C (via providing new candidates for C). They are also used for choosing P, via a process called "predicate schematization" that turns logical predicates (declarative knowledge) into procedures. (A minimal sketch of these synthesis operations follows this list.)
- Example: At first a virtual dog may have a hard time predicting which other dogs are going to be mean to it. But it may eventually observe common features among a number of mean dogs, and thus form its own concept of "pit bull," without anyone ever teaching it this concept explicitly.
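The following is a minimal Python sketch of the synthesis operations just described: a record standing for the cognitive schematic Context ∧ Procedure → Goal, a goal-refinement step that substitutes a subgoal, and a concept-formation step that extracts the features shared by most observed instances (the "pit bull" case). All names (CognitiveSchematic, refine_goal, form_concept) and all feature sets and numbers are illustrative assumptions, not part of any actual CogPrime/OpenCog API.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Set, Tuple

@dataclass
class CognitiveSchematic:
    """Context & Procedure ==> Goal, with an estimated success probability."""
    context: str
    procedure: str
    goal: str
    probability: float = 0.5   # prior estimate, to be revised by analysis

def refine_goal(schematic: CognitiveSchematic,
                implication: Tuple[str, str, float]) -> Optional[CognitiveSchematic]:
    """Goal refinement: if 'subgoal -> goal' is believed with probability p,
    propose a new schematic whose goal slot holds the subgoal
    (e.g. 'follow other dogs' as a subgoal of 'find food')."""
    subgoal, goal, p = implication
    if goal != schematic.goal:
        return None
    return CognitiveSchematic(schematic.context, schematic.procedure, subgoal,
                              probability=schematic.probability * p)

def form_concept(instances: List[Set[str]], min_support: float = 0.8) -> Set[str]:
    """Concept formation: collect the features shared by most observed instances;
    the shared feature set defines a new concept the agent can use as a context."""
    counts: Dict[str, int] = {}
    for features in instances:
        for f in features:
            counts[f] = counts.get(f, 0) + 1
    n = len(instances)
    return {f for f, c in counts.items() if c / n >= min_support}

# The 'pit bull' example, with made-up feature sets for dogs that were mean:
mean_dogs = [{"stocky", "short-haired", "growls"},
             {"stocky", "short-haired", "barks"},
             {"stocky", "short-haired", "growls"}]
print(form_concept(mean_dogs))   # e.g. {'stocky', 'short-haired'}

# The 'follow other dogs' example of goal refinement:
find_food = CognitiveSchematic("anywhere", "wander", "find food", 0.3)
print(refine_goal(find_food, ("follow other dogs", "find food", 0.7)))
```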
Where analysis is concerned:
• PLN inference, acting on declarative knowledge, is used for estimating the probability of the implication in the cognitive schematic, given fixed C, P and G. Episodic knowledge is also used in this regard, enabling estimation of the probability by simple similarity matching against past experience. Simulation is also used: multiple simulations may be run, and statistics gathered from the results. (A toy numerical sketch of such estimates follows this list.)
- Example: To estimate the degree to which asking Bob for food (the procedure P is "asking for food", the context C is "being with Bob") will achieve the goal G of getting food, the virtual dog may study its memory to see what happened on previous occasions where it or other dogs asked Bob for food or other things, and then integrate the evidence from these occasions.
• Procedural knowledge, mapped into declarative knowledge and then acted on by PLN inference, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P₁ → G is known for some P₁ related to P.
- Example: Knowledge of the internal similarity between the procedure of asking for food and the procedure of asking for toys allows the virtual dog to reason that if asking Bob for toys has been successful, maybe asking Bob for food will be successful too.
• Inference, acting on declarative or sensory knowledge, can be useful for estimating the probability of the implication C ∧ P → G, in cases where the probability of C₁ ∧ P → G is known for some C₁ related to C.
- Example: If Bob and Jim have a lot of features in common, and Bob often responds positively when asked for food, then maybe Jim will too.
• Inference can be used similarly for estimating the probability of the implication C ∧ P → G, in cases where the probability of C ∧ P → G₁ is known for some G₁ related to G.
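As a rough illustration of the analysis bullets above, the sketch below estimates the probability of C ∧ P → G by combining direct evidence with evidence transferred from related contexts, procedures and goals, each discounted by a similarity weight. This is a toy similarity-weighted average, not PLN's actual truth-value formulas; the known and similarity tables and every number in them are invented for illustration.

```python
from typing import Dict, Tuple

# Known success probabilities for (context, procedure, goal) triples,
# e.g. distilled from episodic memory.  All values are invented.
known: Dict[Tuple[str, str, str], float] = {
    ("with Bob", "ask for food", "get food"): 0.7,
    ("with Bob", "ask for toys", "get toys"): 0.8,
}

# Pairwise similarities between contexts, procedures or goals (assumed given,
# e.g. from feature overlap as in the concept-formation sketch above).
similarity: Dict[Tuple[str, str], float] = {
    ("with Jim", "with Bob"): 0.5,
    ("ask for food", "ask for toys"): 0.6,
    ("get food", "get toys"): 0.4,
}

def sim(a: str, b: str) -> float:
    if a == b:
        return 1.0
    return similarity.get((a, b), similarity.get((b, a), 0.0))

def estimate(context: str, procedure: str, goal: str) -> float:
    """Estimate P(context & procedure -> goal) as a similarity-weighted average
    over the known triples: each known triple contributes its probability,
    discounted by how closely its C, P and G match the query."""
    weighted = total = 0.0
    for (c1, p1, g1), prob in known.items():
        w = sim(context, c1) * sim(procedure, p1) * sim(goal, g1)
        weighted += w * prob
        total += w
    return weighted / total if total > 0 else 0.5   # uninformative fallback prior

# "If Bob often responds positively when asked for food, then maybe Jim will too":
print(round(estimate("with Jim", "ask for food", "get food"), 3))
```

The product of the three similarity factors plays the role of the analogical steps in the bullets above: evidence about asking Bob for toys still contributes to the estimate for asking Jim for food, but only in proportion to how similar the procedures, contexts and goals are.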