This document appears to be page 313 from a technical book or paper titled 'Measuring Incremental Progress Toward Human-Level AGI' (Artificial General Intelligence). It outlines specific criteria and hypothetical testing scenarios for AI development, focusing on Emotion, Modeling Self and Other, and Social Interaction. The text uses names like Hugo, Cassio, Ben, and Itamar in these scenarios, which likely correspond to real-world AI researchers (e.g., Ben Goertzel). The page bears a 'HOUSE_OVERSIGHT_013229' stamp, indicating it is part of a larger government document collection.
| Name | Role | Context |
|---|---|---|
| Hugo | Character in AI scenario | Used in an example regarding subgoal creation and pleasing a subject. |
| Cassio | Character in AI scenario | Used in examples involving emotion, theory of mind, and other-awareness. (Likely a reference to AI researcher Cassio ...) |
| Ben | Character in AI scenario | Used in examples involving emotion, theory of mind, and interaction. (Likely a reference to AI researcher Ben Goertzel.) |
| Itamar | Character in AI scenario | Used in an example regarding empathy. (Likely a reference to AI researcher Itamar Arel.) |
"Given the goal of pleasing Hugo, can the robot learn that telling Hugo facts it has learned but not told Hugo before, will tend to make Hugo happy?"

"The robot needs to set these experiences aside, and not let them impair its self-model significantly; it needs to keep on thinking it's a good robot"