HOUSE_OVERSIGHT_013054.jpg

2.03 MB

Extraction Summary

People: 2
Organizations: 0
Locations: 0
Events: 0
Relationships: 0
Quotes: 3

Document Information

Type: Academic/scientific paper page
Summary

This document page discusses a formal definition for "Pragmatic General Intelligence" within the context of intelligent agents. It introduces mathematical definitions incorporating environments, goals, and distributions to quantify an agent's expected performance. The text also compares this concept to universal intelligence and references the algorithmic agent AIXI.

People (2)

Name     Role   Context
Legg     -      -
Hutter   -      -

Key Quotes (3)

Quote #1: "intelligence as 'the ability to achieve complex goals in complex environments'"
Source: HOUSE_OVERSIGHT_013054.jpg

Quote #2: "The pragmatic general intelligence of an agent π... is its expected performance with respect to goals drawn from γ in environments drawn from ν"
Source: HOUSE_OVERSIGHT_013054.jpg

Quote #3: "If ν is taken to be the universal distribution, and γ is defined to weight goals according to the universal distribution, then pragmatic general intelligence reduces to universal intelligence."
Source: HOUSE_OVERSIGHT_013054.jpg

Full Extracted Text

Complete text extracted from the document (3,418 characters)

138 7 A Formal Model of Intelligent Agents
7.3.3 Pragmatic General Intelligence
The above concept of biased universal intelligence is perfectly adequate for many purposes, but
it is also interesting to explicitly introduce the notion of a goal into the calculation. This allows
us to formally capture the notion presented in [Goe93a] of intelligence as “the ability to achieve
complex goals in complex environments.”
If the agent is acting in environment μ, and is provided with gs corresponding to g at the
start and the end of the time-interval T = {s, ..., t}, then the expected goal-achievement
of the agent, relative to g, during the interval is the expectation

    V_{μ,g,T}^π ≡ E( Σ_{i=s}^{t} r_g(I_{g,s,i}) ),

where the expectation is taken over all interaction sequences I_{g,s,i} drawn according to μ. We
then propose
Definition 5 The pragmatic general intelligence of an agent π, relative to the distribution
ν over environments and the distribution γ over goals, is its expected performance with respect
to goals drawn from γ in environments drawn from ν, over the time-scales natural to the goals;
that is,

    Π(π) ≡ Σ_{μ∈E, g∈G, T} ν(μ) γ(g, μ) V_{μ,g,T}^π

(in those cases where this sum is convergent).
This definition formally captures the notion that “intelligence is achieving complex goals in
complex environments,” where “complexity” is gauged by the assumed measures ν and γ.
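As a rough numerical illustration of the two quantities defined above, here is a toy Python sketch over a finite set of environments and goals. The environments, agent, and weights are invented stand-ins for the abstract measures ν and γ, which the text defines over all computable environments.

```python
# Toy sketch of expected goal-achievement V and pragmatic general
# intelligence Pi for finite, deterministic environments. Everything
# concrete here is invented for illustration.

def goal_achievement(agent, env, goal, horizon):
    """V^pi_{mu,g,T}: summed goal-reward r_g over the interaction
    sequence produced by `agent` in `env` across `horizon` steps."""
    history, total = [], 0.0
    for _ in range(horizon):
        action = agent(history)
        reward = env(goal, action)          # stands in for r_g(I_{g,s,i})
        history.append((action, reward))
        total += reward
    return total

def pragmatic_general_intelligence(agent, envs, nu, goals, gamma, horizon):
    """Pi(pi) = sum over mu, g of nu(mu) * gamma(g, mu) * V^pi_{mu,g,T}."""
    return sum(
        nu[name] * gamma[(g, name)] * goal_achievement(agent, env, g, horizon)
        for name, env in envs.items()
        for g in goals
    )

# Two toy deterministic environments: one rewards matching the goal,
# the other rewards avoiding it.
def mu_match(goal, action):
    return 1.0 if action == goal else 0.0

def mu_avoid(goal, action):
    return 1.0 if action != goal else 0.0

envs = {"match": mu_match, "avoid": mu_avoid}
nu = {"match": 0.7, "avoid": 0.3}                    # weight over environments
goals = ["a", "b"]
gamma = {(g, m): 0.5 for g in goals for m in envs}   # uniform weight over goals

def always_a(history):                               # a fixed, history-blind agent
    return "a"

score = pragmatic_general_intelligence(always_a, envs, nu, goals, gamma, horizon=3)
print(score)  # 1.5 under these toy weights
```

Note how the same fixed agent earns reward in both environments (on different goals), so its score aggregates performance across the whole weighted family rather than in any single environment.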
If ν is taken to be the universal distribution, and γ is defined to weight goals according to
the universal distribution, then pragmatic general intelligence reduces to universal intelligence.
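The reduction asserted here can be made concrete. Legg and Hutter's universal intelligence of an agent π is usually stated as

    Υ(π) ≡ Σ_{μ∈E} 2^{-K(μ)} V_μ^π

where K(μ) is the Kolmogorov complexity of the environment μ. Taking ν(μ) = 2^{-K(μ)} in Definition 5, and weighting goals by the same universal prior, collapses the double sum into a weighted sum of this form.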
Furthermore, it is clear that a universal algorithmic agent like AIXI [Hut05] would also
have a high pragmatic general intelligence, under fairly broad conditions. As the interaction
history grows longer, the pragmatic general intelligence of AIXI would approach the theoretical
maximum, as AIXI would implicitly infer the relevant distributions via experience. However,
if significant reward discounting is involved, so that near-term rewards are weighted much
higher than long-term rewards, then AIXI might compare very unfavorably in pragmatic general
intelligence to other agents designed with prior knowledge of ν, γ, and τ in mind.
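The discounting point can be illustrated with a toy comparison. The reward sequences and discount factors below are invented for illustration: under heavy discounting an agent whose reward arrives early outscores one whose larger reward arrives late, and the ranking flips when discounting is mild.

```python
# Illustrative-only: how the discount factor decides which agent
# "compares favorably", independent of total undiscounted reward.

def discounted_return(rewards, discount):
    """Sum of rewards weighted geometrically by time step."""
    return sum(r * discount**i for i, r in enumerate(rewards))

greedy  = [1.0, 1.0, 0.0, 0.0, 0.0]   # reward up front, 2.0 total
patient = [0.0, 0.0, 0.0, 2.0, 2.0]   # reward late, 4.0 total

mild, heavy = 0.99, 0.30

print(discounted_return(greedy, heavy) > discounted_return(patient, heavy))  # True
print(discounted_return(patient, mild) > discounted_return(greedy, mild))    # True
```

The same mechanism applies to pragmatic general intelligence: weighting near-term reward heavily favors agents built with prior knowledge of the relevant distributions over ones that must first infer them from experience.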
The most interesting case to consider is where ν and γ are taken to embody some particular
bias in a real-world space of environments and goals, and this bias is appropriately reflected
in the internal structure of an intelligent agent. Note that an agent need not lack universal
intelligence in order to possess pragmatic general intelligence with respect to some non-universal
distribution over goals and environments. However, in general, given limited resources, there
may be a tradeoff between universal intelligence and pragmatic intelligence, which leads to the
next point: how to incorporate resource limitations into the definition.
One might argue that the definition of Pragmatic General Intelligence is already encompassed
by Legg and Hutter’s definition because one may bias the distribution of environments within
the latter by considering different Turing machines underlying the Kolmogorov complexity.
However this is not a general equivalence because the Solomonoff-Levin measure intrinsically
[Formula 1 Transcription: V_{μ,g,T}^π ≡ E( Σ_{i=s}^{t} r_g(I_{g,s,i}) ) ]
[Formula 2 Transcription: Π(π) ≡ Σ_{μ∈E, g∈G, T} ν(μ) γ(g, μ) V_{μ,g,T}^π ]
