Empowerment (artificial intelligence)
Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment.[1][2] An agent that follows an empowerment-maximising policy acts to maximise its future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than on an externally imposed goal, and is thus a form of intrinsic motivation.[3]
The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent states and actions are modelled by random variables (<math>S: s \in \mathcal{S}, A: a \in \mathcal{A}</math>) indexed by time (<math>t</math>). The choice of action depends on the current state, and the future state depends on the choice of action; the perception-action loop unrolled in time therefore forms a causal Bayesian network.
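A minimal sketch of such a model (illustrative, not from the article) for a toy discrete world follows; the state and action counts, the array layout, and the name p_next are assumptions introduced for illustration, and the later sketches build on it.

```python
import numpy as np

# A toy perception-action loop (illustrative, not from the article):
# a 1-D world with 5 states and 3 actions (left, stay, right).
# The transition model p(s_{t+1} | s_t, a_t) is all that the
# empowerment computations sketched below require.
N_STATES, N_ACTIONS = 5, 3

p_next = np.zeros((N_STATES, N_ACTIONS, N_STATES))   # p_next[s, a, s'] = p(s' | s, a)
for s in range(N_STATES):
    for a, step in enumerate((-1, 0, +1)):
        s_next = min(max(s + step, 0), N_STATES - 1)  # walls clip the movement
        p_next[s, a, s_next] = 1.0                    # deterministic dynamics
```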
Definition
Empowerment (<math>\mathfrak{E}</math>) is defined as the channel capacity (<math>C</math>) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors.[3]
<math display="block">
\mathfrak{E} := C(A_t \longrightarrow S_{t+1}) \equiv \max_{p(a_t)} I(A_t;S_{t+1})
</math>
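A minimal sketch of how this one-step capacity could be computed for a discrete actuation channel, using the standard Blahut-Arimoto algorithm; the function names (empowerment_1step, mutual_information_bits) and the example channel are illustrative assumptions, not part of the cited formalism.

```python
import numpy as np

def mutual_information_bits(p_a, p_s_given_a):
    """I(A_t; S_{t+1}) in bits for action distribution p_a and channel rows p(s'|a)."""
    p_s = p_a @ p_s_given_a                      # marginal over successor states
    mi = 0.0
    for a, row in enumerate(p_s_given_a):
        for s, p in enumerate(row):
            if p_a[a] > 0 and p > 0:
                mi += p_a[a] * p * np.log2(p / p_s[s])
    return mi

def empowerment_1step(p_s_given_a, iters=200):
    """One-step empowerment: capacity of the channel p(s_{t+1} | a_t), in bits,
    found by the Blahut-Arimoto iteration over the action distribution p(a_t)."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)                # start from a uniform action distribution
    for _ in range(iters):
        p_s = p_a @ p_s_given_a
        # D_KL(p(s'|a) || p(s')) in bits, one value per action
        d = np.array([
            sum(p * np.log2(p / p_s[s]) for s, p in enumerate(row) if p > 0)
            for row in p_s_given_a
        ])
        p_a *= 2.0 ** d                          # reweight actions towards high divergence
        p_a /= p_a.sum()
    return mutual_information_bits(p_a, p_s_given_a)

# Four actions leading deterministically to four distinct states: 2 bits of empowerment.
print(empowerment_1step(np.eye(4)))              # ~2.0
```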
In a discrete time model, empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment.[4]
<math display="block">
\mathfrak{E}(A^n_t \longrightarrow S_{t+n}) = \max_{p(a_t,\ldots,a_{t+n-1})} I(A_t,\ldots,A_{t+n-1};S_{t+n})
</math>
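Under the assumptions of the toy model sketched above, n-step empowerment could be evaluated by enumerating all length-n action sequences, composing the one-step transitions into a compound channel <math>p(s_{t+n} \mid a_t,\ldots,a_{t+n-1})</math>, and reusing the one-step capacity routine. This is a sketch rather than the method of the cited work; the exponential growth in the number of action sequences is the computational cost noted in the Application section.

```python
from itertools import product
import numpy as np

def n_step_channel(p_next, s0, n):
    """Compound channel p(s_{t+n} | a_t, ..., a_{t+n-1}) when starting in state s0.

    p_next[s, a, s'] is the one-step transition model p(s' | s, a);
    the result has one row per length-n action sequence."""
    n_s, n_a, _ = p_next.shape
    rows = []
    for seq in product(range(n_a), repeat=n):    # n_a ** n action sequences
        dist = np.zeros(n_s)
        dist[s0] = 1.0
        for a in seq:                            # push the state distribution through each action
            dist = dist @ p_next[:, a, :]
        rows.append(dist)
    return np.vstack(rows)

# 3-step empowerment of the leftmost state of the toy world sketched earlier:
# print(empowerment_1step(n_step_channel(p_next, s0=0, n=3)))
```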
The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits.
Contextual Empowerment
In general, the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example, to construct an empowerment-maximising policy. State-specific empowerment can be found using the more general formalism for 'contextual empowerment',[4] where <math>C</math> is a random variable describing the context (e.g. the state).
<math display="block">
\mathfrak{E}(A^n_t \longrightarrow S_{t+n}{\mid}C) = \sum_{c{\in}C} p(c) \mathfrak{E}(A^n_t \longrightarrow S_{t+n}{\mid}C=c)
</math>
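Continuing the same illustrative sketches (hypothetical helper names, toy-model assumptions), contextual empowerment is simply the expectation of the per-context empowerment under <math>p(c)</math>:

```python
def contextual_empowerment(p_next, p_c, n):
    """Expectation over contexts (here, states) c of the n-step empowerment in c,
    reusing n_step_channel and empowerment_1step from the sketches above."""
    return sum(
        p_c[c] * empowerment_1step(n_step_channel(p_next, c, n))
        for c in range(len(p_c))
    )

# Example: contexts drawn uniformly over the 5 states of the toy world.
# print(contextual_empowerment(p_next, p_c=[0.2] * 5, n=3))
```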
Application
Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent.[4] Empowerment has been applied in studies of collective behaviour[5] and in continuous domains.[6][7] As is the case with Bayesian methods in general, computing empowerment becomes expensive as the number of actions and the time horizon grow, but approaches to improve efficiency have led to its usage in real-time control.[8] Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games[9] and in the control of underwater vehicles.[10]
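As a rough illustration of the pseudo-utility idea only (the cited studies use their own, more elaborate formulations), an agent built on the sketches above could greedily pick the action whose expected n-step empowerment of the successor state is largest:

```python
def greedy_empowered_action(p_next, s, n=3):
    """Pick the action that maximises the expected n-step empowerment of the
    successor state (building on the earlier sketches; illustrative only)."""
    n_s, n_a, _ = p_next.shape
    emp = [empowerment_1step(n_step_channel(p_next, s2, n)) for s2 in range(n_s)]
    return max(range(n_a),
               key=lambda a: sum(p_next[s, a, s2] * emp[s2] for s2 in range(n_s)))
```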
References
- ↑ klyubin2005a (reference text not given in the source)
- ↑ klyubin2005b (reference text not given in the source)
- ↑ salge2014 (reference text not given in the source)
- ↑ klyubin2008 (reference text not given in the source)
- ↑ capdepuy2007 (reference text not given in the source)
- ↑ jung2011 (reference text not given in the source)
- ↑ salge2013 (reference text not given in the source)
- ↑ karl2017 (reference text not given in the source)
- ↑ rezende2015 (reference text not given in the source)
- ↑ volpi2016 (reference text not given in the source)