AIXI is a mathematical formalism for artificial general intelligence that combines Solomonoff induction with sequential decision theory. It was first proposed by Marcus Hutter in 2000, and the results below are proved in Hutter's 2005 book ''Universal Artificial Intelligence''.

AIXI is a reinforcement learning agent: it maximizes the expected total reward received from the environment. Intuitively, it simultaneously considers every computable hypothesis. At each time step, it examines every possible program and evaluates how much reward that program would generate depending on the next action taken. The promised rewards are then weighted by the agent's subjective belief that the program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line with Occam's razor. AIXI then selects the action with the highest expected total reward under this weighted sum over all programs.

== Definition ==

The AIXI agent interacts sequentially with an environment that is stochastic and unknown to the agent. In step ''t'', the agent outputs an action <math>a_t</math>, and the environment responds with an observation <math>o_t</math> and a reward <math>r_t</math>, distributed according to the conditional probability <math>\mu(o_t r_t \mid a_1 o_1 r_1 \ldots a_{t-1} o_{t-1} r_{t-1} a_t)</math>. This cycle then repeats for step ''t'' + 1. The agent tries to maximize the cumulative future reward over a fixed lifetime ''m''. Given the current time ''t'' and history <math>a_1 o_1 r_1 \ldots a_{t-1} o_{t-1} r_{t-1}</math>, the action AIXI outputs is defined as〔(Universal Artificial Intelligence )〕

:<math>a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \ldots + r_m \right] \sum_{q:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)},</math>

where ''U'' denotes a monotone universal Turing machine, and ''q'' ranges over all programs on the universal machine ''U''. The parameters of AIXI are the universal Turing machine ''U'' and the agent's lifetime ''m''; the dependence on ''m'' can be removed by discounting future rewards.
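The definition above is incomputable, since it mixes over all programs on a universal Turing machine. The following sketch illustrates the core computation on a drastically simplified stand-in: a "program" is just a bit-tuple that emits observation <math>p[t \bmod \mathrm{len}(p)]</math> at step ''t'', the reward is 1 when the action matches the next observation, planning is cut to horizon 1, and program length is bounded. This environment class, the bound <code>MAX_LEN</code>, and all function names are invented for illustration; only the two AIXI ingredients are faithful: programs inconsistent with the observed history are excluded, and the survivors are weighted by <math>2^{-\mathrm{length}(q)}</math>.

```python
# Toy, computable sketch of AIXI's mixture-and-maximize step.
# NOT the real AIXI: the environment class, MAX_LEN, and horizon-1
# planning are simplifying assumptions made for this illustration.
from itertools import product

MAX_LEN = 4  # bound on program length; AIXI sums over unbounded programs


def emits(p, t):
    """Observation the toy 'program' p outputs at time step t."""
    return p[t % len(p)]


def consistent(p, observations):
    """Keep only programs whose outputs reproduce the observed history,
    mirroring the constraint U(q, a_1..a_m) = o_1 r_1 .. o_m r_m."""
    return all(emits(p, t) == o for t, o in enumerate(observations))


def act(observations):
    """Pick the action with the highest 2^-length(p)-weighted expected
    next-step reward (reward 1 iff the action equals the next observation;
    full AIXI instead plans ahead to the horizon m)."""
    t = len(observations)
    score = {0: 0.0, 1: 0.0}
    for length in range(1, MAX_LEN + 1):
        for p in product((0, 1), repeat=length):
            if consistent(p, observations):
                score[emits(p, t)] += 2.0 ** (-length)  # Occam-style weight
    return max(score, key=score.get)


print(act((1, 1)))  # shorter consistent programs mostly predict another 1 -> 1
```

After observing <code>(1, 1)</code>, the shortest consistent program <code>(1,)</code> carries weight 1/2 and predicts another 1, so the length-based prior dominates the mixture and the agent bets on 1, illustrating how Occam's razor drives AIXI's action choice.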