Self-information : Wikipedia English edition
Self-information
In information theory, self-information or surprisal is a measure of the information content associated with an event in a probability space or with the value of a discrete random variable. It is expressed in a unit of information, for example bits, nats, or hartleys, depending on the base of the logarithm used in its calculation.
The term self-information is also sometimes used as a synonym of the related information-theoretic concept of entropy. These two meanings are not equivalent, and this article covers the first sense only.
==Definition==
By definition, the amount of self-information contained in a probabilistic event depends only on the probability of that event: the smaller the probability, the greater the self-information associated with learning that the event has indeed occurred.
Further, by definition, the measure of self-information is nonnegative and additive. If an event C is the intersection of two independent events A and B, then the amount of information conveyed by the announcement that C has happened equals the sum of the amounts of information conveyed by the announcements of event A and event B respectively: \operatorname I(A \cap B)= \operatorname I(A) + \operatorname I(B).
Taking into account these properties, the self-information \operatorname I(\omega_n) associated with outcome \omega_n with probability \operatorname P(\omega_n) is:
:\operatorname I(\omega_n) = \log \left(\frac{1}{\operatorname P(\omega_n)} \right) = - \log(\operatorname P(\omega_n))
This definition complies with the above conditions. The base of the logarithm is left unspecified: with base 2, the unit of \displaystyle I(\omega_n) is the bit; with base \displaystyle e, the unit is the nat; with base 10, the unit is the hartley.
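The dependence of the unit on the logarithm base can be checked numerically. A minimal sketch (the function name `self_information` is an illustrative choice, not from the article):

```python
import math

def self_information(p, base=2):
    """Self-information -log_base(p) of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log(p, base)

p = 0.25
bits = self_information(p, base=2)        # 2.0 bits
nats = self_information(p, base=math.e)   # ~1.386 nats (= ln 4)
hartleys = self_information(p, base=10)   # ~0.602 hartleys (= log10 4)
print(bits, nats, hartleys)
```

The three values describe the same quantity of information; only the unit changes with the base, just as the same length can be stated in different units.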
As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a fair coin would be 4 bits (probability 1/16), and the information content associated with getting a result other than the one specified would be about 0.09 bits (probability 15/16).

This measure has also been called surprisal, as it represents the "surprise" of seeing the outcome (a highly improbable outcome is very surprising). This term was coined by Myron Tribus in his 1961 book ''Thermostatics and Thermodynamics''.
The information entropy of a random event is the expected value of its self-information.
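That relationship (entropy as the expected self-information) can be sketched in a few lines; the distributions and function names here are illustrative assumptions:

```python
import math

def self_information_bits(p):
    # Self-information of an outcome with probability p, in bits
    return -math.log2(p)

def entropy_bits(dist):
    """Entropy = expected value of self-information over the distribution, in bits."""
    return sum(p * self_information_bits(p) for p in dist if p > 0)

# A fair coin has entropy of exactly 1 bit
print(entropy_bits([0.5, 0.5]))            # 1.0
# A biased coin (P(heads) = 0.9) is less surprising on average
print(round(entropy_bits([0.9, 0.1]), 4))  # ~0.469
```

The biased coin's entropy is lower because the common outcome carries little self-information, and it dominates the expectation.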
Self-information is an example of a proper scoring rule.

Excerpt source: Wikipedia, the free encyclopedia. The full text of the article "Self-information" can be read on Wikipedia.