Entropy (information theory)

In information theory, systems are modeled by a transmitter, channel, and receiver. The transmitter produces messages that are sent through the channel. The channel modifies the message in some way. The receiver attempts to infer which message was sent. In this context, entropy (more specifically, Shannon entropy) is the expected value (average) of the information contained in each message. 'Messages' can be modeled by any flow of information.
In a more technical sense, there are reasons (explained below) to define the information of an event as the negative of the logarithm of its probability. The probability distribution of the events, together with the amount of information of each event, forms a random variable whose expected value is the average amount of information, or entropy, generated by the distribution. The units of entropy are the shannon, the nat, and the hartley, depending on the base of the logarithm used to define it; the shannon is commonly referred to as a bit.
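Written out with the usual notation, for a discrete random variable X that takes the values x_1, \dots, x_n with probabilities p(x_1), \dots, p(x_n), the entropy is the expected value of the self-information -\log_b p(x_i):

H(X) = \mathrm{E}[-\log_b p(X)] = -\sum_{i=1}^{n} p(x_i) \log_b p(x_i),

where the base b of the logarithm sets the unit: b = 2 gives shannons (bits), b = e gives nats, and b = 10 gives hartleys.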
The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources. For instance, the entropy of a single fair coin toss is 1 shannon, whereas the entropy of m independent tosses is m shannons. Generally, \log_2 n bits are needed to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable, the entropy (in shannons) is equal to this number of bits. Equality between the number of bits and shannons holds only while all outcomes are equally probable. If one of the events is more probable than the others, observing that event is less informative. Conversely, rarer events provide more information when observed. Since less probable events are observed more rarely, the net effect is that the entropy (thought of as average information) received from non-uniformly distributed data is less than \log_2 n. Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known. The ''meaning'' of the events observed (the meaning of ''messages'') does not matter in the definition of entropy. Entropy takes into account only the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
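These properties can be checked numerically. The short Python sketch below (the helper function and example distributions are illustrative, not from the article) computes the entropy of a discrete distribution and shows the fair-coin value, additivity over independent tosses, the sub-\log_2 n value for a non-uniform distribution, and the zero entropy of a certain outcome.

<syntaxhighlight lang="python">
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution; base 2 gives shannons (bits)."""
    return sum(-p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # 1.0: one fair coin toss is 1 shannon
print(entropy([1 / 8] * 8))           # 3.0: three independent fair tosses (2**3 equally likely outcomes)
print(entropy([0.7, 0.1, 0.1, 0.1]))  # ~1.36: non-uniform, so less than log2(4) = 2
print(entropy([1.0]))                 # 0.0: a certain outcome carries no information
</syntaxhighlight>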
In its most general sense, ''entropy'' refers to disorder or uncertainty. Shannon entropy was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication".〔Shannon, Claude E. (1948). "A Mathematical Theory of Communication". ''Bell System Technical Journal'' 27: 379–423, 623–656.〕 Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source. Rényi entropy generalizes Shannon entropy.
==Introduction==
Entropy is a measure of ''unpredictability'' of ''information content''. To get an informal, intuitive understanding of the connection between these three English terms, consider the example of a poll on some political issue. Usually, such polls happen because the outcome of the poll isn't already known. In other words, the outcome of the poll is relatively ''unpredictable'', and actually performing the poll and learning the results gives some new ''information''; these are just different ways of saying that the ''entropy'' of the poll results is large. Now, consider the case that the same poll is performed a second time shortly after the first poll. Since the result of the first poll is already known, the outcome of the second poll can be predicted well and the results should not contain much new information; in this case the entropy of the second poll result is small relative to the first.
Now consider the example of a coin toss. When the coin is fair, that is, when the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be. This is because there is no way to predict the outcome of the toss ahead of time: the best we can do is predict that the coin will come up heads, and this prediction will be correct with probability 1/2. Such a coin toss has one bit of entropy, since there are two possible outcomes that occur with equal probability, and learning the actual outcome conveys one bit of information. By contrast, a coin toss with a coin that has two heads and no tails has zero entropy, since the coin will always come up heads and the outcome can be predicted perfectly. Analogously, a single binary digit whose two values 0 and 1 are equally likely has an entropy of \log_2 2 = 1 shannon (one bit). Similarly, a trit with three equally likely values contains \log_2 3 (about 1.58496) bits of information.
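Worked out from the definition above (in shannons):

H(\text{fair coin}) = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right) = 1,

H(\text{two-headed coin}) = -(1 \cdot \log_2 1) = 0,

H(\text{fair trit}) = -3 \cdot \tfrac{1}{3}\log_2\tfrac{1}{3} = \log_2 3 \approx 1.585.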
English text has fairly low entropy. In other words, it is fairly predictable. Even if we don't know exactly what is going to come next, we can be fairly certain that, for example, there will be many more e's than z's, that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy for each character of message.〔Schneier, B: ''Applied Cryptography'', Second edition, page 234. John Wiley and Sons.〕
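One crude way to see this empirically is to compute the entropy of the single-character frequency distribution of a text sample, as in the Python sketch below (the sample text and names are illustrative). Because such a unigram estimate ignores all dependence between neighbouring characters, it comes out far above the 0.6 to 1.3 bits per character cited above, which accounts for longer-range structure in English.

<syntaxhighlight lang="python">
import math
from collections import Counter

def unigram_entropy(text):
    """Entropy (bits per character) of the single-character frequency distribution,
    ignoring any dependence between neighbouring characters."""
    counts = Counter(text)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 20
print(unigram_entropy(sample))  # about 4.3 bits/character for this sample, well above 0.6-1.3
</syntaxhighlight>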
The Chinese-language version of Wikipedia points out that Chinese characters have a much higher entropy than English letters: treating roughly 2,500 commonly used characters as equally likely, each character carries about -\log_2(1/2500) \approx 11.3 bits, almost three times higher than English. However, a more careful comparison would go beyond this simple calculation and take into account word-level usage and redundancy in both languages.
If a compression scheme is lossless—that is, you can always recover the entire original message by decompressing—then a compressed message has the same quantity of information as the original, but communicated in fewer characters. That is, it has more information, or a higher entropy, per character. This means a compressed message has less redundancy. Roughly speaking, Shannon's source coding theorem says that a lossless compression scheme cannot compress messages, on average, to have ''more'' than one bit of information per bit of message, but that any value ''less'' than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains.
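As a rough illustration (not part of the original article), the Python sketch below runs the standard-library zlib compressor on highly redundant English-like text and on random bytes. The redundant, low-entropy text shrinks substantially, while the near-maximum-entropy random data does not compress at all.

<syntaxhighlight lang="python">
import os
import zlib

# Highly redundant English-like text: low entropy per character, so it compresses well.
text = ("the quick brown fox jumps over the lazy dog " * 200).encode("utf-8")

# Random bytes: close to 8 bits of entropy per byte, so almost incompressible.
noise = os.urandom(len(text))

print(len(text), len(zlib.compress(text)))    # large reduction
print(len(noise), len(zlib.compress(noise)))  # roughly the same size, usually slightly larger
</syntaxhighlight>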
Shannon's theorem also implies that no lossless compression scheme can shorten ''all'' messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because we are usually only interested in compressing certain types of messages, for example English documents as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. However, the problem can still arise even in everyday use when applying a compression algorithm to already compressed data: for example, making a ZIP file of music that is already in the FLAC audio format is unlikely to achieve much extra saving in space.
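The same zlib sketch can be extended to show this: compressing data that is already compressed, and so already close to one bit of information per bit of message, yields essentially no further saving.

<syntaxhighlight lang="python">
import zlib

data = ("the quick brown fox jumps over the lazy dog " * 200).encode("utf-8")

once = zlib.compress(data)
twice = zlib.compress(once)  # re-compressing already-compressed data

# The second pass cannot remove redundancy that is no longer there;
# it typically adds a few bytes of header overhead instead.
print(len(data), len(once), len(twice))
</syntaxhighlight>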

Excerpt source: Wikipedia, the free encyclopedia.
Read the full article "Entropy (information theory)" on Wikipedia.


