Word2vec : English Wikipedia
Word2vec

Word2vec is a group of related models used to produce word embeddings. These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words: the network is shown a word and must guess which words occurred in adjacent positions in an input text. The order of the surrounding words is not important (the bag-of-words assumption).
After training, word2vec models can be used to map each word to a vector of typically several hundred dimensions that represents the word's relation to other words. This vector is taken from the neural network's hidden layer.
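As a concrete illustration of this mapping, the following minimal sketch trains a small model and looks up a learned vector. It assumes the third-party gensim library and a toy two-sentence corpus; the library, the corpus, and the parameter values are illustrative assumptions, not part of the article.

from gensim.models import Word2Vec

# Toy corpus of pre-tokenized sentences (an illustrative assumption).
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["the", "dog", "barks", "at", "the", "fox"],
]

# vector_size sets the dimensionality of the hidden layer, i.e. the
# length of each word embedding; sg=1 selects the skip-gram variant.
model = Word2Vec(sentences, vector_size=100, window=2, min_count=1, sg=1)

vec = model.wv["fox"]                        # the learned embedding for "fox"
print(vec.shape)                             # (100,)
print(model.wv.most_similar("fox", topn=3))  # nearest words by cosine similarity

In practice the model would be trained on a large corpus; with a toy corpus this small, the reported similarities are essentially noise.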
Word2vec relies on either skip-grams or continuous bag of words (CBOW) to create neural word embeddings. It was created by a team of researchers led by Tomas Mikolov at Google, and the algorithm has subsequently been analysed and explained by other researchers.
==Skip-grams and CBOW==

Skip-grams are word windows from which one word is excluded: an n-gram with gaps. With skip-grams, given a window of n words around a word w, word2vec predicts the contextual words c; i.e., it models the probability p(c|w). Conversely, CBOW predicts the current word from the context in the window, modelling p(w|c).
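The difference between the two prediction directions can be made concrete by enumerating the training pairs each variant generates. The sketch below is plain Python; the function names and the example sentence are invented for illustration.

def skipgram_pairs(tokens, window):
    # Yield (w, c) pairs: the centre word w is used to predict each
    # context word c, i.e. the model estimates p(c|w).
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                yield (w, tokens[j])

def cbow_pairs(tokens, window):
    # Yield (context, w) pairs: the surrounding words are used to
    # predict the centre word w, i.e. the model estimates p(w|c).
    for i, w in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        yield (context, w)

tokens = ["the", "quick", "brown", "fox"]
print(list(skipgram_pairs(tokens, window=1)))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ...]
print(list(cbow_pairs(tokens, window=1)))
# [(['quick'], 'the'), (['the', 'brown'], 'quick'), ...]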

Excerpt source: Wikipedia, the free encyclopedia
Read the full "Word2vec" article on Wikipedia


