Co-training
Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. It was introduced by Avrim Blum and Tom Mitchell in 1998.
==Algorithm design==
Co-training is a semi-supervised learning technique that requires two ''views'' of the data. It assumes that each example is described using two different feature sets that provide different, complementary information about the instance. Ideally, the two views are conditionally independent (i.e., the two feature sets of each instance are conditionally independent given the class) and each view is sufficient (i.e., the class of an instance can be accurately predicted from each view alone). Co-training first learns a separate classifier for each view using any labeled examples. The most confident predictions of each classifier on the unlabeled data are then used to iteratively construct additional labeled training data.〔Blum, A., Mitchell, T. Combining labeled and unlabeled data with co-training. ''COLT: Proceedings of the Workshop on Computational Learning Theory'', Morgan Kaufmann, 1998, pp. 92–100.〕
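A minimal sketch of this loop follows, assuming two NumPy feature matrices <code>X1</code> and <code>X2</code> (one per view), an integer label vector <code>y</code> that is only meaningful at <code>labeled_idx</code>, and scikit-learn Gaussian naive Bayes as the per-view learner; the names <code>co_train</code>, <code>n_iterations</code> and <code>n_per_class</code> are illustrative choices, not from the original paper.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled_idx, unlabeled_idx,
             n_iterations=10, n_per_class=1):
    """Grow the labeled pool by letting each view's classifier pseudo-label
    its most confident unlabeled examples, then retrain on the enlarged pool."""
    labeled = list(labeled_idx)
    unlabeled = list(unlabeled_idx)
    y = y.copy()                       # keep the caller's labels untouched
    clf1, clf2 = GaussianNB(), GaussianNB()

    for _ in range(n_iterations):
        if not unlabeled:
            break
        # One classifier per view, trained only on the current labeled pool.
        clf1.fit(X1[labeled], y[labeled])
        clf2.fit(X2[labeled], y[labeled])

        newly_labeled = set()
        for clf, X in ((clf1, X1), (clf2, X2)):
            proba = clf.predict_proba(X[unlabeled])
            # For each class, move this classifier's most confident unlabeled
            # examples into the labeled pool under its own predicted label.
            for c in range(proba.shape[1]):
                for j in np.argsort(proba[:, c])[::-1][:n_per_class]:
                    idx = unlabeled[j]
                    y[idx] = clf.classes_[c]       # pseudo-label
                    newly_labeled.add(idx)

        labeled.extend(newly_labeled)
        unlabeled = [i for i in unlabeled if i not in newly_labeled]

    # Final classifiers, one per view, trained on the enlarged labeled pool.
    clf1.fit(X1[labeled], y[labeled])
    clf2.fit(X2[labeled], y[labeled])
    return clf1, clf2, labeled
</syntaxhighlight>
In the web-page setting of Blum and Mitchell, the two views would be, for example, the words appearing on the page itself and the words in hyperlinks pointing to that page.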
The original co-training paper described experiments using co-training to classify web pages as "academic course home page" or not; the classifier correctly categorized 95% of 788 web pages with only 12 labeled web pages as examples.〔Blum & Mitchell 1998〕 The paper has been cited over 1,000 times and received the 10-Year Best Paper Award at the 25th International Conference on Machine Learning (ICML 2008), a renowned computer science conference.
Krogel and Scheffer showed in 2004 that co-training is only beneficial if the data sets used for classification are independent. Co-training can only work if one of the classifiers correctly labels a piece of data that the other classifier previously misclassified. If both classifiers agree on all the unlabeled data, i.e. they are not independent, labeling the data creates no new information. When they applied co-training to problems in functional genomics, co-training worsened the results, as the dependence of the classifiers was greater than 60%.
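A rough diagnostic in this spirit is sketched below, under the assumption that <code>clf1</code> and <code>clf2</code> are the two trained view classifiers from the sketch above and that a plain disagreement rate on the unlabeled pool is an acceptable stand-in for the dependence measure Krogel and Scheffer used (their exact measure is not reproduced here).
<syntaxhighlight lang="python">
import numpy as np

def disagreement_rate(clf1, clf2, X1_unlabeled, X2_unlabeled):
    """Fraction of unlabeled examples on which the two view classifiers
    predict different classes; near zero suggests the views are redundant
    and co-training has little new information to contribute."""
    p1 = clf1.predict(X1_unlabeled)
    p2 = clf2.predict(X2_unlabeled)
    return float(np.mean(p1 != p2))
</syntaxhighlight>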

Source of excerpt: Wikipedia, the free encyclopedia
Read the full "Co-training" article on Wikipedia


