BrownBoost : Wikipedia English edition
BrownBoost
BrownBoost is a boosting algorithm that may be robust to noisy datasets. BrownBoost is an adaptive version of the boost by majority algorithm. As is true for all boosting algorithms, BrownBoost is used in conjunction with other machine learning methods. BrownBoost was introduced by Yoav Freund in 2001.〔Yoav Freund. An adaptive version of the boost by majority algorithm. Machine Learning, 43(3):293--318, June 2001.〕
==Motivation==

AdaBoost performs well on a variety of datasets; however, it can be shown that AdaBoost does not perform well on noisy datasets.〔Dietterich, T. G. (2000). An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2):139-158.〕 This is a result of AdaBoost's focus on examples that are repeatedly misclassified. In contrast, BrownBoost effectively "gives up" on examples that are repeatedly misclassified. The core assumption of BrownBoost is that noisy examples will be repeatedly mislabeled by the weak hypotheses, while non-noisy examples will be labeled correctly often enough not to be "given up on." Thus only noisy examples are "given up on," whereas non-noisy examples contribute to the final classifier. In turn, if the final classifier is learned only from the non-noisy examples, its generalization error may be much better than that of a classifier learned from both noisy and non-noisy examples.
The user of the algorithm can set the amount of error to be tolerated in the training set. Thus, if the training set is noisy (say, 10% of all examples are assumed to be mislabeled), the booster can be told to accept a 10% error rate. Since noisy examples may be ignored, only the non-noisy examples contribute to the learning process.
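The "giving up" behavior can be illustrated by contrasting the two algorithms' weighting functions. The sketch below is an assumption-laden simplification (it uses a bare `exp(-margin^2 / c)` shape and omits BrownBoost's remaining-time variable and the erf-based potential): AdaBoost's weight grows without bound as an example keeps being misclassified, whereas a BrownBoost-style weight shrinks again once an example's margin becomes very negative, so persistently misclassified (likely noisy) examples stop influencing the learner.

```python
import math

def adaboost_weight(margin):
    # AdaBoost: exp(-margin). A repeatedly misclassified example
    # (large negative margin) receives an ever-larger weight.
    return math.exp(-margin)

def brownboost_weight(margin, c=4.0):
    # Simplified BrownBoost-style weight: exp(-margin^2 / c).
    # (Hypothetical sketch; the real algorithm also tracks a
    # "remaining time" variable inside the exponent.)
    # Both confidently correct AND hopelessly wrong examples get
    # tiny weight, so likely-noisy examples are "given up on".
    return math.exp(-(margin ** 2) / c)

# margin < 0 means the current combined classifier is wrong on the example
for m in [-4, -1, 0, 1, 4]:
    print(m, round(adaboost_weight(m), 3), round(brownboost_weight(m), 3))
```

Running this shows the key difference: at margin -4, AdaBoost's weight is about 54.6 (the example dominates training), while the BrownBoost-style weight is about 0.018 (the example is effectively ignored).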

Excerpt source: the free encyclopedia Wikipedia
Read the full "BrownBoost" article at Wikipedia