AdaBoost
AdaBoost, short for "Adaptive Boosting", is a machine learning meta-algorithm formulated by Yoav Freund and Robert Schapire, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many other types of learning algorithms to improve their performance. The outputs of the other learning algorithms ('weak learners') are combined into a weighted sum that represents the final output of the boosted classifier. AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of those instances misclassified by previous classifiers. AdaBoost is sensitive to noisy data and outliers. In some problems, however, it can be less susceptible to overfitting than other learning algorithms. The individual learners can be weak, but as long as each one performs slightly better than random guessing (i.e., its error rate is below 0.5 for binary classification), the final model can be proven to converge to a strong learner.
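To make the weighted-sum mechanics concrete, below is a minimal from-scratch sketch of discrete AdaBoost using one-level decision trees ("stumps") as the weak learners; the final classifier is the sign of the alpha-weighted sum of the stumps' votes. The function names and the exhaustive stump search are illustrative choices for this sketch, not part of any standard library.
<syntaxhighlight lang="python">
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Discrete AdaBoost with decision stumps. y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # start with uniform sample weights
    ensemble = []                        # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Naive exhaustive search for the stump with the smallest
        # weighted error under the current sample weights.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        j, thr, pol = best
        err = min(max(best_err, 1e-12), 1 - 1e-12)  # keep the log well-defined
        alpha = 0.5 * np.log((1 - err) / err)       # this stump's vote weight
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # misclassified samples gain weight
        w /= w.sum()                     # renormalise to a distribution
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict_adaboost(ensemble, X):
    """Sign of the alpha-weighted sum of the weak learners' votes."""
    score = np.zeros(X.shape[0])
    for j, thr, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
</syntaxhighlight>
The exhaustive threshold search is deliberately naive and exists only to make each round visible: the reweighting step is exactly the 'adaptive' behaviour described above, concentrating subsequent rounds on the samples the ensemble still gets wrong.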
While every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset, AdaBoost (with decision trees as the weak learners) is often referred to as the best out-of-the-box classifier. When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree-growing algorithm, so that later trees tend to focus on harder-to-classify examples.
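In practice this boosted-trees combination is usually taken off the shelf rather than written by hand. A hedged usage sketch, assuming scikit-learn is available (the snippet follows releases 1.2 and later, where the weak-learner argument is named <code>estimator</code>; older versions call it <code>base_estimator</code>):
<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(
    # Depth-1 trees ("stumps") are the classic weak learner; this keyword
    # is `estimator` from scikit-learn 1.2 on, `base_estimator` before.
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=100,   # number of boosting rounds
    learning_rate=1.0,  # shrinkage applied to each round's contribution
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
</syntaxhighlight>
Depth-1 trees keep each weak learner only slightly better than chance, which is the regime in which boosting's convergence argument applies; allowing deeper trees makes each round stronger but tends to increase the risk of overfitting noisy data.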
==Overview==
Problems in machine learning often suffer from the curse of dimensionality: each sample may consist of a huge number of potential features (for instance, there can be 162,336 Haar features, as used by the Viola–Jones object detection framework, in a 24×24 pixel image window), and evaluating every feature not only slows classifier training and execution but can actually reduce predictive power, per the ''Hughes Effect''. Unlike neural networks and SVMs, the AdaBoost training process selects only those features known to improve the predictive power of the model, reducing dimensionality and potentially improving execution time, since irrelevant features need not be computed.
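This feature-selection behaviour can be observed directly: with stumps, each boosting round commits to a single feature, so features that never reduce the weighted error are simply never used. A self-contained sketch, again assuming scikit-learn 1.2+ and synthetic data invented purely for illustration:
<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# 5 informative features hidden among 95 irrelevant ones.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=5,
                           n_redundant=0, random_state=0)
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50, random_state=0).fit(X, y)

# Each fitted stump splits on exactly one feature: its root node's feature
# index. (A stump could in principle degenerate to a single leaf; that
# corner case is ignored in this sketch.)
used = sorted({est.tree_.feature[0] for est in clf.estimators_})
print(f"{len(used)} of {clf.n_features_in_} features ever used:", used)
</syntaxhighlight>
At prediction time only the features that appear in some stump need to be computed, which is the execution-time saving the paragraph above refers to.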

Excerpt source: the free encyclopedia Wikipedia.
Read the full text of "AdaBoost" on Wikipedia.


