Linear Programming Boosting (LPBoost) is a supervised classifier from the boosting family of classifiers. LPBoost maximizes a ''margin'' between training samples of different classes, and hence also belongs to the class of margin-maximizing supervised classification algorithms. Consider a classification function

: <math>f: \mathcal{X} \to \{-1, 1\},</math>

which classifies samples from a space <math>\mathcal{X}</math> into one of two classes, labelled 1 and -1, respectively. LPBoost is an algorithm for ''learning'' such a classification function, given a set of training examples with known class labels. LPBoost is a machine learning technique especially suited to applications of joint classification and feature selection in structured domains.

== LPBoost overview ==

As in all boosting classifiers, the final classification function is of the form

: <math>f(\boldsymbol{x}) = \sum_{j=1}^{J} \alpha_j h_j(\boldsymbol{x}),</math>

where <math>\alpha_j</math> are non-negative weightings for ''weak'' classifiers <math>h_j: \mathcal{X} \to \{-1, 1\}</math>. Each individual weak classifier <math>h_j</math> may be only slightly better than random, but the resulting linear combination of many weak classifiers can perform very well.

LPBoost constructs <math>f</math> by starting with an empty set of weak classifiers. Iteratively, a single weak classifier is selected, added to the set of considered weak classifiers, and all the weights for the current set of weak classifiers are adjusted. This is repeated until no weak classifiers to add remain. The property that all classifier weights are adjusted in each iteration is known as the ''totally corrective'' property. Early boosting methods, such as AdaBoost, do not have this property and converge more slowly.
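The following is a minimal sketch of this iterative scheme, assuming decision stumps as the pool of weak classifiers and SciPy's <code>linprog</code> (HiGHS) as the linear-programming solver. The soft-margin dual formulation used here follows the standard LPBoost linear program; it, along with names such as <code>lpboost_fit</code> and the parameter <code>nu</code>, is an illustrative assumption and not taken from the text above.

<syntaxhighlight lang="python">
# Sketch of soft-margin LPBoost via column generation (assumed formulation):
#   dual LP:  min beta  s.t.  sum_i u_i y_i h_j(x_i) <= beta  for chosen j,
#             sum_i u_i = 1,  0 <= u_i <= D,  D = 1 / (nu * n).
# The primal weights alpha_j are read off as duals of the inequality rows.
import numpy as np
from scipy.optimize import linprog


def stump_predict(X, feat, thresh, sign):
    """Decision stump: +1 if sign * (x[feat] - thresh) > 0, else -1."""
    return np.where(sign * (X[:, feat] - thresh) > 0, 1.0, -1.0)


def best_stump(X, y, u):
    """Return the stump with the largest weighted edge sum_i u_i y_i h(x_i)."""
    best = (None, -np.inf)
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (+1, -1):
                edge = np.dot(u * y, stump_predict(X, feat, thresh, sign))
                if edge > best[1]:
                    best = ((feat, thresh, sign), edge)
    return best


def lpboost_fit(X, y, nu=0.1, tol=1e-5, max_iters=50):
    """Column generation: add one weak classifier per round, re-solve the LP."""
    n = len(y)
    D = 1.0 / (nu * n)                   # soft-margin cap on sample weights
    u, beta = np.full(n, 1.0 / n), 0.0   # uniform sample weights to start
    hypotheses, alpha = [], np.array([])

    for _ in range(max_iters):
        h, edge = best_stump(X, y, u)
        if edge <= beta + tol:           # no weak classifier improves the LP
            break
        hypotheses.append(h)

        # Variables: [u_1, ..., u_n, beta]; minimize beta.
        c = np.r_[np.zeros(n), 1.0]
        A_ub = np.array([np.r_[y * stump_predict(X, f, t, s), -1.0]
                         for (f, t, s) in hypotheses])
        b_ub = np.zeros(len(hypotheses))
        A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, D)] * n + [(None, None)], method="highs")
        u, beta = res.x[:n], res.x[n]
        # Totally corrective: every alpha_j is re-solved, not just the newest.
        alpha = -res.ineqlin.marginals

    def predict(Xnew):
        scores = sum(a * stump_predict(Xnew, f, t, s)
                     for a, (f, t, s) in zip(alpha, hypotheses))
        return np.sign(scores)

    return predict
</syntaxhighlight>

Because the LP is re-solved over all selected weak classifiers in every round, the sketch is totally corrective in the sense described above, in contrast to AdaBoost, which only fixes the weight of the newly added classifier.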