High-dimensional data, meaning data that requires more than two or three dimensions to represent, can be difficult to interpret. One approach to simplification is to assume that the data of interest lie on an embedded non-linear manifold within the higher-dimensional space. If the manifold is of low enough dimension, the data can be visualised in the low-dimensional space. Below is a summary of some of the important algorithms from the history of manifold learning and nonlinear dimensionality reduction (NLDR).〔John A. Lee, Michel Verleysen, Nonlinear Dimensionality Reduction, Springer, 2007.〕 Many of these non-linear dimensionality reduction methods are related to the linear methods listed below.

Non-linear methods can be broadly classified into two groups: those that provide a mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa), and those that just give a visualisation. In the context of machine learning, mapping methods may be viewed as a preliminary feature extraction step, after which pattern recognition algorithms are applied. Typically, those that just give a visualisation are based on proximity data, that is, distance measurements.

==Linear methods==
* Independent component analysis (ICA).
* Principal component analysis (PCA), also called the Karhunen–Loève transform (KLT).
* Singular value decomposition (SVD).
* Factor analysis.
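To illustrate how two of these linear methods relate, the following minimal sketch computes a PCA projection via the SVD of the centred data matrix: the right singular vectors of the centred data are the principal directions. The data matrix, random seed, and target dimension k here are illustrative assumptions, not part of any particular method's specification.

```python
import numpy as np

# Toy high-dimensional data: 200 samples in 10 dimensions (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Centre the data: PCA is defined on mean-centred observations.
X_centred = X - X.mean(axis=0)

# SVD of the centred data matrix. The rows of Vt are the principal
# directions; the singular values S measure the spread along each one.
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)

# Project onto the first k principal components (k = 2 for visualisation).
k = 2
X_reduced = X_centred @ Vt[:k].T  # equivalently: U[:, :k] * S[:k]

print(X_reduced.shape)  # (200, 2)
```

The same two-dimensional output is what the non-linear methods above aim to produce when the data lie on a curved rather than flat manifold, where a single linear projection like this one would distort the structure.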