Multilinear subspace learning (MSL) aims to learn a specific small part of a large space of multidimensional objects having a particular desired property.〔M. A. O. Vasilescu, D. Terzopoulos, "Multilinear Analysis of Image Ensembles: TensorFaces," Proc. 7th European Conference on Computer Vision (ECCV 2002), Copenhagen, Denmark, May 2002.〕〔M. A. O. Vasilescu, "Human Motion Signatures: Analysis, Synthesis, Recognition," Proc. International Conference on Pattern Recognition (ICPR 2002), Vol. 3, Quebec City, Canada, August 2002, pp. 456–460.〕〔M. A. O. Vasilescu, D. Terzopoulos, "Multilinear Subspace Analysis of Image Ensembles," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2003), Madison, WI, June 2003.〕 It is a dimensionality reduction approach for finding a low-dimensional representation, with certain preferred characteristics, of high-dimensional tensor data through a direct mapping, without going through vectorization.〔X. He, D. Cai, P. Niyogi, "Tensor subspace analysis," in: Advances in Neural Information Processing Systems 18 (NIPS), 2005.〕 The term tensor in MSL refers to multidimensional arrays. Examples of tensor data include images (2D/3D), video sequences (3D/4D), and hyperspectral cubes (3D/4D). The mapping from a high-dimensional tensor space to a low-dimensional tensor space or vector space is called a multilinear projection.〔〕 MSL methods are higher-order generalizations of linear subspace learning methods such as principal component analysis (PCA), linear discriminant analysis (LDA) and canonical correlation analysis (CCA). In the literature, MSL is also referred to as tensor subspace learning or tensor subspace analysis.〔〕 Research on MSL has progressed from heuristic exploration in the 2000s to systematic investigation in the 2010s.
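As a concrete illustration, a tensor-to-tensor multilinear projection applies one projection matrix per mode of the tensor via mode-n products. The NumPy sketch below is illustrative only: the projection matrices are random here, whereas MSL methods such as MPCA learn them from data; the function name `mode_n_product` is a hypothetical helper, not from any specific library.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (0-indexed)."""
    # Move the target mode to the front, unfold, multiply, then restore order.
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    unfolded = t.reshape(shape[0], -1)           # mode-n unfolding
    projected = matrix @ unfolded                # project along mode n
    new_shape = (matrix.shape[0],) + shape[1:]
    return np.moveaxis(projected.reshape(new_shape), 0, mode)

# Project a 32x32 array (e.g. a grayscale image) to a 4x4 representation
# using one projection matrix per mode (random for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))
U1 = rng.standard_normal((4, 32))   # mode-1 projection
U2 = rng.standard_normal((4, 32))   # mode-2 projection

Y = mode_n_product(mode_n_product(X, U1, 0), U2, 1)
print(Y.shape)  # (4, 4)
```

For a 2D tensor this reduces to the familiar matrix form Y = U1 X U2ᵀ; the mode-product formulation is what generalizes the same idea to tensors of any order.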
==Background==

With the advances in data acquisition and storage technology, big data (massive data sets) are being generated daily in a wide range of emerging applications. Most of these big data are multidimensional. Moreover, they are usually very high-dimensional, contain a large amount of redundancy, and occupy only a part of the input space. Therefore, dimensionality reduction is frequently employed to map high-dimensional data to a low-dimensional space while retaining as much information as possible. Linear subspace learning algorithms are traditional dimensionality reduction techniques that represent input data as vectors and solve for an optimal linear mapping to a lower-dimensional space. Unfortunately, they often become inadequate when dealing with massive multidimensional data: vectorization produces very high-dimensional vectors, requires the estimation of a large number of parameters, and breaks the natural structure and correlation in the original data.〔〕〔〕〔H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 18–39, January 2008.〕〔S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, "Discriminant analysis with tensor representation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. I, June 2005, pp. 526–532.〕 MSL is closely related to tensor decompositions.〔T. G. Kolda, B. W. Bader, "Tensor decompositions and applications," SIAM Review 51 (3) (2009) 455–500.〕 Both employ multilinear algebra tools. The difference is that tensor decomposition focuses on factor analysis, while MSL focuses on dimensionality reduction. MSL belongs to tensor-based computation and can be seen as tensor-level computational thinking applied to machine learning.
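The parameter-explosion argument above can be made concrete with simple arithmetic (the sizes below are hypothetical, chosen only for illustration). Mapping a vectorized image with a single full projection matrix requires far more parameters than mapping it mode by mode with one small matrix per mode:

```python
# Parameter counts for mapping a 100x100 image to a 10x10 representation.
I1, I2 = 100, 100   # input image dimensions
P1, P2 = 10, 10     # target subspace dimensions

# Linear subspace learning: vectorize to an I1*I2-dimensional vector,
# then apply one (P1*P2) x (I1*I2) projection matrix.
linear_params = (P1 * P2) * (I1 * I2)    # 100 * 10000 = 1,000,000

# Multilinear subspace learning: one small projection matrix per mode.
multilinear_params = P1 * I1 + P2 * I2   # 1000 + 1000 = 2,000

print(linear_params, multilinear_params)  # 1000000 2000
```

The multilinear parameterization is smaller by orders of magnitude, which is why MSL scales better on massive multidimensional data and is less prone to overfitting from estimating too many parameters.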