User:Perimosocordiae/Manifold alignment
Manifold alignment is a class of machine learning algorithms that produce projections between sets of data, given that the original data sets lie on a common manifold. The concept was first introduced as such by Ham, Lee, and Saul in 2003[1], though the idea of correlating sets of vectors has existed since at least 1936[2].
Overview
Manifold alignment takes advantage of the assumption that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial domains.
Consider the case of aligning two data sets, \(X\) and \(Y\), with \(X_i \in \mathbb{R}^m\) and \(Y_i \in \mathbb{R}^n\).
Manifold alignment algorithms attempt to project both \(X\) and \(Y\) into a new \(d\)-dimensional space such that the projections both minimize distance between corresponding points and preserve the local manifold structure of the original data. The projection functions are denoted \(\phi_X \colon \mathbb{R}^m \to \mathbb{R}^d\) and \(\phi_Y \colon \mathbb{R}^n \to \mathbb{R}^d\). Let \(W\) represent the binary correspondence matrix between points of \(X\) and \(Y\), and let \(S_X\) and \(S_Y\) represent pointwise similarities within each data set. Writing \(F_i = \phi_X(X_i)\) and \(G_j = \phi_Y(Y_j)\), the loss function for manifold alignment can then be written:

\[ C(F, G) \;=\; (1-\mu)\sum_{i,j}\left\| F_i - G_j \right\|^2 W_{i,j} \;+\; \mu\sum_{i,j}\left\| F_i - F_j \right\|^2 S_{X,i,j} \;+\; \mu\sum_{i,j}\left\| G_i - G_j \right\|^2 S_{Y,i,j} \]
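As a concrete illustration, the loss above can be evaluated directly on arrays. This is a minimal sketch with hypothetical toy inputs; the weighting convention used here, with 1 − μ on the correspondence term and μ on the two within-set terms, is one common choice:

```python
import numpy as np

def alignment_loss(F, G, W, Sx, Sy, mu):
    """Manifold alignment loss:
    (1 - mu) * sum_ij ||F_i - G_j||^2 W_ij
      + mu * sum_ij ||F_i - F_j||^2 Sx_ij
      + mu * sum_ij ||G_i - G_j||^2 Sy_ij
    """
    def sqdist(A, B):
        # Matrix of squared Euclidean distances between rows of A and B.
        return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    corr = (sqdist(F, G) * W).sum()                               # correspondence term
    within = (sqdist(F, F) * Sx).sum() + (sqdist(G, G) * Sy).sum()  # structure terms
    return (1 - mu) * corr + mu * within

# Toy projections where the two corresponding point sets already coincide:
F = np.array([[0.0], [1.0]])
G = np.array([[0.0], [1.0]])
W = np.eye(2)          # point i in X corresponds to point i in Y
S = np.ones((2, 2))    # uniform within-set similarities
print(alignment_loss(F, G, W, S, S, mu=0.5))  # → 2.0 (zero correspondence cost)
```

Because the projections coincide on corresponding points, only the structure-preservation terms contribute to the loss.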
Note that the coefficient \(\mu\) can be tuned to adjust how much importance is given to preserving manifold structure versus minimizing the distance between corresponding points. Solving this optimization problem is equivalent to solving a generalized eigenvalue problem using the graph Laplacian[3] of the joint matrix, \(G\):

\[ G = \begin{bmatrix} \mu S_X & (1-\mu)W \\ (1-\mu)W^T & \mu S_Y \end{bmatrix} \]
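The whole one-step procedure can be sketched end to end with NumPy. This is an illustrative toy implementation, not any author's reference code: the Gaussian similarities, the identity correspondence matrix, and the sampled data sets are all assumptions made for the example.

```python
import numpy as np

# Two toy data sets sampled from the same 1-D manifold, in different spaces.
t = np.linspace(0.0, 1.0, 20)
X = np.column_stack([t, t ** 2])                # 20 points in R^2
Y = np.column_stack([t, np.sin(t), t ** 3])     # 20 points in R^3

def similarity(A, sigma=0.5):
    # Gaussian (heat-kernel) similarities within one data set.
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

Sx, Sy = similarity(X), similarity(Y)
W = np.eye(len(X))   # binary correspondence: X_i <-> Y_i (assumed known)
mu = 0.5             # structure-preservation vs. correspondence trade-off

# Joint matrix G and its graph Laplacian L = D - G.
G = np.block([[mu * Sx, (1 - mu) * W],
              [(1 - mu) * W.T, mu * Sy]])
D = G.sum(axis=1)
L = np.diag(D) - G

# Generalized eigenproblem L v = lambda D v, solved via the symmetric
# normalization D^{-1/2} L D^{-1/2}; the smallest nontrivial eigenvectors
# give the joint d-dimensional embedding.
Dinv = np.diag(1.0 / np.sqrt(D))
vals, vecs = np.linalg.eigh(Dinv @ L @ Dinv)
vecs = Dinv @ vecs                 # map back to generalized eigenvectors
d = 1                              # target dimensionality
F = vecs[:len(X), 1:1 + d]         # projection of X (index 0 is the trivial
Gy = vecs[len(X):, 1:1 + d]        # constant eigenvector, so it is skipped)

# Corresponding points should land near each other in the shared space.
print(np.abs(F - Gy).mean())
```

Because both halves of each joint eigenvector come from the same eigenproblem, the two embeddings share a consistent sign and scale, so corresponding rows of `F` and `Gy` can be compared directly.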
Variants
Several variants of the basic manifold alignment technique have been proposed.
- Various dimension reduction algorithms may be used: Laplacian Eigenmaps, Locally Linear Embedding, Isomap, and semi-definite embedding[4].
- Two-step alignments[5][6], as opposed to the standard one-step approach.
- Instance-level versus feature-level projections.
- Alignment with different amounts of correspondence information: Supervised, semi-supervised[7], and unsupervised[8] learning, as well as learning from labelled data.
- Alignment at multiple scales[9], using diffusion wavelet trees.
Applications
Manifold alignment is suited to problems with several corpora that lie on a shared manifold, especially when each corpus is of a different dimensionality. Examples include:
- Cross-language information retrieval / automatic translation
- Transfer learning of policy and state representations
References
- ^ Ham, Ji Hun; et al. (2003). "Learning high dimensional correspondences from low dimensional manifolds" (PDF). Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003). Washington DC, USA.
- ^ Hotelling, H. (1936). "Relations between two sets of variates" (PDF). Biometrika. 28 (3/4).
- ^ Belkin, M.; et al. (2003). "Laplacian eigenmaps for dimensionality reduction and data representation" (PDF). Neural Computation.
- ^ Xiong, L.; et al. (2007). "Semi-definite manifold alignment". Proceedings of the 18th European Conference on Machine Learning.
- ^ Lafon, Stephane; et al. (2006). "Data fusion and multicue data matching by diffusion maps" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence.
- ^ Wang, Chang; et al. (2008). "Manifold Alignment using Procrustes Analysis" (PDF). Proceedings of the 25th International Conference on Machine Learning.
- ^ Ham, Ji Hun; et al. (2005). "Semisupervised alignment of manifolds" (PDF). Proceedings of the Annual Conference on Uncertainty in Artificial Intelligence.
- ^ Wang, Chang; et al. (2009). "Manifold Alignment without Correspondence" (PDF). Proceedings of the 21st International Joint Conference on Artificial Intelligence.
- ^ Wang, Chang; et al. (2010). "Multiscale Manifold Alignment" (PDF). University of Massachusetts Technical Report UM-CS-2010-049.
Further Reading
- Ma, Yunqian (Apr 15, 2012). Manifold Learning Theory and Applications. Taylor & Francis Group. p. 376. ISBN 1439871094.
- Wang, Chang; et al. (2009). "A General Framework for Manifold Alignment" (PDF). AAAI Fall Symposium on Manifold Learning and its Applications.
- Wang, Chang; et al. (2011). "Heterogeneous Domain Adaptation using Manifold Alignment" (PDF). Proceedings of the 22nd International Joint Conference on Artificial Intelligence.