Manifold learning is an umbrella term for research directions and methods that try to exploit the possibility that the data lie on a lower-dimensional submanifold embedded in a higher-dimensional space. The hope is that by exploiting this kind of regularity, using tools from the mathematics of differentiable manifolds, we obtain data analysis methods that work efficiently on problems with high-dimensional input spaces.
In the first part of my talk, I introduce some prominent manifold learning methods, such as Isomap, Locally Linear Embedding, and Laplacian Eigenmaps, all of which are essentially nonlinear dimensionality reduction methods.
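As a minimal illustration (not part of the talk itself), all three methods mentioned above are available in scikit-learn's `manifold` module; the sketch below, assuming the standard synthetic "Swiss roll" data set, embeds 3-D points sampled from a 2-D submanifold back into 2 dimensions:

```python
# A hedged sketch using scikit-learn; the Swiss-roll data and the
# parameter choices (n_neighbors=10) are illustrative assumptions.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

# 3-D points sampled from a 2-D submanifold (the "Swiss roll").
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Each method builds a neighborhood graph and embeds the data in 2-D.
# SpectralEmbedding is scikit-learn's implementation of Laplacian Eigenmaps.
for Method in (Isomap, LocallyLinearEmbedding, SpectralEmbedding):
    embedding = Method(n_neighbors=10, n_components=2).fit_transform(X)
    print(Method.__name__, embedding.shape)
```

Each call returns an array of shape `(500, 2)`, i.e. one 2-D coordinate per input point; the methods differ in which geometric quantity of the neighborhood graph they try to preserve.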
In the second part of the talk, I present my own work and show that there are certain machine learning methods that can provably benefit from the fact that the data lie on a lower-dimensional submanifold.