I recently gave a talk at the first Deep Learning London meetup, on March 26, 2014. I took this opportunity to prepare a detailed tutorial on the fundamental building block of deep learning algorithms, the auto-encoder. My slides were mostly based on the analysis of Marc’Aurelio Ranzato’s 2007 NIPS paper on “Sparse Feature Learning for Deep Belief Networks”, which he co-authored with Y-Lan Boureau and our PhD advisor Yann LeCun. I reimplemented their algorithm and experiments in Matlab/Octave, reproducing their results on learning a two-layer stacked sparse auto-encoder that models the MNIST handwritten digits dataset.
The Matlab/Octave code and data can be accessed on my GitHub:
The tutorial slides (PiotrMirowski_2014_ReviewAutoEncoders) contain detailed explanations about the learning and inference in auto-encoders, complete with Matlab code. They also mention some interesting applications of auto-encoders to text, such as Hinton and Salakhutdinov’s seminal paper on semantic hashing, Marc’Aurelio’s ICML 2008 paper on semi-supervised learning of auto-encoders for document retrieval, and our NIPS Deep Learning Workshop 2010 paper on Dynamic Auto-Encoders for Semantic Indexing. Finally, I also mention the recent and very impressive work at Microsoft Research by Li Deng, Xiaodong He and Jianfeng Gao’s team, who depart from the auto-encoder paradigm to learn query and document embeddings from pair-wise web clickthrough data, using Deep Structured Semantic Models.
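To give a flavour of what a single sparse auto-encoder layer computes, here is a minimal NumPy sketch: an encoder, a linear decoder, and a few gradient steps on a reconstruction loss with an L1 sparsity penalty on the code. This is a hypothetical illustration, not the code from the repository or the slides (the original experiments are in Matlab/Octave); the dimensions, learning rate, and penalty weight are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (real MNIST inputs are 28x28 = 784 pixels);
# shrunk here so the demo runs in a fraction of a second.
n_in, n_hid = 64, 16
W_e = rng.normal(0, 0.1, (n_hid, n_in))   # encoder weights
b_e = np.zeros(n_hid)                     # encoder bias
W_d = rng.normal(0, 0.1, (n_in, n_hid))   # decoder weights
b_d = np.zeros(n_in)                      # decoder bias

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def loss_and_grads(x, lam=1e-3):
    """Squared reconstruction loss + L1 sparsity penalty on the code."""
    z = sigmoid(W_e @ x + b_e)            # encoder: sparse code
    x_hat = W_d @ z + b_d                 # decoder: linear reconstruction
    err = x_hat - x
    loss = 0.5 * err @ err + lam * np.abs(z).sum()
    # Backpropagate through the decoder, the penalty, and the encoder.
    dz = W_d.T @ err + lam * np.sign(z)
    da = dz * z * (1.0 - z)               # sigmoid derivative
    return loss, (np.outer(da, x), da, np.outer(err, z), err)

x = rng.random(n_in)                      # a fake "image" in [0, 1)
lr = 0.05
loss0, _ = loss_and_grads(x)
for _ in range(200):                      # a few SGD steps on one sample
    loss, (gWe, gbe, gWd, gbd) = loss_and_grads(x)
    W_e -= lr * gWe; b_e -= lr * gbe
    W_d -= lr * gWd; b_d -= lr * gbd
loss1, _ = loss_and_grads(x)
print(loss0, loss1)                       # reconstruction loss should drop
```

A stacked auto-encoder, as in the tutorial, simply trains a second such layer on the codes `z` produced by the first, before fine-tuning the whole network.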