Keynote talks at INNS Big Data 2016 and at the London ML Meetup

I recently gave a keynote talk on "Learning Sequences" at the INNS 2016 conference on Big Data on 24 October 2016. The conference took place in beautiful Thessaloniki, Greece and was organised by Prof. Plamen Angelov and Yannis Manolopoulos. The talk covered the principles of recurrent neural networks, Long Short-Term Memory and memory architectures, with applications to natural language processing and to playing 3D games with partially observed information. The slides of my talk are available here. I have also been interviewed about DeepMind and autonomous agents by Dr. Simone Scardapane.

A few weeks later, I gave a condensed version of that talk at the London ML Meetup. The updated slides covered extra material related to my current work at DeepMind on Learning to Navigate in Complex Environments.

Keynote talk about learning representations of text for natural language processing

I just had the immense honour of being invited to deliver a keynote talk on deep learning for natural language processing at the 2015 International Conference on Systems, Man and Cybernetics in Hong Kong. Among the other keynote speakers was Prof. Jack Gallant, from UC Berkeley, who presented his pioneering work on neuroimaging, such as fMRI, for studying how the human brain represents and processes sensory and cognitive information. The chair of my session was Dr. Tin Kam Ho, my former boss at Bell Labs and a truly inspiring mentor and scientist who, among other achievements, invented the random decision forests algorithm, and who currently works at IBM Watson.

The slides of my presentation are available here: PiotrMirowski_2015_LearningRepresentationsOfTextForNLP_Online

Computational Intelligence Unconference 2014

I recently attended and gave a talk at the well-organized and thought-provoking Computational Intelligence Unconference 2014, which took place at the BT Centre in London. The organizer, Daniel John Lewis, a bright and entrepreneurial PhD student at the University of Bristol, gathered an interesting crowd, including the founders of the Human Memome Project and Rohit Talwar from Fast Future Research, who talked about the ethics of AI and technological advancement.

With the help of Dirk Gorissen (who organizes the excellent Deep Learning London meetups), I gave a repeat of my talk on auto-encoders (slides). The Matlab code that accompanies the tutorial can be found in my previous post.

Meetup talk on neural language models and word embeddings

Dirk Gorissen, who, among many other things, organizes the London Big-O Algorithms Meetup series, invited me to give a talk at that meetup last Wednesday. I presented a shortened version of the extended tutorial on neural probabilistic language models that I gave last April at UCL. The slides are available here and the abstract follows below. The other speaker was Dominic Steinitz, who presented a great hands-on tutorial on Gibbs sampling.

A recent development in statistical language modelling happened when distributional semantics (the idea of defining words in relation to other words) met neural networks. Classical language modelling consists in assigning probabilities to sentences by factorizing the joint likelihood of the sentence into conditional likelihoods of each word given that word’s context. Neural language models further try to “embed” each word into a low-dimensional vector-space representation that is learned as the language model is trained. When trained on very large corpora, these models can achieve state-of-the-art performance in many applications such as speech recognition or sentence completion.
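To make the factorization concrete, here is a minimal NumPy sketch of a feed-forward neural language model with learned word embeddings. It is an illustrative toy, not the code or model from the talk: the tiny vocabulary, the dimensions and the random (untrained) weights are invented for the example.

```python
# Toy feed-forward neural language model (illustrative sketch only).
# Context words are mapped to embeddings, concatenated, passed through a
# hidden layer, and a softmax predicts the next word, so the sentence
# likelihood factorizes as P(w_1..w_T) = prod_t P(w_t | w_{t-n+1}..w_{t-1}).
import numpy as np

vocab = ["<s>", "the", "cat", "sat", "on", "mat", "</s>"]   # made-up vocabulary
V, embed_dim, context, hidden = len(vocab), 8, 2, 16
rng = np.random.default_rng(0)

E = rng.normal(0, 0.1, (V, embed_dim))                 # word embedding matrix
W1 = rng.normal(0, 0.1, (context * embed_dim, hidden)) # embeddings -> hidden
W2 = rng.normal(0, 0.1, (hidden, V))                   # hidden -> vocabulary

def next_word_probs(context_ids):
    """P(w_t | context) for one window of word indices."""
    x = E[context_ids].reshape(-1)        # concatenate the context embeddings
    h = np.tanh(x @ W1)                   # hidden representation
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return p / p.sum()                    # softmax over the vocabulary

def sentence_log_likelihood(word_ids):
    """Chain rule: sum of log conditional likelihoods of each word."""
    ll = 0.0
    for t in range(context, len(word_ids)):
        p = next_word_probs(word_ids[t - context:t])
        ll += np.log(p[word_ids[t]])
    return ll

ids = [vocab.index(w) for w in ["<s>", "the", "cat", "sat", "on", "the", "mat", "</s>"]]
print(sentence_log_likelihood(ids))       # log-likelihood under random weights
```

Training would adjust E, W1 and W2 (e.g. by stochastic gradient descent on the negative log-likelihood), which is how the embedding vectors are learned jointly with the language model.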

Tutorial on neural language models

I gave today an extended tutorial on neural probabilistic language models and their applications to distributional semantics (slides available here). The talk took place at University College London (UCL), as part of the South England Statistical NLP Meetup @ UCL, which is organized by Prof. Sebastian Riedel, the Lecturer who is heading the UCL Machine Reading research group, and by Dr. Andreas Vlachos, who is currently doing his post-doc at UCL.

The talk covered recent developments in statistical language models that go beyond n-grams and that build on distributional semantics (language modelling consists in assigning probabilities to sentences by factorizing the joint likelihood of the sentence into conditional likelihoods of each word given that word’s history). In these new language models, also called continuous space language models or neural probabilistic language models, each word is “embedded” into a low-dimensional vector-space representation that is learned as the language model is trained. By relying on very large corpora (millions or billions of words, such as the 3.2GB English Wikipedia corpus), these models achieve state-of-the-art perplexity and word error rates. Starting from neural probabilistic language models, I presented their extensions, including recurrent neural networks, log-bilinear models and continuous bags of words, mentioned the Microsoft Sentence Completion challenge, and illustrated how these models are able to preserve semantic linguistic regularities such as:
{king} – {man} + {woman} = {queen}.
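Such a regularity is usually probed by nearest-neighbour search on the embedding vectors, as in the following toy sketch (the 2-D vectors are made up for the example; real embeddings are learned from large corpora and typically have tens to hundreds of dimensions):

```python
# Toy word-analogy test on embeddings: answer "king - man + woman" by the
# nearest cosine neighbour. The 2-D vectors below are invented for
# illustration, not learned embeddings.
import numpy as np

emb = {
    "king":  np.array([0.8, 0.9]),
    "queen": np.array([0.2, 0.9]),
    "man":   np.array([0.7, 0.1]),
    "woman": np.array([0.1, 0.1]),
}

def analogy(a, b, c, emb):
    """Return the word closest (by cosine similarity) to emb[a] - emb[b] + emb[c]."""
    target = emb[a] - emb[b] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("king", "man", "woman", emb))   # -> "queen" with these toy vectors
```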

Tutorial on auto-encoders

I recently gave a talk at the first Deep Learning London meetup, on March 26, 2014. I took this opportunity to prepare a detailed tutorial on the fundamental building block of deep learning algorithms, the auto-encoder. My slides were mostly based on the analysis of Marc’Aurelio Ranzato‘s 2007 NIPS paper on “Sparse Feature Learning for Deep Belief Networks“, which he co-authored with Y-Lan Boureau and our PhD advisor Yann LeCun. I reimplemented their algorithm and experiments in Matlab/Octave, reproducing their results on learning a two-layer stacked sparse auto-encoder that models the MNIST handwritten digits dataset.

The Matlab/Octave code and data can be accessed on my github:
https://github.com/piotrmirowski/Tutorial_AutoEncoders
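For readers who prefer Python, here is a minimal NumPy sketch of the basic idea: a one-layer auto-encoder trained by gradient descent on reconstruction error plus an L1 sparsity penalty on the code. It is an illustrative toy under invented hyper-parameters, not the Matlab/Octave code linked above nor the exact sparsifying penalty of the Ranzato et al. paper.

```python
# Minimal one-layer sparse auto-encoder (illustrative NumPy sketch).
# Loss = squared reconstruction error + L1 penalty on the code.
import numpy as np

rng = np.random.default_rng(0)
D, H = 784, 64                        # input dim (e.g. 28x28 MNIST) and code dim
We, be = rng.normal(0, 0.01, (H, D)), np.zeros(H)   # encoder weights and bias
Wd, bd = rng.normal(0, 0.01, (D, H)), np.zeros(D)   # decoder weights and bias
lam, lr = 1e-3, 0.01                  # sparsity weight and learning rate (made up)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_step(x):
    """One gradient-descent step on one input vector x."""
    z = sigmoid(We @ x + be)                  # encode into a sparse code
    x_hat = Wd @ z + bd                       # decode (linear reconstruction)
    loss = np.sum((x_hat - x) ** 2) + lam * np.sum(np.abs(z))
    # Backpropagation through decoder, then encoder
    d_xhat = 2.0 * (x_hat - x)
    dWd, dbd = np.outer(d_xhat, z), d_xhat
    dz = Wd.T @ d_xhat + lam * np.sign(z)
    da = dz * z * (1.0 - z)                   # derivative of the sigmoid
    dWe, dbe = np.outer(da, x), da
    for p, g in ((We, dWe), (be, dbe), (Wd, dWd), (bd, dbd)):
        p -= lr * g                           # in-place parameter update
    return loss

x = rng.random(D)                             # stand-in for one MNIST digit
for _ in range(100):
    loss = train_step(x)
print(loss)                                   # reconstruction + sparsity cost
```

Stacking a second encoder on top of the learned codes, as in the tutorial, gives the two-layer stacked sparse auto-encoder mentioned above.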

The tutorial slides (PiotrMirowski_2014_ReviewAutoEncoders) contain detailed explanations about learning and inference in auto-encoders, complete with Matlab code. They also mention some interesting applications of auto-encoders to text, such as Hinton and Salakhutdinov’s seminal paper on semantic hashing, Marc’Aurelio’s ICML 2008 paper on semi-supervised learning of auto-encoders for document retrieval, and our NIPS Deep Learning Workshop 2010 paper on Dynamic Auto-Encoders for Semantic Indexing. Finally, I mention the recent and very impressive work at Microsoft Research by the team of Li Deng, Xiaodong He and Jianfeng Gao, which departs from the auto-encoder paradigm to learn query and document embeddings from pairwise web clickthrough data using Deep Structured Semantic Models.

About me (2014)

I am a research scientist in deep learning and recently joined Google DeepMind on a quest towards artificial general intelligence.

I am passionate about deep learning, in particular recurrent neural networks, statistical language models and time series analysis – passions I acquired during my PhD at New York University under the guidance of neural network guru Yann LeCun. I particularly love to solve real-world problems and have focused my studies on several different projects, including epileptic seizure prediction from EEG and the inference of gene regulation networks. In my previous research appointment as a member of technical staff at Bell Labs (2011-2013, working with the amazing Tin Kam Ho, who, among other things, pioneered Random Decision Forests), I helped deliver an electric load forecasting solution to a power utility, hacked a Kinect-powered mobile robot that mapped WiFi signal strength (for indoor geo-localization) and coded Simultaneous Localization and Mapping from the pocket (reconstructing a 3D trajectory using only the sensors present on a smartphone). Prior to Google DeepMind, I worked as a data scientist and software engineer at Microsoft Bing in London (2013-2014), crunching big data problems and employing deep learning and natural language processing methods to enhance search query formulation.

I love backpacking around the world, opera, theatre and improv acting. Before moving to London, I was part of the New York-based volunteer improv comedy group Cherub Improv.