Contact: emtiyaz [at] gmail.com [or] emtiyaz.khan [at] riken.jp

I have multiple open positions for post-docs, research assistants, and interns in my team. Please email me if you are interested. You might also want to see this advert for more details on how to apply.

(Apr, 2018) I will give a course on Fundamentals of Machine Learning at Waseda University in April.

(Mar 19, 2018) I gave an invited talk at the Tokyo Deep Learning Workshop.

(Mar 8, 2018) I visited Srijith P. K. at IIT Hyderabad and gave a talk there.

(Feb 23, 2018) New paper on Variational Message Passing for Structured VAEs.

(Feb 20, 2018) New paper on Bayesian nonparametric Poisson-Process Allocation for Time-Sequence Modeling.

(Dec. 4, 2017) New paper on Vprop for variational inference using RMSprop's implementation.

(Dec. 1, 2017) Our new method CVI is implemented in the GPML toolbox. Thanks to Hannes Nickisch.

(Nov. 15, 2017) New paper on Variational Adaptive-Newton (VAN) method, a general-purpose optimization method.

I am an area chair for NIPS 2018, ICML 2018, and ACML 2018, and a reviewer for UAI 2018. In 2018, I have reviewed 25 papers so far. In 2017, I reviewed 54 papers!

*"My main goal is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings."*

My current focus is on algorithms that can mimic human learning by sequentially exploring and collecting experiences about the world. With these new algorithms, I aim to solve many existing issues in deep-learning methods, e.g., make them data-efficient and improve their convergence and robustness.

- Variational Message Passing for Structured VAEs (code here)
- Variational Adaptive-Newton (VAN): black-box optimization using exploration.
- Vprop for variational inference using RMSprop's implementation.
- Conjugate-Computation Variational Inference (CVI): natural-gradient descent for variational inference via mirror descent (code for logistic regression and GP classification, also in the GPML toolbox and for Correlated Topic Models).
- Applications: GPs for building design, Bayesian optimization with UAVs, automatic privacy permissions for smartphones, analysis of vote results, and preference learning.
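The key identity behind CVI can be sketched as follows (a simplified reading of the AI-stats 2017 paper; the notation here is mine, not the paper's exact statement). For an exponential-family approximation with natural parameters $\lambda$ and mean parameters $\mu$, a natural-gradient step on the variational lower bound $\mathcal{L}$ equals a plain gradient step in the mean parameters, which is a mirror-descent update:

```latex
\lambda_{t+1}
  \;=\; \lambda_t + \beta_t\,\widetilde{\nabla}_{\lambda}\mathcal{L}(\lambda_t)
  \;=\; \lambda_t + \beta_t\,\nabla_{\mu}\mathcal{L}(\mu_t).
```

For the conjugate parts of the model this gradient is available in closed form, so each step reduces to conjugate computations such as message passing; only the non-conjugate terms need stochastic approximation.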

Algorithms to improve SGD's performance in deep learning. First-order methods such as SGD converge slowly and require a huge amount of good-quality training data. In this project, we aim to improve their performance by using uncertainty-based exploration. This paper contains preliminary results for an exploration-based Newton method. A complete version of this paper will be available later in 2018.
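As a sketch of the exploration-based Newton idea (my simplification of the VAN update; symbols and step-size details may differ from the paper), maintain a Gaussian $\mathcal{N}(\mu_t, \Sigma_t)$ over the optimization variable, sample from it for exploration, and update the mean and precision with Newton-like quantities:

```latex
\theta_t \sim \mathcal{N}(\mu_t, \Sigma_t), \qquad
\Sigma_{t+1}^{-1} = (1-\beta_t)\,\Sigma_t^{-1} + \beta_t\,\nabla^2 f(\theta_t), \qquad
\mu_{t+1} = \mu_t - \beta_t\,\Sigma_{t+1}\,\nabla f(\theta_t).
```

The sampled evaluation point $\theta_t$ provides the uncertainty-based exploration, while $\Sigma$ plays the role of the Newton preconditioner.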

Scalable algorithms for Bayesian deep learning. Bayesian deep learning combines Bayesian inference with deep models, such as deep generative models and Bayesian neural networks. Our goal in this project is to design scalable algorithms that compute reliable yet quick uncertainty estimates. We propose Vprop to compute weight uncertainty in deep neural networks using RMSprop. Another work derives a fast variational message-passing algorithm for structured VAEs. Also see short paper1 and paper2 for preliminary results. More results are expected in 2018.
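To make the idea concrete, here is a minimal toy sketch of a Vprop-style update under my reading of the method: an RMSprop-like loop where the gradient is evaluated at perturbed weights and the preconditioner omits the square root, so the scale accumulator doubles as a posterior variance. The scalar model, data, and hyperparameters below are illustrative inventions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: fit the mean of N observations (hypothetical setup).
N = 100
data = rng.normal(loc=2.0, scale=1.0, size=N)

def grad(w):
    # Gradient of the average squared loss 0.5 * (w - y)^2 over the data.
    return w - data.mean()

mu, s = 0.0, 1.0                  # variational mean and scale accumulator
alpha, beta, lam = 0.1, 0.9, 1.0  # step size, decay, prior precision

for _ in range(500):
    sigma = 1.0 / np.sqrt(N * (s + lam))  # posterior standard deviation
    w = mu + sigma * rng.normal()         # sample a perturbed weight
    g = grad(w)
    s = (1 - beta) * s + beta * g ** 2    # RMSprop-style moving average
    mu = mu - alpha * (g + lam * mu / N) / (s + lam)  # note: no sqrt
```

After training, `mu` approximates the loss minimizer and `sigma` gives a weight uncertainty essentially for free, since the loop reuses RMSprop's squared-gradient statistics.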

Variational inference for large and complex models. Our goal is to build scalable algorithms for large and complex Bayesian models, e.g., spatio-temporal models. We hope to build software tools that run easily on GPU clusters, helping practitioners use such models on large data. Our methods are based on this AI-stats 2017 paper. A long version of this paper will be available around summer 2018.

Sequential learning by exploration. Humans can efficiently explore their surroundings and collect relevant experiences to improve their understanding of the world. We aim to design machines that can mimic this process. Our eventual goal is to learn deep structured models through sequential exploration, using ideas from Bayesian inference and reinforcement learning. This is a long-term project and we expect to deliver some results by the end of 2018.

[application] Machine learning for the design of high-performance buildings. This project aims to explore the application of machine learning to the design, renovation, and operation of high-performance or sustainable buildings. Buildings are complex, dynamic systems that usually operate under conditions that are hard to predict and control. However, a huge amount of knowledge about building performance is encoded in physical simulators (based on PDEs), and even though simulators alone may not yield accurate predictions, we would like to exploit this knowledge. This project aims to improve the performance of simulators using generative models, for example by generating realistic weather patterns for simulations. See this paper for a first set of results, where we used GPs to predict the outputs of simulators. A journal version is under submission and should be available by early 2018.
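As a toy illustration of the GP-surrogate idea (the simulator, kernel length-scale, and design variable below are invented stand-ins, not the models from the paper): fit a GP to a handful of expensive simulator runs, then query its posterior mean cheaply across the whole design space.

```python
import numpy as np

def rbf(x1, x2, ell=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def simulator(x):
    # Stand-in for an expensive PDE-based building simulation.
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 10)            # a few expensive simulator runs
y = simulator(X)

K = rbf(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
Xs = np.linspace(0.0, 2.0, 50)           # cheap-to-query design candidates
mean = rbf(Xs, X) @ np.linalg.solve(K, y)  # GP posterior mean (noise-free)
```

With only ten simulator calls, the surrogate's posterior mean tracks the simulator closely across the design range, which is what makes GP surrogates attractive for design loops.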

[application] Automating Data Science. This project aims to develop machine-learning algorithms that help with data analysis for scientific research. Such research usually involves generating hypotheses that can be tested automatically using the data at hand. Unfortunately, both the scientists carrying out the analysis and the algorithms used for the analysis can be biased and wrong (although in different ways). This project aims to combine the strengths of a human data analyst with those of the algorithm to improve the overall outcome. This is a long-term project and we expect some outcomes in the next two years.

- Didrik Nielsen (Research Assistant)
- Si Kai Lee (Research Assistant)
- Dr. Parag Rastogi (Visiting Scientist, University of Strathclyde)
- Hanna Tseran (Intern from University of Tokyo).
- Aaron Mishkin (Intern from UBC).
- Frederik Kunster (Intern from EPFL).

- Wu Lin (Now PhD student in UBC)
- Nicolas Hubacher (Past Research Assistant)
- Zuozhu Liu (Intern from SUTD during June-Dec 2017).
- Vaden Masrani (Intern from UBC during May-Oct 2017)
- Salma El Aloui (Intern from École Polytechnique during Jun-Sep 2017)
- Kimia Nadjahi (Intern from ENS Cachan during May-Sep 2017)
- Arnaud Robert (Intern from EPFL during Oct 2016 to April 2017)
- Heiko Strathmann (from UCL)

- Yarin Gal (from Oxford University between Feb. 12-18, 2018)
- Akash Srivastava (from University of Edinburgh, between Jan. 28 - Feb 14, 2018)
- Mark Schmidt (from UBC between Aug 14-25, 2017)
- Yarin Gal (from Cambridge University between Aug. 17-25, 2017)
- Mauricio A Alvarez (from University of Sheffield, between Nov. 18-25, 2017)
- Thang Bui (from University of Cambridge between Oct. 2-20, 2017)
- Prof. Håvard Rue (from KAUST, June 19-25, 2017)
- Prof. Arnaud Doucet (from Oxford University on April 26, 2017)
- Prof. Marco Cuturi (from Ecole des Mines, France on April 18, 2017)
- Dr. Parag Rastogi (from Glasgow in March 2017)
- Heiko Strathmann (from UCL in March 2017)
- Maja Rudolph (in March 2017 from Columbia University)

- (April, 2018) Fundamentals of Machine Learning at Waseda University (8 lectures).
- (April 19, 2017) A lecture on Modern Methods for Approximate Bayesian Inference for the course "Special topics in mechano-informatics" at the University of Tokyo. Download slides and their annotated version.
- (Jan 20, 2017) A lecture on Fundamentals of Machine Learning in "Lecture on computer science" at the University of Tokyo.
- (Fall, 2015) Pattern Classification and Machine Learning, EPFL.
- (Aug. 25, 2015) Short course on "Fundamentals of ML" in Zurich.
- (Fall, 2014) Pattern Classification and Machine Learning, EPFL.

- (Dec 2017) In 2017, I was an area chair for NIPS 2017, a reviewer for ICLR-2018, AAAI-2018, ICML-2017, and UAI-2017, and an action editor for JMLR. I reviewed a total of 54 papers in 2017!
- (Nov 18, 2017) Mauricio A Alvarez visited from University of Sheffield, between Nov. 18-25, 2017.
- (Oct 2, 2017) Thang Bui visited from University of Cambridge between Oct. 2-20, 2017.
- (Aug 17, 2017) Mark Schmidt (UBC) and Yarin Gal (Cambridge University) visited my group.
- (Aug 4, 2017) New paper on Structured Inference Networks for structured deep models at the ICML DeepStruct workshop.
- (July 24, 2017) I will give a talk at ERATO in Tokyo on Aug. 3, 2017.
- (July 1, 2017) I will give a talk at ATR in Kyoto on July 10, 2017.
- (June 25, 2017) Salma El Aloui (from École Polytechnique) and Zuozhu Liu (from Singapore University of Technology and Design) join as interns.
- (June 19, 2017) Prof. Håvard Rue from KAUST is visiting from June 19-25.
- (Apr 19, 2017) I gave a lecture at the University of Tokyo on Modern Approximate Bayesian Inference Methods. Download slides and their annotated version.
- (Apr 18, 2017) Prof. Marco Cuturi and Prof. Shun-ichi Amari visited AIP and gave talks about their work on Wasserstein distance.
- (Apr 17, 2017) Vaden Masrani (from UBC) and Kimia Nadjahi (from ENS Cachan) joined as interns in my group.
- (Mar 24, 2017) Heiko Strathmann from UCL is visiting from March 24-31, 2017.
- (Feb 27, 2017) Maja Rudolph from Columbia University will visit AIP in March 2017.
- (Feb 22, 2017) New talk at the PGM Workshop 2017 at ISM about "Conjugate-Computation Variational Inference".
- (Jan 2017) I presented a poster at the Winter-Festa (YouTube link and "hand-made" poster).
- (Dec. 2016) New Paper at Bayesian Deep Learning workshop in NIPS 2016 for inference in Deep Exp-Family Models.
- (Oct-2016) I became a Team-Leader in Tokyo at RIKEN's newly established Center for Advanced Intelligence Project (AIP).
- (Aug-2016) A new paper at DSAA, 2016.
- (20-Dec-2015) A new paper at UAI, 2016.
- (10-Dec-2015) I got the teaching award for 2015!
- (06-Dec-2015) I am at NIPS 2015.
- (23-Oct-2015) Talk at Amazon, Berlin.
- (20-Oct-2015) Talk at TU, Berlin.
- (11-Oct-2015) I am a reviewer for AI-Stats 2016.
- (28-Sep-2015) I gave a talk at the theory seminar in EPFL about my research.
- (18-Sep-2015) I have a new paper in NIPS 2015 on "KL Proximal Variational Inference".
- (10-Sep-2015) I visited Frank Hutter in Freiburg and gave a talk there.
- (07-Sep-2015) I gave a talk about my work at NTNU, Norway.
- (25-Aug-2015) I offered a short course on "Fundamentals of ML" on August 25, 2015 in Zurich. More than 200 people registered and around 120 attended.
- (07-Aug-2015) Course webpage for PCML is available.
- (15-Jul-2015) I attended ICML 2015 in Lille.
- (30-May-2015) I visited Masashi Sugiyama's lab at the University of Tokyo from March-May, 2015.
- (Apr-2015) I am an area chair for NIPS 2015.
- (Feb-2015) I am now a 'scientific collaborator' at EPFL.
- (Dec-2014) I have a new paper at NIPS 2014. Unfortunately, I couldn't attend due to visa issues.
- (Dec-2014) I taught Pattern Classification and Machine Learning at EPFL from Sep 2014 to Feb 2015. The course had a total of 190 Master-level students and received a median rating of 5 out of 6. Young-Jun presented our paper at ACML 2014.
- (Aug-2014) I gave a talk in Shogun-workshop in July 2014 (video link).
- (May-2014) I presented our paper in AI-Stats-2014 (video link).
- (May-2014) I mentored a project on variational inference for Google-Summer-of-Code-2014, along with Heiko Strathmann. Check out the Notebook outlining the project for Shogun toolbox.
- (Feb 2014) I joined as a post-doc with Matthias Grossglauser at LCA lab in EPFL.
- (Sep-2013) I gave an invited talk at the LGM-2013 workshop in Iceland.
- (Nov 2012) I joined as a post-doc with Matthias Seeger at LAPMAL lab in EPFL.
- (11 Mar 2012) Invited talks at EPFL, XRCE, and INRIA-SIERRA [ slides ].
- (08 Feb 2012) A tutorial report on Approximate message passing from my talk on DNOISE.
- (29 Sep 2011) Talk at Microsoft Research, Redmond [ video ] [ slides ]
- (29 Jun 2011) Talk at ICML 2011 [ video ] [ slides ]
- (22 Apr 2011) Derivation of an EM algorithm for Latent Gaussian Model with Gaussian Likelihood [ pdf ]
- (14 Sep 2009) Derivation of Variational EM algorithm for Correlated Topic Model [ pdf ]
- (25 Feb 2009) Derivation of Gaussian likelihood with Gaussian prior on mean [ pdf ]
- (29 Jan 2009) A note on empirical Bayes estimate of Covariance for Multivariate Normal Distribution [ pdf ]
- (24 Dec 2008) Tech report on Bayesian search algorithms for decomposable Gaussian graphical model [ pdf ]
- (27 Feb 2008) Updating Inverse of a Matrix when a Column is added/removed [ pdf ] [ code ]
- (25 Feb 2008) Talk on Kalman filter and demo code [ Slides ] [ Demo ]
- (25 Feb 2008) Notes on information filter [ pdf ]
- (30 Oct 2007) Presentation on Variational Bayes and Message passing at Machine learning Reading Group [ slides ]
- (02 Oct 2007) A note on Exchangeability, Pólya's Urn, and de Finetti's Theorem [ pdf ]
- (28 Sep 2007) Linear Algebra Tutorial [ Outline ] [ slides ]
- (18 Sep 2007) Probability Tutorial [ Outline ] [ Slides ]
- (14 June 2007) Talk on Brain-Computer Interface, *CIFAR Time-series Workshop, Toronto* [ slides ]
- (18 May 2007) Talk on Signal Compression and JPEG, *UDLS* [ Abstract ] [ slides ]
- (April 2007) Compressed Sensing, Compressed Classification and Joint Signal Recovery, *Machine Learning course project* [ pdf ]
- (April 2007) Gibbs Sampling for the Probit Regression Model with Gaussian Markov Random Field Latent Variables, *Statistical Computation course project* [ pdf ] [ slides ]
- (26 Jan 2007) Talk on "Introduction to probability theory", *UDLS* [ slides ] [ Abstract ]
- (Dec 2007) Game theory models for Pursuit-evasion games, *Multi-agent systems course project* [ pdf ]
- (Dec 2007) An incremental deployment algorithm for mobile sensors, *Optimization course project* [ pdf ]