
Emtiyaz Khan

I am a team leader (equivalent to Full Professor) at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where I lead the Approximate Bayesian Inference (ABI) Team. Since April 2018, I have been a visiting professor at the EE department of the Tokyo University of Agriculture and Technology (TUAT), as well as a part-time lecturer at Waseda University. I am an Action Editor for the Journal of Machine Learning Research (JMLR). From 2014 to 2016, I was a scientist at EPFL in Matthias Grossglauser's lab. During my time at EPFL, I taught two large machine learning courses, for which I received a teaching award. I first joined EPFL as a post-doc with Matthias Seeger in 2013, and before that I finished my PhD at UBC in 2012 under the supervision of Kevin Murphy.

Contact: emtiyaz [at] gmail.com [or] emtiyaz.khan [at] riken.jp



We have many Open positions.

(July 16,17,18, 2018) Talks at University of Cambridge, University of Oxford, and DeepMind London [ Slides ]

(July 12, 2018) Talks at ICML 2018 [ Slides ]

(July 5, 2018) Talk at TU Berlin in Klaus-Robert Müller's group on Natural-Gradient Variational Inference [ Slides ]

(July 2, 2018) Talk at DTU Copenhagen in Ole Winther's group on Natural-Gradient Variational Inference [ Slides ]

(June 28-29, 2018) A 2-day tutorial on Approximate Bayesian Inference at the DS3 workshop.

(June 27, 2018) Talk at ENSAE Paris on Natural-Gradient Variational Inference.

(June 23, 2018) A lecture on Approximate Bayesian Inference at Waseda University.

(June 6, 2018) A lecture on Bayesian Deep Learning at University of Tokyo.

(Apr, 2018) I am teaching a course on Fundamentals of Machine Learning at Waseda University.

(Mar 19, 2018) I gave an invited talk at the Tokyo Deep Learning Workshop.

I am an area chair for NIPS 2018, ICML 2018, and ACML 2018, and a reviewer for UAI 2018. In 2018, I have reviewed 25 papers so far. In 2017, I reviewed 54 papers!


"My main goal is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings."

My current focus is on algorithms that can mimic human learning by sequentially exploring and collecting experiences about the world. With these new algorithms, I aim to address several existing issues in deep learning, e.g., making models more data-efficient and improving their convergence rate and robustness.

Research Highlights

Current projects

Algorithms to improve SGD's performance in deep learning. First-order methods such as SGD converge slowly and require a huge amount of good-quality data for training. In this project, we aim to improve their performance by using uncertainty-based exploration. This paper contains preliminary results for an exploration-based Newton method. A complete version of this paper will be available later in 2018.
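The motivation for going beyond first-order updates can be seen on a toy quadratic with an ill-conditioned Hessian; this is a hypothetical illustration of curvature scaling, not the method from the paper:

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 * w^T H w with an ill-conditioned Hessian.
H = np.diag([100.0, 1.0])

def grad(w):
    return H @ w

w_gd = np.array([1.0, 1.0])
w_newton = np.array([1.0, 1.0])
lr = 0.009  # plain gradient descent is unstable here if lr >= 2/100

for _ in range(100):
    w_gd = w_gd - lr * grad(w_gd)
    # Diagonal Newton step: rescale each coordinate by its own curvature.
    w_newton = w_newton - grad(w_newton) / np.diag(H)

print(np.linalg.norm(w_gd), np.linalg.norm(w_newton))
```

The flat direction forces a tiny step size on gradient descent, so it is still far from the optimum after 100 steps, while the curvature-scaled update solves this quadratic in one step.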

Scalable algorithms for Bayesian deep learning. Bayesian deep learning combines ideas from Bayesian inference with deep models such as Bayesian neural networks. Our goal in this project is to design scalable algorithms that compute reliable but quick uncertainty estimates. We propose Vprop to compute weight uncertainty in deep neural networks using RMSprop. Another work derives a fast variational message passing algorithm for structured VAEs. Also, see short paper1 and paper2 for preliminary results. More results are expected in 2018.
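A loose sketch of the idea behind Vprop on a toy regression problem (the variable names, hyperparameters, and simplifications here are mine, not the paper's exact algorithm): an RMSprop-style vector of squared gradients can double as a precision estimate, so the gradient is evaluated at weights perturbed with matching Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data; the true weights are [1.0, -2.0].
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=200)
N = len(y)

mu = np.zeros(2)   # variational mean of q(w)
s = np.ones(2)     # RMSprop statistic, reused as a precision proxy
lr, beta, lam = 0.1, 0.9, 1.0

for _ in range(500):
    sigma = 1.0 / np.sqrt(N * (s + lam))   # per-weight posterior std. dev.
    w = mu + sigma * rng.normal(size=2)    # sample weights from q(w)
    g = X.T @ (X @ w - y) / N              # gradient of the squared loss
    s = beta * s + (1 - beta) * g**2       # RMSprop second-moment update
    mu -= lr * (g + lam * mu / N) / (np.sqrt(s) + lam)

print(mu)  # mean estimate near the true weights [1.0, -2.0]
```

The appeal of this style of algorithm is that it reuses quantities an optimizer like RMSprop already tracks, so uncertainty comes at little extra cost per step.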

Variational inference for large and complex models. Our goal is to build scalable algorithms for large and complex Bayesian models, e.g., spatio-temporal models. We hope to build software tools that run easily on GPU clusters, thereby helping practitioners use such models for large data. Our methods are based on this AI-stats 2017 paper. A long version of this paper will be available around summer of 2018.

Sequential learning by exploration. Humans can efficiently explore their surroundings and collect relevant experiences to improve their understanding of the world. We aim to design machines that can mimic this process. Our eventual goal is to learn deep structured models through sequential exploration, using ideas from Bayesian inference and reinforcement learning. This is a long-term project and we expect to deliver some results by the end of 2018.

[application] Machine learning for the design of high-performance buildings. This project explores the application of machine learning to the design, renovation, and operation of high-performance or sustainable buildings. Buildings are complex, dynamic systems that usually operate under conditions that are hard to both predict and control. However, a huge amount of knowledge about building performance is encoded in physical simulators (based on PDEs), and even though simulators alone may not lead to accurate predictions, we would like to exploit this knowledge. This project aims to improve the performance of simulators by using generative models, for example, by generating realistic weather patterns for simulations. See this paper for the first set of results, where we used GPs to predict outputs of simulators. A journal version is under submission and should be available by early 2018.
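As a rough illustration of the GP-emulator idea (the toy "simulator", kernel, and lengthscale below are invented for this example and are not the paper's setup): a handful of expensive simulator runs can be used to fit a GP that gives cheap predictions, with uncertainty, everywhere else.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def simulator(x):
    # Stand-in for an expensive PDE-based building simulator.
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 2, 8)     # a handful of simulator runs
y = simulator(X)

Xs = np.linspace(0, 2, 50)   # inputs where we want cheap predictions
K = rbf(X, X) + 1e-6 * np.eye(len(X))
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                            # posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior variance
```

The posterior variance shrinks near the observed runs and grows between them, which is exactly the signal one would use to decide where the next simulator run is most informative.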

[application] Automating Data Science. This project aims to develop machine learning algorithms that help data analysis for scientific research. Such research usually involves generating hypotheses that can be tested automatically using the data at hand. Unfortunately, both the scientists carrying out the analysis and the algorithms used for the analysis can be biased and wrong (although in different ways). This project aims to combine the strengths of the human data analyst with those of the algorithm to improve the overall outcome. This is a long-term project and we expect some outcomes in the next two years.