
Emtiyaz Khan

I am a team leader (equivalent to Full Professor) at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where I lead the Approximate Bayesian Inference (ABI) Team. Since April 2018, I have also been a visiting professor in the EE department at the Tokyo University of Agriculture and Technology (TUAT). I am an Action Editor for the Journal of Machine Learning Research (JMLR). From 2014 to 2016, I was a scientist at EPFL in Matthias Grossglauser's lab, where I taught two large machine learning courses and received a teaching award. I first joined EPFL as a post-doc with Matthias Seeger in 2013, after finishing my PhD at UBC in 2012 under the supervision of Kevin Murphy.

emtiyaz [at] gmail.com [or] emtiyaz.khan [at] riken.jp




"My main goal is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings."

My current focus is on sequential learning and exploration. I work on problems in several areas of machine learning, such as approximate inference, deep learning, reinforcement learning, active learning, online learning, and reasoning in computer vision. My recent work draws on ideas from a wide range of fields, including optimization, Bayesian statistics, information geometry, signal processing, and control systems.

Research Highlights

Current projects (slightly outdated)

Algorithms to improve SGD's performance in deep learning. First-order methods such as SGD converge slowly and require a huge amount of good-quality data for training. In this project, we aim to improve their performance by using uncertainty-based exploration. This paper contains preliminary results for an exploration-based Newton method. A complete version of the paper will be available later in 2018.

Scalable algorithms for Bayesian deep learning. Bayesian deep learning combines ideas from Bayesian inference with deep models such as Bayesian neural networks. Our goal in this project is to design scalable algorithms that compute reliable yet quick uncertainty estimates. We propose Vprop and Variational Adam to compute weight uncertainty in deep neural networks using RMSprop and Adam, respectively. Another work derives a fast variational message-passing algorithm for structured VAEs.
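To illustrate the weight-perturbation idea behind such methods, here is a toy sketch (not the algorithm from the papers; the quadratic loss, dataset size, and all hyperparameters are illustrative assumptions): an Adam-style update where the second-moment estimate also supplies a Gaussian uncertainty over the weights.

```python
import numpy as np

# Toy sketch of a variational, Adam-style update: weights are sampled from a
# Gaussian whose variance is derived from the second-moment estimate, so the
# optimizer's own statistics double as a weight-uncertainty estimate.
# The loss (w - 3)^2 and all constants below are illustrative assumptions.

rng = np.random.default_rng(0)
N = 100             # assumed dataset size (scales the precision)
lam = 1.0           # assumed prior precision
beta1, beta2, lr = 0.9, 0.999, 0.1

grad = lambda w: 2.0 * (w - 3.0)   # gradient of the toy loss (w - 3)^2

mu, m, s = 0.0, 0.0, 0.0           # variational mean, 1st and 2nd moments
for t in range(1, 201):
    sigma = 1.0 / np.sqrt(N * (s + lam))      # weight std. dev. from 2nd moment
    w = mu + sigma * rng.standard_normal()    # sample weights, then differentiate
    g = grad(w) + lam * mu / N                # data gradient plus prior term
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)              # standard Adam bias corrections
    s_hat = s / (1 - beta2 ** t)
    mu -= lr * m_hat / (np.sqrt(s_hat) + lam / N)

print(mu)   # the mean should settle near the loss minimizer w = 3
```

Because the gradients are evaluated at sampled weights, the iterates both optimize the mean and maintain a spread around it, which is the kind of "quick uncertainty estimate" this project targets.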

Variational inference for large and complex models. Our goal is to build scalable inference algorithms for large and complex Bayesian models, e.g., spatio-temporal models. We hope to build software tools that run easily on GPU clusters, helping practitioners apply such models to large data. Our methods build on this AISTATS 2017 paper. A short review paper summarizing the results was published at ISITA 2018. A long version of the paper will be available later in 2018.

Sequential learning by exploration. Humans can efficiently explore their surroundings and collect relevant experiences to improve their understanding of the world. We aim to design machines that mimic this process. Our eventual goal is to learn deep structured models through sequential exploration, using ideas from Bayesian inference and reinforcement learning. This is a long-term project, and we expect to deliver some results by the end of 2018.

[application] Machine learning for the design of high-performance buildings. This project explores the application of machine learning to the design, renovation, and operation of high-performance or sustainable buildings. Buildings are complex, dynamic systems that usually operate under conditions that are hard to predict and control. However, a huge amount of knowledge about building performance is encoded in physical simulators (based on PDEs), and even though simulators alone may not give accurate predictions, we would like to exploit this knowledge. This project aims to improve the performance of simulators by using generative models, for example, by generating realistic weather patterns for simulations. See this paper for the first set of results, where we used GPs to predict the outputs of simulators. A journal version is under submission and should be available by early 2018.
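The GP-as-emulator idea can be sketched in a few lines (a toy 1-D "simulator" with made-up kernel hyperparameters, not the setup from the paper): fit a GP to a handful of simulator runs, then predict a new run with uncertainty.

```python
import numpy as np

# Minimal GP-regression sketch for emulating an expensive simulator.
# The "simulator" here is just sin(x); the kernel lengthscale, signal
# variance, and noise level are illustrative assumptions.

def rbf(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input sets."""
    d = X1[:, None] - X2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

simulate = lambda x: np.sin(x)      # stand-in for a PDE-based simulator
X = np.linspace(0.0, 5.0, 8)        # a few training designs (simulator runs)
y = simulate(X)

K = rbf(X, X) + 1e-4 * np.eye(len(X))   # small jitter for numerical stability
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.array([2.5])                # a new, un-simulated design
ks = rbf(X, Xs)
mean = (ks.T @ alpha)[0]            # GP posterior mean: cheap surrogate output
v = np.linalg.solve(L, ks)
var = (rbf(Xs, Xs) - v.T @ v)[0, 0] # predictive variance: emulator uncertainty

print(mean, var)
```

The predictive variance is what distinguishes the GP surrogate from a plain regressor: it flags designs where the emulator should defer back to the expensive simulator.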

[application] Automating Data Science. This project aims to develop machine learning algorithms that support data analysis for scientific research. Such research usually involves generating hypotheses that can be tested automatically on the data at hand. Unfortunately, both the scientists carrying out the analysis and the algorithms used for it can be biased and wrong (although in different ways). This project aims to combine the strengths of human data analysts with those of algorithms to improve the overall outcome. This is a long-term project, and we expect some outcomes within the next two years.


I am an Area Chair for NeurIPS 2019 and ICML 2019 (7 papers), and a reviewer for ICASSP 2019. In 2018, I reviewed around 70 papers; in 2017, I reviewed 54.