I have multiple open positions for post-docs, research assistants, and interns in my team. Please email me if you are interested. You might also want to see this advert for more details on how to apply.
(Mar 19, 2018) I gave an invited talk at the Tokyo Deep Learning Workshop.
(Feb 23, 2018) New paper on Variational Message Passing for Structured VAEs.
(Feb 20, 2018) New paper on Bayesian nonparametric Poisson-Process Allocation for Time-Sequence Modeling.
(Dec. 4, 2017) New paper on Vprop, a variational inference method that can be implemented with RMSprop.
(Nov. 15, 2017) New paper on Variational Adaptive-Newton (VAN) method, a general-purpose optimization method.
I am an area chair for NIPS 2018, ICML 2018, and ACML 2018, and a reviewer for UAI 2018. In 2018, I have reviewed 25 papers so far. In 2017, I reviewed 54 papers!
"My main goal is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings."
My current focus is on algorithms that mimic human learning by sequentially exploring the world and collecting experiences about it. With these new algorithms, I aim to address many existing issues in deep learning, e.g., to make models more data-efficient and to improve their convergence rate and robustness.
Algorithms to improve SGD's performance in deep learning. First-order methods such as SGD converge slowly and require a huge amount of good-quality training data. In this project, we aim to improve their performance by using uncertainty-based exploration. This paper contains preliminary results for an exploration-based Newton method. A complete version of this paper will be available later in 2018.
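As a rough illustration of how uncertainty can drive exploration in a Newton-style method, here is a minimal sketch (my own simplification for this page, not the algorithm from the paper): the gradient and curvature are evaluated at a sample drawn from a Gaussian around the current iterate, so the learned precision controls how far each step explores. All names and hyperparameters below are illustrative.

```python
import numpy as np

def van_step(mu, prec, grad_fn, hess_fn, lr=0.1, beta=0.3, rng=None):
    """One step of a VAN-style update (illustrative sketch only).

    The search direction is a Newton-like step, but the gradient and the
    Hessian are evaluated at a *sample* from the current Gaussian
    N(mu, 1/prec), so the precision controls how far the iterate explores.
    """
    rng = rng or np.random.default_rng()
    w = mu + rng.standard_normal(mu.shape) / np.sqrt(prec)  # explore around the mean
    prec = (1 - beta) * prec + beta * hess_fn(w)            # running curvature estimate
    mu = mu - lr * grad_fn(w) / prec                        # preconditioned (Newton-like) step
    return mu, prec
```

On a simple quadratic objective, the precision converges to the curvature while the mean oscillates around the minimizer, which is the behavior that makes the sampled iterates a cheap form of exploration.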
Scalable algorithms for Bayesian deep learning. Bayesian deep learning combines Bayesian inference with deep models, such as Bayesian neural networks and deep generative models. Our goal in this project is to design scalable algorithms that compute reliable yet quick uncertainty estimates. We propose Vprop to compute weight uncertainty in deep neural networks using RMSprop. Another work derives a fast variational message-passing algorithm for structured VAEs. Also, see short paper1 and paper2 for preliminary results. More results are expected in 2018.
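To give a flavor of how weight uncertainty can be obtained from an RMSprop-style update, here is a minimal sketch inspired by the Vprop idea: the gradient is evaluated at perturbed weights, and the RMSprop scale vector doubles as a precision estimate. The exact form of the update and all hyperparameters below are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def vprop_step(mu, s, grad_fn, N, lr=0.01, beta=0.1, lam=1.0, rng=None):
    """One Vprop-style update (illustrative sketch only).

    mu: mean of the Gaussian over the weights
    s:  scale vector (running average of squared gradients)
    grad_fn: returns the minibatch gradient of the loss at a weight sample
    N: dataset size (controls the variance of the perturbation)
    """
    rng = rng or np.random.default_rng()
    sigma2 = 1.0 / (N * (s + lam))          # weight variance from the scale vector
    w = mu + np.sqrt(sigma2) * rng.standard_normal(mu.shape)  # perturbed weights
    g = grad_fn(w)                          # gradient at the sampled weights
    s = (1 - beta) * s + beta * g**2        # RMSprop-style scale update
    mu = mu - lr * (g + lam * mu / N) / (s + lam)  # note: no square root, unlike RMSprop
    return mu, s
```

The appeal of this style of method is that uncertainty comes almost for free: the same quantities RMSprop already tracks are reinterpreted as the parameters of a Gaussian over the weights.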
Variational inference for large and complex models. Our goal is to build scalable algorithms for large and complex Bayesian models, e.g., spatio-temporal models. We hope to build software tools that run easily on GPU clusters, helping practitioners apply such models to large datasets. Our methods are based on this AI-stats 2017 paper. A longer version of this paper will be available around summer 2018.
Sequential learning by exploration. Humans can efficiently explore their surroundings and collect relevant experiences to improve their understanding of the world. We aim to design machines that mimic this process. Our eventual goal is to learn deep structured models through sequential exploration, drawing on ideas from Bayesian inference and reinforcement learning. This is a long-term project, and we expect to deliver some results by the end of 2018.
[application] Machine learning for the design of high-performance buildings. This project aims to explore the application of machine learning to the design, renovation, and operation of high-performance or sustainable buildings. Buildings are complex, dynamic systems that usually operate under conditions that are hard to both predict and control. However, a huge amount of knowledge about building performance is encoded in physical simulators (based on PDEs), and even though simulators alone may not yield accurate predictions, we would like to exploit this knowledge. This project aims to improve the performance of simulators by using generative models, for example by generating realistic weather patterns for simulations. See this paper for the first set of results, where we used GPs to predict the outputs of simulators. A journal version is under submission and should be available by early 2018.
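As a toy illustration of using a GP to emulate a simulator, here is a minimal GP-regression surrogate in plain NumPy: it is fit to a handful of "simulator runs" and then predicts outputs, with uncertainty, at new inputs. The kernel, the noise level, and the one-dimensional setup are illustrative assumptions, not the configuration from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-4, **kern_args):
    """Posterior mean and variance of a GP surrogate fit to simulator runs."""
    K = rbf_kernel(X_train, X_train, **kern_args) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, **kern_args)
    alpha = np.linalg.solve(K, y_train)           # K^{-1} y
    mean = K_s @ alpha                            # posterior mean at the test inputs
    v = np.linalg.solve(K, K_s.T)
    var = rbf_kernel(X_test, X_test, **kern_args).diagonal() - np.sum(K_s * v.T, axis=1)
    return mean, var
```

Once trained on a small number of expensive simulator evaluations, such a surrogate is cheap to query, and its predictive variance indicates where additional simulator runs would be most informative.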
[application] Automating Data Science. This project aims to develop machine-learning algorithms that support data analysis in scientific research. Such research usually involves generating hypotheses that can be tested automatically against the data at hand. Unfortunately, both the scientists carrying out the analysis and the algorithms used for it can be biased and wrong (although in different ways). This project aims to combine the strengths of human analysts with those of algorithms to improve the overall outcome. This is a long-term project, and we expect some outcomes in the next two years.