(Sep. 5, 2018) Talk at the UK-Japan AI Workshop at the UK Embassy in Tokyo [ Slides ]
(July 24, 2018) Talk at the First conference on Discrete Optimization and Machine Learning in Tokyo [ Slides ]
(July 16,17,18, 2018) Talks at University of Cambridge, University of Oxford, and DeepMind London [ Slides ]
(July 12, 2018) Talk at ICML 2018 [ Slides ]
(July 5, 2018) Talk at TU Berlin in Klaus-Robert Müller's group on Natural-Gradient Variational Inference [ Slides ]
(July 2, 2018) Talk at DTU Copenhagen in Ole Winther's group on Natural-Gradient Variational Inference [ Slides ]
I am an area chair for AI-Stats 2019, ICLR 2019, NIPS 2018, ICML 2018, and ACML 2018, and a reviewer for UAI 2018. In 2018, I have reviewed 45 papers so far. In 2017, I reviewed 54 papers!
"My main goal is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings."
My current focus is on algorithms that mimic human learning by sequentially exploring the world and collecting experiences about it. With these new algorithms, I aim to address many existing issues in deep learning, e.g., making deep-learning methods more data-efficient and improving their convergence rate and robustness.
Algorithms to improve SGD's performance in deep learning. First-order methods such as SGD converge slowly and require a huge amount of good-quality training data. In this project, we aim to improve their performance using uncertainty-based exploration. This paper contains preliminary results for an exploration-based Newton method. A complete version of this paper will be available later in 2018.
Scalable algorithms for Bayesian deep learning. Bayesian deep learning combines ideas from Bayesian inference with deep-learning models such as Bayesian neural networks. Our goal in this project is to design scalable algorithms that compute reliable yet fast uncertainty estimates. We propose Vprop and Variational Adam to compute weight uncertainty in deep neural networks using RMSprop and Adam, respectively. Another work derives a fast variational message-passing algorithm for structured VAEs.
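To give a flavor of the weight-perturbation idea behind these methods, here is a minimal numpy sketch of a variational-Adam-style update on a toy quadratic loss. All function names, hyperparameter defaults, and the exact form of the prior term are illustrative assumptions for this sketch, not the paper's code:

```python
import numpy as np

def vadam_step(mu, s, m, grad_fn, t, N=100, lam=1.0,
               lr=0.01, beta1=0.9, beta2=0.999, rng=np.random):
    """One weight-perturbation step in the style of variational Adam.

    mu: posterior mean of the weights; s: squared-gradient scale that
    defines the posterior variance 1/(N*(s + lam)); m: momentum.
    All names and defaults here are illustrative, not the paper's code.
    """
    sigma = 1.0 / np.sqrt(N * (s + lam))            # posterior std. dev.
    w = mu + sigma * rng.standard_normal(mu.shape)  # sample perturbed weights
    g_raw = grad_fn(w)                              # minibatch-style gradient
    g = g_raw + lam * mu / N                        # add a Gaussian-prior term
    m = beta1 * m + (1 - beta1) * g                 # Adam-style momentum
    s = beta2 * s + (1 - beta2) * g_raw ** 2        # Adam-style second moment
    m_hat = m / (1 - beta1 ** t)                    # bias corrections
    s_hat = s / (1 - beta2 ** t)
    mu = mu - lr * m_hat / (np.sqrt(s_hat) + lam / N)
    return mu, s, m

# usage: minimize a toy quadratic; the mean should approach its minimizer
grad = lambda w: 2 * (w - 3.0)
mu, s, m = np.zeros(2), np.ones(2), np.zeros(2)
for t in range(1, 2001):
    mu, s, m = vadam_step(mu, s, m, grad, t)
```

The point of the sketch is that the update reuses Adam's momentum and second-moment statistics, so weight uncertainty (via `sigma`) comes almost for free on top of a standard optimizer loop.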
Variational inference for large and complex models. Our goal is to build scalable algorithms for large and complex Bayesian models, e.g., spatio-temporal models. We hope to build software tools that run easily on GPU clusters, thereby helping practitioners use such models on large data. Our methods are based on this AI-Stats 2017 paper. A short review paper summarizing the results was published at ISITA 2018. A long version of this paper will be available later in 2018.
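To give a flavor of the natural-gradient updates these methods build on, here is a minimal 1-D sketch that fits a Gaussian approximation q = N(m, 1/p) by stepping the precision and mean with Monte-Carlo estimates of the log-joint's derivatives. The (mean, precision) form, step sizes, and function names are illustrative assumptions, not the paper's algorithm verbatim:

```python
import numpy as np

def ngvi_fit(grad_f, hess_f, m0=0.0, p0=1.0, beta=0.1,
             iters=500, n_mc=10, seed=0):
    """Natural-gradient VI sketch for a 1-D Gaussian q = N(m, 1/p).

    grad_f / hess_f: first and second derivatives of the log-joint.
    This is an illustrative sketch, not a library implementation.
    """
    rng = np.random.default_rng(seed)
    m, p = m0, p0
    for _ in range(iters):
        z = m + rng.standard_normal(n_mc) / np.sqrt(p)  # samples from q
        g = grad_f(z).mean()          # MC estimate of E_q[f'(z)]
        h = hess_f(z).mean()          # MC estimate of E_q[f''(z)]
        p = (1 - beta) * p - beta * h # precision: natural-gradient step
        m = m + beta * g / p          # mean: gradient scaled by 1/precision
    return m, p

# usage: for a Gaussian log-joint f(z) = -a/2 (z - b)^2, the fit should
# recover the exact posterior N(b, 1/a)
a, b = 4.0, 2.0
m, p = ngvi_fit(lambda z: -a * (z - b), lambda z: -a * np.ones_like(z))
```

For a Gaussian target the precision update converges to the exact posterior precision, which is the sanity check the usage line exercises; for non-conjugate models the same loop runs unchanged with MC derivative estimates.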
Sequential learning by exploration. Humans efficiently explore their surroundings and collect relevant experiences to improve their understanding of the world. We aim to design machines that mimic this process. Our eventual goal is to learn deep structured models through sequential exploration, using ideas from Bayesian inference and reinforcement learning. This is a long-term project, and we expect to deliver some results by the end of 2018.
[application] Machine learning for the design of high-performance buildings. This project explores applications of machine learning to the design, renovation, and operation of high-performance, sustainable buildings. Buildings are complex, dynamic systems that usually operate under conditions that are hard to predict and control. However, a huge amount of knowledge about building performance is encoded in physical simulators (based on PDEs), and even though simulators alone may not yield accurate predictions, we would like to exploit this knowledge. This project aims to improve the performance of simulators using generative models, for example, by generating realistic weather patterns for simulations. See this paper for a first set of results, where we used GPs to predict the outputs of simulators. A journal version is under submission and should be available by early 2018.
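As a toy illustration of what "using GPs to predict simulator outputs" means (not the actual models or data from the paper), here is a minimal numpy sketch where a GP with a squared-exponential kernel is fit to a few runs of a stand-in simulator; the kernel length-scale and the sine "simulator" are illustrative choices:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel matrix for 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(X_train, y_train, X_test, ls=1.0, noise=1e-4):
    """GP posterior mean and variance at X_test (1-D inputs)."""
    K = rbf(X_train, X_train, ls) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train, ls)
    mean = Ks @ np.linalg.solve(K, y_train)
    # posterior variance: prior variance 1 minus the explained part
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# usage: emulate an "expensive" simulator from a handful of runs
simulator = lambda x: np.sin(x)        # stand-in for a PDE-based simulator
X = np.linspace(0, 5, 8)               # a few simulator evaluations
mean, var = gp_predict(X, simulator(X), np.array([2.5]))
```

The emulator then replaces the simulator at unseen inputs, and the predictive variance indicates where more simulator runs would be most informative.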
[application] Automating Data Science. This project aims to develop machine-learning algorithms that assist data analysis in scientific research. Such research usually involves generating hypotheses that can be tested automatically using the data at hand. Unfortunately, both the scientists carrying out the analysis and the algorithms used for it can be biased and wrong (although in different ways). This project aims to combine the strengths of the human data analyst with those of the algorithm to improve the overall outcome. This is a long-term project, and we expect some outcomes in the next two years.