How to Weight Multitask Finetuning? Fast Previews via Bayesian Model-Merging, H. M. Maldonado, T. Möllenhoff, N. Daheim, I. Gurevych, M.E. Khan
[ ArXiv ]
2024
Variational Low-Rank Adaptation Using IVON, (Fine-Tuning in Modern ML (FITML) at NeurIPS 2024) B. Cong, N. Daheim, Y. Shen, D. Cremers, R. Yokota, M.E. Khan, T. Möllenhoff
[ OpenReview ] [ ArXiv ]
Variational Learning is Effective for Large Deep Networks, (ICML 2024) Y. Shen*, N. Daheim*, B. Cong, P. Nickl, G.M. Marconi, C. Bazan, R. Yokota, I. Gurevych, D. Cremers, M.E. Khan, T. Möllenhoff
[ ArXiv ] [ Blog ] [ Code ]
Accepted as a spotlight
Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI, (ICML 2024) With many authors [ ArXiv ]
Model Merging by Uncertainty-Based Gradient Matching, (ICLR 2024) N. Daheim, T. Möllenhoff, E. M. Ponti, I. Gurevych, M.E. Khan
[ ArXiv ] [ OpenReview ]
Conformal Prediction via Regression-as-Classification, (ICLR 2024) E. K. Guha, S. Natarajan, T. Möllenhoff, M.E. Khan, E. Ndiaye
[ OpenReview ] [ ArXiv ] [ Code ]
2023
Improving Continual Learning by Accurate Gradient Reconstructions of the Past, (TMLR) E. Daxberger, S. Swaroop, K. Osawa, R. Yokota, R. Turner, J. M. Hernández-Lobato, M.E. Khan [ OpenReview ]
The Bayesian Learning Rule, (JMLR) M.E. Khan, H. Rue [ JMLR 2023 ]
[ arXiv ] [ Tweet ]
The Memory Perturbation Equation: Understanding Model’s Sensitivity to Data, (NeurIPS 2023) P. Nickl, L. Xu, D. Tailor, T. Möllenhoff, M.E. Khan
[ arXiv ]
Bridging the Gap Between Target Networks and Functional Regularization, (TMLR) A. Piché, V. Thomas, R. Pardinas, J. Marino, G. M. Marconi, C. Pal, M.E. Khan
[ OpenReview ]
Variational Bayes Made Easy, (AABI 2023) M.E. Khan
[ arXiv ]
Exploiting Inferential Structure in Neural Processes, (UAI 2023) D. Tailor, M.E. Khan, E. Nalisnick
[ arXiv ]
Simplifying Momentum-based Riemannian Submanifold Optimization with Applications to Deep Learning, (ICML 2023) W. Lin, V. Duruisseaux, M. Leok, F. Nielsen, M.E. Khan, M. Schmidt
[ ArXiv ]
Memory-Based Dual Gaussian Processes for Sequential Learning, (ICML 2023) P. E. Chang, P. Verma, S. T. John, A. Solin, M.E. Khan
[ ArXiv ]
Accepted for an oral presentation
Lie-Group Bayesian Learning Rule, (AISTATS 2023) E. M. Kiral, T. Möllenhoff, M.E. Khan
[ arXiv ]
SAM as an Optimal Relaxation of Bayes, (ICLR 2023) T. Möllenhoff, M.E. Khan
[ arXiv ] [ Tweet ] Accepted for an oral presentation, 5% of accepted papers (75 out of 5000 submissions).
2022
Sequential Learning in GPs with Memory and Bayesian Leverage Score, (Continual Lifelong Workshop at ACML 2022) P. Verma, P. E. Chang, A. Solin, M.E. Khan
[ OpenReview ]
Practical Structured Riemannian Optimization with Momentum by using Generalized Normal Coordinates, (NeuReps Workshop at NeurIPS 2022) W. Lin, V. Duruisseaux, M. Leok, F. Nielsen, M.E. Khan, M. Schmidt
[ OpenReview ]
Can Calibration Improve Sample Prioritization?, (HITY Workshop at NeurIPS 2022) G. Tata, G. K. Gudur, G. Chennupati, M.E. Khan
[ OpenReview ]
2021
Dual Parameterization of Sparse Variational Gaussian Processes, (NeurIPS 2021) P. Chang, V. Adam, M.E. Khan, A. Solin
[ arXiv ] [ Code ]
Subset-of-Data Variational Inference for Deep Gaussian-Process Regression, (UAI 2021) A. Jain, P.K. Srijith, M.E. Khan
[ arXiv ] [ Code ]
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning, (ICML 2021) A. Immer, M. Bauer, V. Fortuin, G. Rätsch, M.E. Khan
[ arXiv ]
Tractable structured natural gradient descent using local parameterizations, (ICML 2021) W. Lin, F. Nielsen, M.E. Khan, M. Schmidt
[ arXiv ]
2020
Learning Algorithms from Bayesian Principles, (Draft) M.E. Khan, H. Rue
[ Draft version ] [ Full version on arXiv ] under a new title "The Bayesian Learning Rule"
Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R. E. Turner, M.E. Khan
[ arXiv ] [ Code ] [ Poster ] Accepted for an oral presentation, 1% of all submissions (105 out of 9454 submissions).
Fast Variational Learning in State-Space Gaussian Process Models, (MLSP 2020) P. E. Chang, W. J. Wilkinson, M.E. Khan, A. Solin
[ arXiv ]
Training Binary Neural Networks using the Bayesian Learning Rule, (ICML 2020) X. Meng, R. Bachmann, M.E. Khan
[ arXiv ] [ Code ]
Handling the Positive-Definite Constraint in the Bayesian Learning Rule, (ICML 2020) W. Lin, M. Schmidt, M.E. Khan
[ arXiv ]
VILD: Variational Imitation Learning with Diverse-quality Demonstrations, (ICML 2020) V. Tangkaratt, B. Han, M.E. Khan, M. Sugiyama
[ arXiv ]
Exact Recovery of Low-rank Tensor Decomposition under Reshuffling, (AAAI 2020) C. Li, M.E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, Q. Zhao
[ arXiv ]
2019
Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan.
[ arXiv ] [ Code ]
Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) M.E. Khan, A. Immer, E. Abedi, M. Korzepa.
[ arXiv ] [ Code ]
A Generalization Bound for Online Variational Inference (best paper award), (ACML 2019) Badr-Eddine Chérief-Abdellatif, Pierre Alquier, M.E. Khan.
[ arXiv ]
2018
SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient, (NeurIPS 2018) A. Mishkin, F. Kunstner, D. Nielsen, M. Schmidt, M.E. Khan.
[ arXiv ] [ Poster ] [ 3-min Video ] [ Code ]
Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models, (ISITA 2018) M.E. Khan and D. Nielsen
[ arXiv ] [ IEEE Xplore ] [ Slides ]
Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam, (ICML 2018) M.E. Khan, D. Nielsen, V. Tangkaratt, W. Lin, Y. Gal, and A. Srivastava
[ arXiv Version ] [ Code ] [ Slides ]
Variational Message Passing with Structured Inference Networks, (ICLR 2018) W. Lin, N. Hubacher, and M.E. Khan
[ Paper ] [ arXiv Version ] [ Code ]
Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling, (AISTATS 2018) H. Ding, M.E. Khan, I. Sato, M. Sugiyama
[ Paper ] [ Appendix ] [ Code ]
2017
SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices, (38th IEEE Symposium on Security and Privacy (S&P), San Jose, CA, USA, May 22-24, 2017) K. Olejnik, I. I. Dacosta Petrocelli, J. C. Soares Machado, K. Huguenin, M.E. Khan, and J.-P. Hubaux
[ Paper ] [ Code ] [ SmarPer Homepage ]
2016
Faster Stochastic Variational Inference using Proximal-Gradient Methods with General Divergence Functions, (UAI 2016) M.E. Khan, R. Babanezhad, W. Lin, M. Schmidt, M. Sugiyama
[ Paper + Appendix ] [ Code ]
Online Collaborative Prediction of Regional Vote Results, (IEEE International Conference on Data Science and Advanced Analytics (DSAA) 2016) V. Etter, M.E. Khan, M. Grossglauser, and P. Thiran
[ Paper ] [ Code & Data ]
Variational Inference on Deep Exponential Family by using Variational Inferences on Conjugate Models, (Workshop on Bayesian Deep Learning at NIPS 2016) M.E. Khan and Wu Lin
[ Paper ]
2015
Kullback-Leibler Proximal Variational Inference, (NIPS 2015) M.E. Khan, P. Baque, F. Fleuret, P. Fua.
[ Paper ] [ Appendix ]
Convergence of Proximal-Gradient Stochastic Variational Inference under Non-Decreasing Step-Size Sequence, (NIPS 2015, Workshop on Advances in Approximate Bayesian Inference) M.E. Khan, R. Babanezhad, W. Lin, M. Schmidt, M. Sugiyama
[ NIPS workshop version ] [ arXiv ]
UAVs using Bayesian Optimization to Locate WiFi Devices, (NIPS 2015, BayesOpt Workshop) M. Carpin, S. Rosati, M.E. Khan, B. Rimoldi
[ NIPS workshop version ] [ arXiv ]
2014
Decoupled Variational Gaussian Inference, (NIPS 2014) M.E. Khan
[ Paper and appendix ]
Variational Gaussian Inference for Bilinear Models of Count Data, (ACML 2014) Y.J. Ko, M.E. Khan
[ Paper ]
2013
Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models, (ICML 2013) M.E. Khan, A. Aravkin, M. Friedlander, M. Seeger
[ Paper ]
2012
Variational Learning for Latent Gaussian Models of Discrete Data, (PhD thesis) M.E. Khan [ Link to pdf ]
Fast Bayesian Inference for Non-Conjugate Gaussian Process Regression, (NIPS 2012) M.E. Khan, S. Mohamed, K. Murphy [ pdf ] [ poster ] [ code ]
Large-scale Approximate Bayesian Inference for Exponential Family Latent Gaussian Models, (ISBA 2012, poster) M.E. Khan, S. Mohamed, K. Murphy
A Stick-Breaking Likelihood for Categorical Data Analysis with Latent Gaussian Models, (AISTATS 2012) M.E. Khan, S. Mohamed, B. Marlin, K. Murphy [ pdf ] [ poster ] [ code ] [ datasets ]
2011
Piecewise Bounds for Estimating Bernoulli-Logistic Latent Gaussian Models, (ICML 2011) B. Marlin, M. E. Khan, K. Murphy [ pdf ] [ presentation ] [ poster ] [ code ] [ appendix ]
2010
Variational Bounds for Mixed-Data Factor Analysis, (NIPS 2010) M. E. Khan, B. Marlin, G. Bouchard, K. Murphy [ pdf ] [ poster ] [ MATLAB code ] [ corrected version ] Our implementation of the mixture model had a bug; the corrected version contains new results.
2009
Accelerating Bayesian Structural Inference for Non-decomposable Gaussian Graphical Model, (NIPS 2009, Oral) B. Moghaddam, B. Marlin, M. E. Khan, K. Murphy [ pdf ] [ poster ] [ MATLAB code (Ben’s website) ]
Before 2008
An Expectation-Maximization Algorithm Based Kalman Smoother Approach for Event-Related Desynchronization (ERD) Estimation from EEG, (IEEE Transactions on Biomedical Engineering, 2007) M.E. Khan, D. N. Dutt [ pdf ] [ MATLAB code ]
State Estimation with Wireless Devices, (Third Int. Conf. Intelligent Sensing and Information Processing (ICISIP), 2005) M.E. Khan, H. Raghavan, J. Brahmajosyula, S. K. Ramalingam, S. Narasimhan [ pdf ]
Hybrid System Framework for State Estimation in Systems with Wireless Devices, (Annual Meeting of American Institute of Chemical Engineers (AIChE), 2005) M.E. Khan, H. Raghavan, J. Brahmajosyula, S.K. Ramalingam, S. Narasimhan
Expectation-Maximization (EM) Algorithm for Instantaneous Frequency Estimation with Kalman Smoother, (EUSIPCO 2004) M. E. Khan, D.N. Dutt [ pdf ]
Estimation of ERS/ERD with Kalman Smoother: An EM Algorithm Approach, (BIOSIGNAL 2004) M. E. Khan, D. N. Dutt
Writings and Presentations by Emtiyaz Khan
Variational Methods for Discrete-Data Latent Gaussian Models (11 Mar 2012) Invited talks at EPFL, XRCE, and INRIA-SIERRA
[ slides ].
A Tutorial on Approximate Message Passing (08 Feb 2012) Talk at DNOISE
[ Report ].
Piecewise Bounds for Estimating Discrete-Data Latent Gaussian Models (29 Sep 2011) Talk at Microsoft Research, Redmond
[ video ] [ slides ]
Piecewise Bounds for Estimating Bernoulli-Logistic Latent Gaussian Models (29 Jun 2011) Talk at ICML 2011
[ slides ]
An Expectation-Maximization algorithm for Learning the Latent-Gaussian Model with Gaussian Likelihood (22 Apr 2011) Derivation of a simple factor analysis model
[ pdf ]
Variational EM Algorithms for Correlated Topic Models (14 Sep 2009) Derivation of the variational EM algorithm for the correlated topic model
[ pdf ]
Bayesian Inference for a model with Gaussian likelihood and Gaussian prior on the mean of the likelihood (25 Feb 2009)
[ pdf ]
Empirical Bayes estimate of Covariance for Multivariate Normal Distribution (29 Jan 2009)
[ pdf ]
Bayesian search algorithms for decomposable Gaussian graphical model (24 Dec 2008)
[ pdf ]
Updating Inverse of a Matrix when a Column is added/removed (27 Feb 2008)
[ pdf ] [ code ]
Kalman Filters (25 Feb 2008) Slides from my talk at the Dynamic Programming course at UBC
[ Slides ] [ Demo ]
Matrix Inversion Lemma and Information Filter (25 Feb 2008) Deriving the information filter by applying the matrix inversion lemma to Kalman filters
[ pdf ]
Variational Bayes and Variational Message Passing (30 Oct 2007) Presentation at the Machine Learning Reading Group at UBC
[ slides ]
Exchangeability, Polya’s Urn, and de Finetti’s Theorem (02 Oct 2007)
[ pdf ]
A review of Linear Algebra (28 Sep 2007) Presented for the refresher courses at Computer Science, UBC
[ slides ]
A review of basic probability theory (18 Sep 2007) Presented for the refresher courses at Computer Science, UBC
[ Slides ]
Brain-Computer Interface: Overview, Methods, and Opportunities (14 June 2007) Talk at the CIFAR Time-series Workshop, University of Toronto
[ slides ]
Talk on Signal Compression and JPEG (18 May 2007) Presented at the Undistinguished Lecture Series (UDLS), CS, UBC
[ Abstract ] [ slides ]
Compressed Sensing, Compressed Classification and Joint Signal Recovery (April 2007) Course project-report for the Machine Learning course taught by Nando de Freitas
[ pdf ]
Gibbs Sampling for the Probit Regression Model with Gaussian Markov Random Field Latent Variables (April 2007) Project report for the Statistical-Computation course taught by Arnaud Doucet
[ pdf ] [ slides ]
A review of basic probability theory (Jan 2007) Presented at the Undistinguished Lecture Series (UDLS), CS, UBC
[ Slides ]
Game theory models for Pursuit-evasion games (Dec 2007) Course-project report for the Multi-agent systems course taught by Kevin Leyton-Brown
[ pdf ]
An incremental deployment algorithm for mobile sensors (Dec 2007) Course-project report for the Numerical Optimization course taught by Michael Friedlander
[ pdf ]