Variational Uncertainty Decomposition for In-Context Learning
WHEN
27 June 2025
Friday
10.00am - 11.00am Australia/Sydney
WHERE
City campus
CB11.06.408 (Room 408, Level 06, Building 11)
COST
Free admission
CONTACT
Email Dr Junyu Xuan
As large language models (LLMs) are increasingly used for prediction tasks in-context, understanding the sources of uncertainty in in-context learning becomes essential for ensuring reliability.
The recent hypothesis that in-context learning performs predictive Bayesian inference opens an avenue for Bayesian uncertainty estimation, particularly for decomposing uncertainty into epistemic uncertainty due to lack of in-context data and aleatoric uncertainty inherent in the in-context prediction task.
However, the decomposition idea remains under-explored due to the intractability of the latent parameter posterior from the underlying Bayesian model.
In this work, we introduce a variational uncertainty decomposition framework for in-context learning that avoids explicitly sampling from the latent parameter posterior: auxiliary inputs are optimised as probes to obtain an upper bound on the aleatoric uncertainty of an LLM's in-context learning procedure.
Through experiments on synthetic and real-world tasks, we show quantitatively and qualitatively that the decomposed uncertainties obtained from our method exhibit desirable properties of epistemic and aleatoric uncertainty.
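As background, the epistemic/aleatoric split referred to above is classically formalised as an information-theoretic decomposition: total predictive entropy equals the expected entropy under the latent parameter posterior (aleatoric) plus the mutual information between the prediction and the latent parameter (epistemic). The sketch below illustrates this standard decomposition given hypothetical posterior samples of a categorical predictive distribution; it is illustrative background, not the talk's variational method, which specifically avoids sampling this posterior.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy (in nats) of categorical distributions along `axis`
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def decompose_uncertainty(probs):
    """Classical information-theoretic decomposition (illustrative only).

    probs: array of shape (S, C) -- S predictive distributions over C classes,
           e.g. one per (hypothetical) posterior sample of the latent parameter.
    Returns (total, aleatoric, epistemic), with total = aleatoric + epistemic.
    """
    total = entropy(probs.mean(axis=0))         # H[ E_theta p(y|x, theta) ]
    aleatoric = entropy(probs).mean()           # E_theta H[ p(y|x, theta) ]
    epistemic = total - aleatoric               # mutual information I(y; theta)
    return total, aleatoric, epistemic

# Two posterior samples that disagree sharply: each individual prediction is
# confident (low aleatoric), but the disagreement yields high epistemic uncertainty.
probs = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
total, alea, epi = decompose_uncertainty(probs)
```

In this example the averaged prediction is uniform, so the total entropy is log 2 while each individual prediction has low entropy, and the gap between the two is attributed to epistemic uncertainty.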
Speaker
Dr Yingzhen Li is an Associate Professor in Machine Learning at the Department of Computing, Imperial College London, UK. Before that she was a senior researcher at Microsoft Research Cambridge, and previously she has interned at Disney Research. She received her PhD in engineering from the University of Cambridge, UK.
Yingzhen is passionate about building reliable machine learning systems, and her approach combines Bayesian statistics and deep learning. She has worked extensively on approximate inference methods with applications to Bayesian deep learning and deep generative models, and her work has been applied in industrial systems and implemented in deep learning frameworks (e.g. TensorFlow Probability and Pyro).
She regularly gives tutorials and lectures on probabilistic ML and generative models at machine learning research summer schools, and she gave an invited tutorial on Advances in Approximate Inference at NeurIPS 2020.
She was a co-organiser of the Advances in Approximate Bayesian Inference (AABI) symposium in 2020-2023, as well as many NeurIPS/ICML/ICLR workshops on topics related to probabilistic learning. She was a Program Chair for AISTATS 2024 and serves as General Chair for AISTATS 2025 and 2026. Her work on Bayesian ML has also been recognised in the AAAI 2023 New Faculty Highlights.