Sampling Methods for Uncertainty Quantification
CAP 6938 (Spring 2026)
Department of Computer Science
University of Central Florida
Orlando, FL, USA
Instructor and TA information
- Instructor: Ali Siahkoohi
- Email: alisk@ucf.edu
- Office hours: Wednesdays, 11 am–12 pm; see Webcourses for the location.
Description
This special topics course introduces modern computational sampling methods for uncertainty quantification in scientific computing and engineering applications. Topics include mathematical and computational principles of classical and contemporary sampling approaches with emphasis on understanding the inner workings of deep generative models, variational inference, and simulation-based inference methods. Students implement sampling algorithms and apply them to uncertainty quantification problems through hands-on programming assignments.
Why take this course?
AI models have recently driven significant advances across science and engineering, yet critical reliability concerns remain: models produce unreliable predictions, lack quantifiable safety guarantees, and their theoretical foundations remain opaque to many practitioners. Current approaches focused on scaling and post-hoc validation cannot systematically address these issues. In this course, you will learn modern uncertainty quantification techniques that not only apply broadly across science and engineering but are also key to designing more reliable AI models. Rather than treating generative models, the core components of modern sampling and uncertainty quantification techniques, as black boxes, you will understand and implement the mathematical principles that make them work. Through five hands-on programming assignments, you will build these algorithms from the ground up, gaining the foundational expertise needed to adapt, debug, and innovate in this rapidly evolving field.
Prerequisites
Students should have background in probability theory, linear algebra, and programming. Familiarity with deep learning basics is recommended but not required.
Textbooks
There is no required textbook for the class. A list of recommended papers will be provided during the course.
Grading
- Programming assignments: 60% (5 assignments × 12% each)
- Paper presentation: 25%
- Lecture scribing: 15% (the number of scribing assignments depends on total class enrollment)
- Extra credit: 2% for submitting the teaching evaluation, awarded if more than 80% of students submit evaluations
Course schedule
Course material will be posted on the website before each lecture. The instructor reserves the right to alter this schedule as needed.
| Week | Module | Lecture Topics | Resources | Assignments/Notes |
|---|---|---|---|---|
| Week 1-2 | Module 1: Foundations | Uncertainty quantification in scientific computing; review of probability; Monte Carlo integration and rejection sampling | What is UQ; Rethinking Uncertainty; Information Adequacy Illusion; You’re Confidently Wrong; Confidence Misleads; Supervised OOD Detection; Mathematics for Machine Learning | MLK Day: no class on Jan 19; scribing instructions and template (pdf, zip); Notes on Probability; Notes on Linear Algebra; Notes on Norms; Notes on Calculus; Notes on Linear/Quadratic Gradients; Notes on Max and Argmax; Rejection Sampling Notes; assignment instructions; Assignment 1 (a1.zip) |
| Week 3-4 | Module 2: MCMC | Importance sampling and introduction to MCMC: Metropolis-Hastings; Gibbs sampling; gradient-based MCMC (Langevin dynamics and MALA); convergence diagnostics; Hamiltonian Monte Carlo fundamentals; No-U-Turn Sampler (NUTS); adaptive MCMC methods; practical considerations and debugging strategies | Handbook of Markov Chain Monte Carlo; Understanding the Metropolis-Hastings Algorithm; Explaining the Gibbs Sampler; Markov Chain Monte Carlo Lecture Notes; MCMC Slides; On Thinning of Chains in MCMC; Practical Markov Chain Monte Carlo; Hamiltonian Monte Carlo; A Conceptual Introduction to Hamiltonian Monte Carlo; MCMC Interactive Gallery | Assignment 2; Assignment 1 due |
| Week 5-7 | Module 3: Variational Inference | Variational inference and the Evidence Lower Bound (ELBO); variational autoencoders for uncertainty representation; amortized inference and the reparameterization trick; Stein Variational Gradient Descent (SVGD) and particle-based methods; VI vs. MCMC tradeoffs and diagnostics | | Assignment 3; Assignment 2 due |
| Week 8-9 | Module 4: Deep Generative Models | Normalizing flows for exact density modeling; coupling layers and autoregressive flows; invertible neural networks; diffusion models and score-based generative models; maximum likelihood perspective on flow and diffusion models | | Assignment 4; Assignment 3 due |
| Week 10-11 | Module 5: Simulation-Based Inference | Likelihood-free inference and Approximate Bayesian Computation (ABC); neural posterior estimation and neural likelihood estimation; sequential methods and active learning; practical implementation and diagnostic tools | | Assignment 5; Assignment 4 due |
| Week 12-13 | Instructor-Guided Paper Discussions | Instructor curates and leads discussions on 4-6 recent research papers | | Assignment 5 due |
| Week 14-15 | Student Paper Presentations | Student presentations on research papers (selected from the curated list or instructor-approved); critical analysis and discussion of recent developments; advanced topics and cutting-edge developments in sampling methodology | | Paper presentations |
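To give a flavor of the hands-on work in Module 1, here is a minimal, illustrative sketch of rejection sampling in Python/NumPy. This is not course-provided code, and the function and variable names (`rejection_sample`, `target_pdf`, `M`) are our own; it assumes the standard envelope condition that the target density is bounded by `M` times the proposal density everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n):
    """Draw n samples from target_pdf by rejection sampling.

    Requires the envelope condition target_pdf(x) <= M * proposal_pdf(x)
    for all x in the proposal's support.
    """
    samples = []
    while len(samples) < n:
        x = proposal_sample()             # propose from q
        u = rng.uniform()                 # uniform height under the envelope
        if u < target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)             # accept; otherwise reject and retry
    return np.array(samples)

# Example: standard normal target, Uniform(-5, 5) proposal.
target = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
proposal_pdf = lambda x: 0.1              # Uniform(-5, 5) density
proposal_sample = lambda: rng.uniform(-5.0, 5.0)
M = 10.0 / np.sqrt(2.0 * np.pi)           # smallest valid envelope constant here

xs = rejection_sample(target, proposal_sample, proposal_pdf, M, 5000)
print(xs.mean(), xs.std())                # roughly 0 and 1
```

Note that the acceptance rate is 1/M, so a proposal that poorly matches the target wastes most draws; this tradeoff motivates the MCMC methods in Module 2.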
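Similarly, as a preview of Module 2, the sketch below shows a random-walk Metropolis sampler (a special case of Metropolis-Hastings with a symmetric Gaussian proposal, so the Hastings correction cancels). Again this is illustrative only; the names (`metropolis_hastings`, `log_target`, `step`) are ours, not the assignment's.

```python
import numpy as np

def metropolis_hastings(log_target, init, step, n_steps, rng):
    """Random-walk Metropolis with a Gaussian proposal of scale `step`.

    Works with an unnormalized log density, since the normalizing
    constant cancels in the acceptance ratio.
    """
    x = init
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        # Accept with probability min(1, p(prop) / p(x)).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[i] = x
    return chain

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * (x - 2.0) ** 2   # N(2, 1) up to a constant
chain = metropolis_hastings(log_target, 0.0, 1.0, 20000, rng)
print(chain[2000:].mean())                     # roughly 2 after burn-in
```

Because successive states are correlated, the effective sample size is smaller than the chain length; the convergence diagnostics covered in Module 2 address exactly this issue.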