Sampling Methods for Uncertainty Quantification

CAP 6938 (Spring 2026)

Department of Computer Science
University of Central Florida
Orlando, FL, USA

Instructor information

Instructor: Ali Siahkoohi

See Webcourses for the office hour location.

Description

This special topics course introduces modern computational sampling methods for uncertainty quantification in scientific computing and engineering applications. Topics include mathematical and computational principles of classical and contemporary sampling approaches with emphasis on understanding the inner workings of deep generative models, variational inference, and simulation-based inference methods. Students implement sampling algorithms and apply them to uncertainty quantification problems through hands-on programming assignments.

Why take this course?

AI models have recently driven significant advances across science and engineering, yet critical reliability concerns remain: models produce unreliable predictions, lack quantifiable safety guarantees, and their theoretical foundations remain opaque to many practitioners. Current approaches focused on scaling and post-hoc validation cannot systematically address these issues. In this course, you will learn modern uncertainty quantification techniques that can not only be applied widely across science and engineering but also hold the key to designing more reliable AI models. You will go beyond treating generative models, which are key components of modern sampling and uncertainty quantification techniques, as black boxes, instead understanding and implementing the core mathematical principles that make them work. Through five hands-on programming assignments, you will build these algorithms from the ground up, gaining the deep foundational expertise needed to adapt, debug, and innovate in this rapidly evolving field.

Prerequisites

Students should have a background in probability theory, linear algebra, and programming. Familiarity with deep learning basics is recommended but not required.

Textbooks

There is no required textbook for the class. A list of recommended papers will be provided during the course.

Grading

  • Programming assignments 60% (5 assignments × 12% each)
  • Paper presentation 25%
  • Lecture scribing 15% (number of scribing assignments will depend on total class enrollment)
  • Extra credit 2% for submitting the teaching evaluation, awarded if more than 80% of students submit it

Course schedule

Course material will be posted on the website before each lecture. The instructor reserves the right to alter this schedule as needed.

Week 1-2, Module 1: Foundations

Lecture topics:
  • Uncertainty quantification in scientific computing
  • Review of probability
  • Monte Carlo integration and rejection sampling
Resources:
  • What is UQ
  • Rethinking Uncertainty
  • Information Adequacy Illusion
  • You’re Confidently Wrong
  • Confidence Misleads Supervised OOD Detection
  • Mathematics for Machine Learning
Assignments/Notes:
  • MLK Day - No Class on Jan 19
  • Scribing instructions and template (pdf, zip)
  • Notes on Probability
  • Notes on Linear Algebra
  • Notes on Norms
  • Notes on Calculus
  • Notes on Linear/Quadratic Gradients
  • Notes on Max and Argmax
  • Rejection Sampling Notes
  • Assignment instructions
  • Assignment 1 (a1.zip)
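
For a taste of the implementation work in this module, here is a minimal rejection-sampling sketch (illustrative code, not part of the course materials; the bimodal target, Gaussian proposal, and bound M are made up for this example):

```python
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    # Unnormalized bimodal target: mixture of two unit-variance Gaussians.
    return np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2)

def rejection_sample(n, proposal_scale=4.0, M=12.0):
    # Proposal q(x) = N(0, proposal_scale^2); M is chosen so that
    # target_pdf(x) <= M * q(x) for all x.
    samples = []
    while len(samples) < n:
        x = rng.normal(0.0, proposal_scale)
        q = np.exp(-0.5 * (x / proposal_scale) ** 2) / (
            proposal_scale * np.sqrt(2 * np.pi)
        )
        # Accept x with probability p(x) / (M * q(x)).
        if rng.uniform() < target_pdf(x) / (M * q):
            samples.append(x)
    return np.array(samples)

samples = rejection_sample(5000)
```

Accepted draws are exact samples from the normalized target; a looser bound M still yields correct samples but wastes proposals.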

Week 3-4, Module 2: MCMC

Lecture topics:
  • Importance sampling and introduction to MCMC: Metropolis-Hastings
  • Gibbs sampling and convergence diagnostics
  • Gradient-based MCMC (Langevin dynamics and MALA)
  • Hamiltonian Monte Carlo fundamentals
Resources:
  • Handbook of Markov Chain Monte Carlo
  • Understanding the Metropolis-Hastings Algorithm
  • Explaining the Gibbs Sampler
  • Markov Chain Monte Carlo Lecture Notes
  • MCMC Slides
  • On Thinning of Chains in MCMC
  • Practical Markov Chain Monte Carlo
  • Stochastic Gradient Langevin Dynamics
  • Preconditioned SGLD
  • Hamiltonian Monte Carlo
  • A Conceptual Introduction to Hamiltonian Monte Carlo
  • MCMC Interactive Gallery
Assignments/Notes:
  • Assignment 1 due
  • Assignment 2 (a2.zip)
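
As a preview of this module, a minimal random-walk Metropolis-Hastings sketch (illustrative, not course code; the standard-Gaussian target and step size are chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized log-density of a standard Gaussian target.
    return -0.5 * x ** 2

def metropolis_hastings(n_steps, step_size=1.0, x0=0.0):
    # Random-walk Metropolis: the Gaussian proposal is symmetric, so the
    # acceptance ratio reduces to p(x') / p(x).
    chain = np.empty(n_steps)
    x = x0
    for t in range(n_steps):
        proposal = x + step_size * rng.normal()
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:
            x = proposal   # accept the move
        chain[t] = x       # on rejection, the chain repeats the current state
    return chain

chain = metropolis_hastings(20000)
```

Unlike rejection sampling, successive states are correlated, which is why the module pairs the algorithm with convergence diagnostics and thinning.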

Week 5-7, Module 3: Variational Inference

Lecture topics:
  • Introduction to variational inference and KL divergence
  • Reparameterization and the ELBO
  • Stochastic, non-amortized, and amortized variational inference
  • Particle-based variational inference and Stein’s identity
  • Stein Variational Gradient Descent
  • VI vs MCMC: tradeoffs, diagnostics, and hybrid methods
Resources:
  • VI Review for Statisticians
  • Advances in VI
  • Auto-Encoding Variational Bayes
  • Monte Carlo Gradient Estimation
  • Reliable Amortized VI
  • Stein Variational Gradient Descent
  • VI with Normalizing Flows
  • Yes, but Did It Work? Evaluating VI
  • Pareto Smoothed Importance Sampling
  • Importance Weighted Autoencoders
  • Practical Posterior Error Bounds from VI Objectives
  • Transport Map Accelerated MCMC
Assignments/Notes:
  • Assignment 2 due
  • Assignment 3 (a3.zip)
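
To preview reparameterization and the ELBO, a minimal sketch that fits a Gaussian q(z) = N(mu, sigma^2) to a Gaussian target by stochastic gradient ascent on a Monte Carlo ELBO estimate (illustrative; the target, learning rate, and hand-derived gradients are specific to this toy example):

```python
import numpy as np

rng = np.random.default_rng(2)

def dlogp_dz(z):
    # Gradient of the unnormalized target log-density log p(z) = -(z - 3)^2 / 2,
    # i.e. a Gaussian target with mean 3 and unit variance.
    return -(z - 3.0)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1), so
# gradients of E_q[log p(z)] flow through z to (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_mc = 0.05, 64
for step in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=n_mc)
    z = mu + sigma * eps
    g = dlogp_dz(z)
    grad_mu = g.mean()
    # Chain rule through z (dz/dlog_sigma = sigma * eps), plus the entropy
    # term of the ELBO, whose gradient w.r.t. log_sigma is exactly 1.
    grad_log_sigma = (g * eps * sigma).mean() + 1.0
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma
```

Because the target is Gaussian here, q can match it exactly (mu near 3, sigma near 1); the same estimator is what makes VAEs trainable when log p is a deep model.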

Week 8-9, Module 4: Deep Generative Models

Lecture topics:
  • Normalizing flows: theory and architectures
  • Variational autoencoders and amortized inference
  • Score matching and diffusion models
  • Flow matching and continuous normalizing flows
Assignments/Notes:
  • Assignment 3 due
  • Assignment 4
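
The normalizing-flow lectures rest on the change-of-variables formula; a one-dimensional affine "flow" is enough to see it (illustrative example, not course code):

```python
import numpy as np

# If z ~ N(0, 1) and x = a * z + b, the change of variables gives
# log p_x(x) = log p_z(f_inverse(x)) - log |det df/dz|, which in 1-D is
# log p_z((x - b) / a) - log |a|.
a, b = 2.0, 1.0  # illustrative affine flow parameters

def log_pz(z):
    # Standard Gaussian base log-density.
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)

def log_px(x):
    z = (x - b) / a                     # invert the flow
    return log_pz(z) - np.log(abs(a))   # Jacobian (log-determinant) correction

# Sanity check: x = a*z + b pushes N(0, 1) to N(b, a^2), so log_px should
# match that Gaussian's log-density at any point.
x = 1.5
analytic = -0.5 * ((x - b) / a) ** 2 - 0.5 * np.log(2 * np.pi * a ** 2)
```

Deep flows stack many such invertible maps with learnable parameters; the log-determinant term is what the architectures in this module are designed to keep tractable.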

Week 10-11, Module 5: Simulation-Based Inference

Lecture topics:
  • Likelihood-free inference and Approximate Bayesian Computation (ABC)
  • Neural posterior estimation and neural likelihood estimation
  • Sequential methods and active learning
  • Practical implementation and diagnostic tools
Assignments/Notes:
  • Assignment 4 due
  • Assignment 5
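
A minimal ABC rejection sketch previews the likelihood-free setting (illustrative; the Gaussian simulator, broad prior, and tolerance are made up for this toy problem):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: the simulator draws data from N(theta, 1). We only get to run
# it forward; no likelihood is evaluated anywhere below.
obs_data = rng.normal(2.0, 1.0, size=100)
obs_summary = obs_data.mean()  # summary statistic of the observed data

def abc_rejection(n_accept, tol=0.1):
    # Keep prior draws whose simulated summary lands within tol of the
    # observed summary; accepted thetas approximate the posterior.
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, 5.0)            # broad Gaussian prior
        sim = rng.normal(theta, 1.0, size=100)  # one simulator run
        if abs(sim.mean() - obs_summary) < tol:
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = abc_rejection(500)
```

Shrinking the tolerance sharpens the approximation but wastes more simulations, the inefficiency that neural posterior and likelihood estimation methods in this module address.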

Week 12-13, Instructor-Guided Paper Discussions

Lecture topics:
  • Instructor curates and leads discussions on 4-6 recent research papers
Assignments/Notes:
  • Assignment 5 due

Week 14-15, Student Paper Presentations

Lecture topics:
  • Student presentations on research papers (selected from the curated list or instructor-approved)
  • Critical analysis and discussion of recent developments
  • Advanced topics and cutting-edge developments in sampling methodology
Assignments/Notes:
  • Paper presentations