Sampling Methods for Uncertainty Quantification

CAP 6938 (Spring 2026)

Department of Computer Science
University of Central Florida
Orlando, FL, USA

Instructor information

Instructor: Ali Siahkoohi

See Webcourses for the office hour location.

Description

This special topics course introduces modern computational sampling methods for uncertainty quantification in scientific computing and engineering applications. It covers the mathematical and computational principles of classical and contemporary sampling approaches, with an emphasis on the inner workings of deep generative models, variational inference, and simulation-based inference methods. Students implement sampling algorithms and apply them to uncertainty quantification problems through hands-on programming assignments.

Why take this course?

AI models have recently driven significant advances across science and engineering, yet critical reliability concerns remain: models produce unreliable predictions, lack quantifiable safety guarantees, and rest on theoretical foundations that remain opaque to many practitioners. Current approaches focused on scaling and post-hoc validation cannot systematically address these issues. In this course, you will learn modern uncertainty quantification techniques that not only apply widely across science and engineering domains but also hold the key to designing more reliable AI models. Rather than treating generative models, the key components of modern sampling and uncertainty quantification techniques, as black boxes, you will understand and implement the core mathematical principles that make them work. Through five hands-on programming assignments, you will build these algorithms from the ground up, gaining the foundational expertise needed to adapt, debug, and innovate in this rapidly evolving field.

Prerequisites

Students should have a background in probability theory, linear algebra, and programming. Familiarity with deep learning basics is recommended but not required.

Textbooks

There is no required textbook for the class. A list of recommended papers will be provided during the course.

Grading

  • Programming assignments 60% (5 assignments × 12% each)
  • Paper presentation 25%
  • Lecture scribing 15% (the number of scribing assignments will depend on total class enrollment)
  • Extra credit: 2% for submitting the teaching evaluation, awarded if more than 80% of students submit it

Course schedule

Course material will be posted on the website before each lecture. The instructor reserves the right to alter this schedule as needed.

Weeks 1-2 (Module 1: Foundations)

Lecture topics:
  • Uncertainty quantification in scientific computing
  • Review of probability
  • Monte Carlo integration and rejection sampling (see the sketch after this block)

Resources:
  • What is UQ
  • Rethinking Uncertainty
  • Information Adequacy Illusion
  • You’re Confidently Wrong
  • Confidence Misleads Supervised OOD Detection
  • Mathematics for Machine Learning
  • Notes on Probability
  • Notes on Linear Algebra
  • Notes on Norms
  • Notes on Calculus
  • Notes on Linear/Quadratic Gradients
  • Notes on Max and Argmax
  • Rejection Sampling Notes

Assignments/Notes:
  • MLK Day - No Class on Jan 19
  • Scribing instructions and template (pdf, zip)
  • Assignment instructions
  • Assignment 1 (a1.zip)
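
The rejection-sampling topic above is easy to preview in code. Below is a minimal NumPy sketch, not course material: the bimodal target, the N(0, 3^2) proposal, and the bound M = 5 are all illustrative choices. A sample x from the proposal q is accepted with probability p(x) / (M q(x)), which requires p(x) <= M q(x) everywhere.

    import numpy as np

    rng = np.random.default_rng(0)

    def target_pdf(x):
        # Illustrative unnormalized target: equal mixture of N(-2, 1) and N(2, 1).
        return 0.5 * np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

    def proposal_pdf(x, scale=3.0):
        # Proposal q = N(0, scale^2); its tails dominate the target's.
        return np.exp(-0.5 * (x / scale) ** 2) / (scale * np.sqrt(2.0 * np.pi))

    def rejection_sample(n, M=5.0, scale=3.0):
        # Accept x ~ q with probability p(x) / (M q(x)); needs p <= M q everywhere.
        samples = []
        while len(samples) < n:
            x = rng.normal(0.0, scale)
            if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x, scale)):
                samples.append(x)
        return np.array(samples)

    xs = rejection_sample(10_000)
    print(xs.mean(), xs.std())  # roughly 0 and sqrt(5) for this mixture

The proposal's tails must dominate the target's; otherwise no finite M exists and the method breaks down.
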
Weeks 3-4 (Module 2: MCMC)

Lecture topics:
  • Importance sampling and introduction to MCMC: Metropolis-Hastings (see the sketch after this block)
  • Gibbs sampling and convergence diagnostics
  • Gradient-based MCMC (Langevin dynamics and MALA)
  • Hamiltonian Monte Carlo fundamentals

Resources:
  • Handbook of Markov Chain Monte Carlo
  • Understanding the Metropolis-Hastings Algorithm
  • Explaining the Gibbs Sampler
  • Markov Chain Monte Carlo Lecture Notes
  • MCMC Slides
  • On Thinning of Chains in MCMC
  • Practical Markov Chain Monte Carlo
  • Stochastic Gradient Langevin Dynamics
  • Preconditioned SGLD
  • Hamiltonian Monte Carlo
  • A Conceptual Introduction to Hamiltonian Monte Carlo
  • MCMC Interactive Gallery

Assignments/Notes:
  • Assignment 1 due
  • Assignment 2 (a2.zip)
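
As a preview of the Metropolis-Hastings mechanics listed above, here is a minimal random-walk sketch in NumPy; the double-well target, step size, and chain length are illustrative assumptions, not course-provided settings. A perturbed state is proposed and accepted with probability min(1, p(x')/p(x)), which, with a symmetric proposal, leaves the target distribution invariant.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_target(x):
        # Illustrative unnormalized log-density of a double-well:
        # p(x) proportional to exp(-(x^2 - 4)^2 / 4).
        return -((x ** 2 - 4.0) ** 2) / 4.0

    def metropolis_hastings(n_steps, step=1.0, x0=0.0):
        x, logp = x0, log_target(x0)
        chain, accepted = [], 0
        for _ in range(n_steps):
            x_prop = x + step * rng.normal()  # symmetric random-walk proposal
            logp_prop = log_target(x_prop)
            # Accept with probability min(1, p(x') / p(x)).
            if np.log(rng.uniform()) < logp_prop - logp:
                x, logp = x_prop, logp_prop
                accepted += 1
            chain.append(x)
        return np.array(chain), accepted / n_steps

    chain, acc_rate = metropolis_hastings(50_000)
    print(f"acceptance rate: {acc_rate:.2f}")

An acceptance rate far from roughly 0.2-0.5 usually signals a poorly tuned step size: too large and almost everything is rejected, too small and the chain barely moves.
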
Weeks 5-7 (Module 3: Variational Inference)

Lecture topics:
  • Introduction to variational inference and KL divergence
  • Reparameterization and the ELBO (see the sketch after this block)
  • Stochastic, non-amortized, and amortized variational inference
  • Particle-based variational inference and Stein’s identity
  • Stein Variational Gradient Descent
  • VI vs MCMC: tradeoffs, diagnostics, and hybrid methods

Resources:
  • VI Review for Statisticians
  • Advances in VI
  • Auto-Encoding Variational Bayes
  • Monte Carlo Gradient Estimation
  • Reliable Amortized VI
  • Stein Variational Gradient Descent
  • VI with Normalizing Flows
  • Yes, but Did It Work? Evaluating VI
  • Pareto Smoothed Importance Sampling
  • Importance Weighted Autoencoders
  • Practical Posterior Error Bounds from VI Objectives
  • Transport Map Accelerated MCMC

Assignments/Notes:
  • Assignment 2 due
  • Assignment 3 (a3.zip)
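
The reparameterization trick and the ELBO from this module fit in a few lines. In the sketch below, the target and the Gaussian variational family are illustrative, not from the course: x is written as mu + sigma * eps with eps drawn from a fixed base distribution, which is what makes the ELBO differentiable in the variational parameters once ported to an autodiff framework, and the Gaussian entropy term is closed-form.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_p_tilde(x):
        # Illustrative unnormalized log-target: a Gaussian centered at 1.5,
        # so log Z = 0.5 * log(2 * pi).
        return -0.5 * (x - 1.5) ** 2

    def elbo(mu, log_sigma, n_samples=5_000):
        # Reparameterization: x = mu + sigma * eps with eps ~ N(0, 1), so the
        # expectation is over a fixed base distribution.
        sigma = np.exp(log_sigma)
        x = mu + sigma * rng.normal(size=n_samples)
        entropy = 0.5 * np.log(2.0 * np.pi * np.e) + log_sigma  # Gaussian entropy
        return log_p_tilde(x).mean() + entropy

    print(elbo(0.0, 0.0))  # mismatched q: smaller value
    print(elbo(1.5, 0.0))  # q matches the target: ELBO approaches log Z ~ 0.919

Because the ELBO lower-bounds log Z with equality when q equals the posterior, maximizing it over (mu, log_sigma) drives q toward the target.
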
Weeks 8-10 (Module 4: Deep Generative Models)

Lecture topics:
  • Normalizing flows: theory and architectures (see the sketch after this block)
  • Normalizing flows continued
  • Variational autoencoders and amortized inference
  • Score matching and diffusion models
  • Score-based models and flow matching
  • Flow matching continued

Resources:
  • Normalizing Flows Survey
  • Real-NVP
  • Masked Autoregressive Flow
  • Universal Approximation for Coupling Flows
  • Householder Flow
  • HINT: Hierarchical Invertible Neural Transport
  • InvertibleNetworks.jl (JOSS)
  • InvertibleNetworks.jl (GitHub)
  • STARFlow
  • Introduction to Variational Autoencoders
  • Denoising Diffusion Probabilistic Models
  • Score Matching with Langevin Dynamics
  • Score-Based Generative Modeling through SDEs
  • Diffusion Objectives as the ELBO
  • Flow Matching for Generative Modeling
  • Stochastic Interpolants
  • Diffusion Meets Flow Matching
  • Improving and Generalizing Flow-Based Generative Models

Assignments/Notes:
  • Spring Break - No Class (Mar 16-20)
  • Assignment 3 due
  • Assignment 4 (a4.zip)
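
To preview the normalizing-flow machinery, here is a minimal affine-coupling sketch in the style of Real-NVP; the dimensions, the tiny network, and the random untrained parameters are all illustrative. The coupling split keeps the Jacobian triangular, so the log-determinant in the change-of-variables formula reduces to a sum of predicted log-scales.

    import numpy as np

    rng = np.random.default_rng(0)

    d, hidden = 4, 16  # illustrative data dimension (split in half) and hidden width
    # Random, untrained parameters; a real flow would learn these.
    W1 = 0.1 * rng.normal(size=(d // 2, hidden))
    b1 = np.zeros(hidden)
    W2 = 0.1 * rng.normal(size=(hidden, d))  # outputs per-dim log-scale and shift
    b2 = np.zeros(d)

    def coupling_forward(x):
        # Affine coupling: x1 passes through unchanged; x2 is scaled and
        # shifted by functions of x1, so the Jacobian is triangular.
        x1, x2 = x[:, : d // 2], x[:, d // 2 :]
        out = np.tanh(x1 @ W1 + b1) @ W2 + b2
        log_s, t = out[:, : d // 2], out[:, d // 2 :]
        y2 = x2 * np.exp(log_s) + t
        log_det = log_s.sum(axis=1)  # log|det J| is a sum of log-scales
        return np.concatenate([x1, y2], axis=1), log_det

    x = rng.normal(size=(8, d))
    y, log_det = coupling_forward(x)
    # Change of variables: log p_X(x) = log p_Y(y) + log|det J|.
    print(y.shape, log_det[:3])

Stacking such layers, with the split permuted between layers so every dimension gets transformed, is what gives coupling flows their expressiveness.
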
Weeks 11-12 (Module 5: Simulation-Based Inference)

Lecture topics:
  • Likelihood-free inference and Approximate Bayesian Computation (ABC; see the sketch after this block)
  • Neural approaches to SBI: NPE, NLE, and NRE
  • Sequential simulation-based inference

Resources:
  • The Frontier of Simulation-Based Inference
  • Fast Epsilon-Free Inference
  • Automatic Posterior Transformation
  • Sequential Neural Likelihood
  • Likelihood-Free MCMC with Amortized Ratios
  • Mining Gold from Simulators
  • Simulation-Based Calibration
  • Benchmarking SBI
  • sbi: A Toolkit for SBI
  • Truncated Proposals for Scalable SBI
  • Classifier Two-Sample Tests
  • BayesFlow
  • Information Maximising Neural Networks
  • Neural Approximate Sufficient Statistics
  • Optimal Data Compression for SBI
  • Likelihood Ratios with Calibrated Classifiers
  • ASPIRE: Iterative Amortized Posterior Inference
  • Contrastive Learning for Likelihood-free Inference
  • Mixture Density Networks
  • Flexible Statistical Inference for Mechanistic Models
  • A Kernel Two-Sample Test (MMD)

Assignments/Notes:
  • Assignment 4 due
  • Assignment 5 (a5.zip)
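
Rejection ABC, the starting point of this module, can be previewed directly. In the sketch below, the Gaussian-mean toy problem, the Uniform(-5, 5) prior, the mean summary statistic, and the tolerance eps are all illustrative assumptions: parameters drawn from the prior are kept whenever their simulated summary lands within eps of the observed summary.

    import numpy as np

    rng = np.random.default_rng(0)

    n_obs = 50
    # Illustrative "observed" data, simulated at a known ground-truth mean of 2.0.
    x_obs = rng.normal(2.0, 1.0, size=n_obs)
    s_obs = x_obs.mean()  # summary statistic (sufficient here; a design choice in general)

    def abc_rejection(n_draws=100_000, eps=0.05):
        theta = rng.uniform(-5.0, 5.0, size=n_draws)  # draws from the prior
        x_sim = rng.normal(theta[:, None], 1.0, size=(n_draws, n_obs))  # simulator
        s_sim = x_sim.mean(axis=1)
        return theta[np.abs(s_sim - s_obs) < eps]  # keep near-matching parameters

    posterior = abc_rejection()
    print(posterior.mean(), posterior.std(), posterior.size)

Shrinking eps sharpens the posterior approximation at the cost of a lower acceptance rate; sidestepping that tradeoff is exactly what the neural SBI methods in this module (NPE, NLE, NRE) are designed to do.
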
Weeks 13-15 (Student Paper Presentations)

Presentation topics:
  • Rethinking aleatoric and epistemic uncertainty
  • Convex potential flows
  • Neural spline flows
  • Black box variational inference
  • Scaling rectified flow transformers
  • Elucidating the design space of diffusion models
  • Energy-based transformers
  • Flow matching on general geometries
  • Adaptive symmetrization of the KL divergence
  • Normalizing flows are capable generative models
  • Unifying summary statistics for ABC
  • Compositional score modeling for SBI
  • Contrastive neural ratio estimation

Resources:
  • Grading rubric
  • List of papers

Assignments/Notes:
  • Assignment 5 due