Farhad Pourkamali Anaraki
Abstract
Neural network regression models often provide only point estimates, offering no insight into the confidence associated with their predictions. This tutorial-style talk introduces modern approaches for estimating predictive uncertainty in neural networks, beginning with model-free methods such as quantile regression, then moving to probabilistic neural networks (PNNs) that explicitly predict the parameters of output distributions via maximum likelihood estimation, and finally to evidential approaches that infer distributional uncertainty directly from data. While Gaussian-based PNNs are widely used, they often produce overly broad prediction intervals and are sensitive to outliers and heavy-tailed data. To address these challenges, we present our recent work on t-Distributed Neural Networks, which model outputs using the Student's t-distribution parameterized by location, scale, and degrees of freedom, enabling adaptive handling of heavy tails and improving robustness to non-Gaussian data.
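
As a concrete illustration ahead of the talk, here is a minimal sketch of two of the ideas above: a pinball loss for quantile regression, and a probabilistic regressor that predicts the location, scale, and degrees of freedom of a Student's t output and is trained by maximum likelihood. This is not the speaker's implementation; the network architecture, the softplus positivity constraints, the df offset, and the synthetic data are illustrative assumptions, written in PyTorch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.distributions import StudentT

    def pinball_loss(pred, target, tau=0.9):
        # Quantile (pinball) loss: trains a point output toward the tau-quantile.
        err = target - pred
        return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

    class StudentTRegressor(nn.Module):
        # Predicts (location, scale, degrees of freedom) of a Student's t output.
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
            self.head = nn.Linear(hidden, 3)  # raw (loc, scale, df)

        def forward(self, x):
            loc, raw_scale, raw_df = self.head(self.body(x)).unbind(dim=-1)
            scale = F.softplus(raw_scale) + 1e-6  # scale must be positive
            df = F.softplus(raw_df) + 1.0         # keep df > 1 (illustrative constraint)
            return loc, scale, df

    def t_nll(loc, scale, df, y):
        # Maximum-likelihood objective: negative log-likelihood under Student's t.
        return -StudentT(df, loc, scale).log_prob(y).mean()

    # Usage sketch on synthetic data.
    model = StudentTRegressor(in_dim=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(128, 8), torch.randn(128)
    for _ in range(100):
        opt.zero_grad()
        loss = t_nll(*model(x), y)
        loss.backward()
        opt.step()

Because the degrees-of-freedom parameter is learned per input, the predicted distribution can shift between heavy-tailed (small df) and near-Gaussian (large df) behavior, which is the adaptivity the abstract refers to.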
When: January 30, 2026
Where: North Classroom 1535
Time: 11:00 am - 12:00 pm
