Diffusion models, which convert noise into new data instances by learning to reverse a Markov diffusion process, have become a cornerstone of generative AI. Motivated by this practical success, there has been a surge of recent activity in leveraging diffusion models as expressive data priors for solving ill-posed inverse problems. However, the theoretical underpinnings of these solvers, particularly regarding reconstruction quality, remain largely unsettled. In this talk, we provide theoretical guarantees showing how diffusion-based solvers converge to the posterior distribution of low-dimensional data priors in polynomially many steps for linear inverse problems. In addition, we introduce a provably robust method for posterior sampling that applies seamlessly to a wide variety of nonlinear inverse problems. Along the way, we will discuss examples that illustrate the promise of diffusion models in scientific applications.
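To make the setting concrete, here is a minimal numpy sketch (illustrative only, not code from the talk) of diffusion-style posterior sampling for a linear inverse problem y = Ax + noise under a Gaussian toy prior, for which the posterior is known in closed form. The guided score is the prior score plus a measurement-consistency term; the annealed Langevin loop, noise schedule, and step sizes are ad hoc choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4
sigma_y = 0.1

# Linear inverse problem: observe y = A x + noise, with a Gaussian N(0, I) toy prior on x
A = rng.normal(size=(m, d)) / np.sqrt(m)
x_true = rng.normal(size=d)
y = A @ x_true + sigma_y * rng.normal(size=m)

# Closed-form Gaussian posterior (available here only because the prior is Gaussian)
post_cov = np.linalg.inv(A.T @ A / sigma_y**2 + np.eye(d))
post_mean = post_cov @ (A.T @ y) / sigma_y**2

def score_prior(x, sigma):
    # Score of the prior convolved with N(0, sigma^2 I): the marginal is N(0, (1 + sigma^2) I)
    return -x / (1.0 + sigma**2)

def posterior_sample(n_steps=200):
    # Annealed Langevin dynamics with a measurement-guidance term (heuristic schedule)
    sigmas = np.geomspace(10.0, 0.01, n_steps)
    x = sigmas[0] * rng.normal(size=d)
    for sigma in sigmas:
        eps = 0.05 * sigma**2  # heuristic step size
        guided = score_prior(x, sigma) + A.T @ (y - A @ x) / (sigma_y**2 + sigma**2)
        x = x + eps * guided + np.sqrt(2 * eps) * rng.normal(size=d)
    return x

samples = np.stack([posterior_sample() for _ in range(100)])
# At sigma -> 0 the guided score equals the exact posterior score, so the
# annealed chain approximately targets the true posterior.
print(np.linalg.norm(samples.mean(axis=0) - post_mean))
```

In this Gaussian toy case the guidance term is exact; for general priors it is an approximation, which is precisely where the theoretical questions discussed in the talk arise.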
Since July 2025, Yuejie Chi has been the Charles C. and Dorothea S. Dilley Professor of Statistics and Data Science at Yale University, with a secondary appointment in Computer Science. She has also spent time as a visiting researcher at FAIR. Before joining Yale, she was the Sense of Wonder Group Endowed Professor of Electrical and Computer Engineering in AI Systems at Carnegie Mellon University, with affiliations in MLD and CyLab. Her research interests lie in the theoretical and algorithmic foundations of data science, generative AI, reinforcement learning, and signal processing, motivated by applications in scientific and engineering domains. The problems her group studies are often interdisciplinary in nature, lying at the intersection of statistics, learning, optimization, and sensing. Her current focus is on improving the performance, efficiency, and reliability of generative AI and decision making in data-intensive but resource-constrained scenarios. Her work has been recognized with a number of awards, including the Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House. She is the inaugural recipient of the IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. In addition, she received the SIAM Activity Group on Imaging Science Best Paper Prize, the IEEE Signal Processing Society Young Author Best Paper Award, and young investigator awards from several agencies, including NSF, ONR, and AFOSR. She is an IEEE Fellow for contributions to statistical signal processing with low-dimensional structures. She was named the Goldsmith Lecturer by the IEEE Information Theory Society in 2021, a Distinguished Lecturer by the IEEE Signal Processing Society for 2022-2023, and a Distinguished Speaker by ACM for 2023-2026.
Nonlinear Random Matrices in Estimation and Learning: Equivalence Principles and Applications
In recent years, new classes of structured and nonlinear random matrices have emerged in statistical estimation and machine learning. Understanding their spectral properties has become increasingly important, as these matrices are closely linked to key quantities such as the training and generalization performance of large neural networks and the fundamental limits of high-dimensional signal recovery. Unlike classical random matrix ensembles, these new matrices often involve nonlinear transformations, introducing additional structural dependencies that pose challenges for traditional analysis techniques.
In this talk, I will present a set of equivalence principles that establish asymptotic connections between various nonlinear random matrix ensembles and simpler linear models that are more tractable for analysis. I will then demonstrate how these principles can be applied to characterize the performance of kernel methods and random feature models across different scaling regimes and to provide insights into the in-context learning capabilities of attention-based Transformer networks.
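As a toy numerical illustration of such an equivalence principle (not taken from the talk), the following numpy snippet checks the Gaussian-equivalence heuristic for ReLU random features: up to small corrections, the nonlinear kernel E[relu(w·x) relu(w·x')] behaves like the linear surrogate mu0^2 + mu1^2 (x·x') for weakly correlated inputs, where mu0 and mu1 are the first two Hermite coefficients of ReLU. All sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hermite (Gaussian) coefficients of ReLU: relu(g) = mu0 + mu1 * g + nonlinear residue
mu0 = 1.0 / np.sqrt(2.0 * np.pi)    # E[relu(g)] for g ~ N(0, 1)
mu1 = 0.5                           # E[g * relu(g)]
mu_star_sq = 0.5 - mu0**2 - mu1**2  # residual variance, since E[relu(g)^2] = 1/2

# Monte Carlo sanity check of the coefficients
g = rng.normal(size=2_000_000)
r = np.maximum(g, 0.0)
print(r.mean(), mu0)
print((g * r).mean(), mu1)

# Equivalence in action: for weakly correlated unit-norm inputs x, x',
# the nonlinear kernel E[relu(w.x) * relu(w.x')] is approximated by the
# linear surrogate mu0^2 + mu1^2 * (x.x'), up to O((x.x')^2) corrections.
d = 200
x = rng.normal(size=d)
x /= np.linalg.norm(x)
xp = rng.normal(size=d)
xp /= np.linalg.norm(xp)
rho = x @ xp                         # O(1/sqrt(d)), hence small

W = rng.normal(size=(50_000, d))     # 50k random features w ~ N(0, I)
emp = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ xp, 0.0))
lin = mu0**2 + mu1**2 * rho
print(emp, lin)
```

The residual term mu_star_sq is what the equivalence principles replace by an independent Gaussian component, which is why the spectra of such nonlinear kernel matrices match those of far simpler linear-plus-noise ensembles.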
Yue M. Lu is a Harvard College Professor and Gordon McKay Professor of Electrical Engineering and Applied Mathematics at Harvard University. He has also held visiting appointments at Duke University (2016) and the Ecole Normale Superieure (ENS) in Paris (2019). His research focuses on the mathematical foundations of high-dimensional statistical estimation and learning. His contributions have been recognized with several best paper awards (IEEE ICIP, ICASSP, and GlobalSIP), the ECE Illinois Young Alumni Achievement Award (2015), and the IEEE Signal Processing Society Distinguished Lectureship (2022). He is a Fellow of the IEEE (Class of 2024).
Bridging High-Dimensional Statistics and Decentralized Optimization: New Perspectives on Inference over Networks
The rapid proliferation of decentralized network architectures, such as mesh networks, has sparked growing interest in efficiently solving large-scale statistical learning tasks, especially when data is inherently distributed and lacks centralized oversight. Performing accurate statistical inference in such environments is a nontrivial task, particularly under stringent constraints on computational power, time, and inter-node communication. While statistical-computational trade-offs have been thoroughly characterized for high-dimensional inference in centralized settings, our understanding of these trade-offs within decentralized network environments remains comparatively limited. Indeed, methodologies that demonstrate robustness and accuracy in traditional low-dimensional contexts frequently underperform in high-dimensional regimes, and theoretical convergence results often fail to align with observed empirical behaviors. This divergence is largely attributable to the historical emphasis on optimization-centric design and analysis of decentralized algorithms, often overlooking critical statistical nuances. In this talk, we will introduce new algorithmic frameworks and analytical tools specifically tailored for decentralized high-dimensional inferential tasks. By integrating statistical insights into the design and analysis of decentralized optimization algorithms, we shed new light on existing gaps and misconceptions prevalent in the literature, thereby redefining our understanding of distributed inference methodologies.
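For concreteness, here is a minimal sketch (illustrative, not the algorithms of the talk) of one standard decentralized scheme, gradient tracking over a ring graph with a doubly stochastic mixing matrix, solving a distributed least-squares problem; the problem sizes and step size eta are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, d, m = 6, 5, 20

# Each node i privately holds f_i(x) = ||A_i x - b_i||^2 / (2m); the network
# must minimize the average of the f_i without a central coordinator.
A = [rng.normal(size=(m, d)) for _ in range(n_nodes)]
x_gen = rng.normal(size=d)
b = [Ai @ x_gen + 0.1 * rng.normal(size=m) for Ai in A]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i]) / m

# Doubly stochastic mixing matrix for a ring graph (uniform weights)
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1.0 / 3.0

# Gradient tracking: x_{k+1} = W x_k - eta * y_k,
#                    y_{k+1} = W y_k + grad(x_{k+1}) - grad(x_k)
eta = 0.1
x = np.zeros((n_nodes, d))               # row i is node i's local iterate
g = np.stack([grad(i, x[i]) for i in range(n_nodes)])
y = g.copy()                             # y tracks the network-average gradient
for _ in range(1500):
    x = W @ x - eta * y
    g_new = np.stack([grad(i, x[i]) for i in range(n_nodes)])
    y = W @ y + g_new - g
    g = g_new

# Compare against the centralized solution of the pooled problem
x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.max(np.linalg.norm(x - x_central, axis=1)))
```

In this low-dimensional, well-conditioned toy problem all nodes reach consensus on the centralized solution; the talk's point is precisely that such optimization-centric guarantees can break down in high-dimensional statistical regimes.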
Gesualdo Scutari is the Pedro and Granadillo Professor in the School of Industrial Engineering and Electrical and Computer Engineering at Purdue University, West Lafayette, IN, USA. His research interests focus on continuous optimization--particularly distributed and stochastic methods--equilibrium programming, and their applications in signal processing and statistical learning. Among others, he was a recipient of the 2013 NSF CAREER Award, the 2015 IEEE Signal Processing Society Young Author Best Paper Award, and the 2020 IEEE Signal Processing Society Best Paper Award. He served as an IEEE Signal Processing Society Distinguished Lecturer (2023-2024), and has served on the editorial boards of several IEEE journals. He is currently an Associate Editor for the SIAM Journal on Optimization. He is a Fellow of IEEE.
Integrated Sensing and Communications via Beamforming
Consider an integrated sensing and communication (ISAC) system in which a base station seeks to minimize the Cramer-Rao bound of a parameter estimation problem while satisfying quality-of-service constraints for communication users via spatial beamforming. How many simultaneous beamformers should be used, and how should they be designed? We answer the first question by investigating rank-reduction strategies for the semidefinite programming relaxation solution, showing that the minimum number of sensing beamformers scales at most linearly in the number of parameters to be estimated. For the second, we propose an optimization framework that transforms the Cramer-Rao bound minimization problem into an equivalent max-min formulation and extends the uplink-downlink duality result for the classical multiuser MIMO communications problem to the ISAC setting. This yields a considerably more efficient iterative procedure for solving the optimal beamforming problem for ISAC than the semidefinite relaxation approach, while still guaranteeing convergence to the globally optimal solution.
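A toy numpy sketch of the rank-reduction idea (illustrative only; the Fisher-information matrix below is made up for the example, not the talk's model): if the semidefinite relaxation returns a transmit covariance R of rank r, factoring R yields r beamformers that reproduce R exactly, so any sensing metric depending on R alone, such as a Cramer-Rao bound of the form tr(F(R)^{-1}), is preserved.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 8, 3   # transmit antennas, rank of the relaxed solution

# Stand-in for an SDR solution: a PSD transmit covariance R of rank r
B = rng.normal(size=(n, r)) + 1j * rng.normal(size=(n, r))
R = B @ B.conj().T

# Rank reduction: recover r beamformers from the eigen-decomposition of R
vals, vecs = np.linalg.eigh(R)
keep = vals > 1e-9
W = vecs[:, keep] * np.sqrt(vals[keep])   # columns of W are the beamformers w_k

# The beamformers reproduce the covariance exactly: sum_k w_k w_k^H = R,
# so any sensing metric that depends on R alone is unchanged.
assert np.allclose(W @ W.conj().T, R)

# Hypothetical Fisher information F(R) = Re(D^H R D) for a made-up derivative
# matrix D (two parameters); the resulting CRB tr(F^{-1}) depends only on R.
D = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
F = np.real(D.conj().T @ R @ D)
crb = np.trace(np.linalg.inv(F))
print(W.shape[1], crb)
```

This captures only the covariance-factorization step; the talk's actual rank-reduction argument must additionally respect the per-user communication constraints.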
Wei Yu received the B.A.Sc. degree in computer engineering and mathematics from the University of Waterloo, Canada, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, U.S.A. He is a Professor and Canada Research Chair in Information Theory and Wireless Communications in the Electrical and Computer Engineering Department at the University of Toronto in Canada. Prof. Wei Yu is a Fellow of IEEE and a Fellow of the Canadian Academy of Engineering. He was the recipient of the IEEE Marconi Prize Paper Award in Wireless Communications in 2019, the IEEE Communications Society Award for Advances in Communication in 2019, the IEEE Signal Processing Society Best Paper Award in 2008, 2017, and 2021, the IEEE Communications Society and Information Theory Society Joint Paper Award in 2024, and the R. A. Fessenden Award from IEEE Canada in 2024. Prof. Wei Yu is a Clarivate Highly Cited Researcher. He served as the President of the IEEE Information Theory Society in 2021.