Soo Min Kwon
I am a final-year Ph.D. student in the Department of Electrical and Computer Engineering at the University of Michigan, Ann Arbor, advised by Prof. Laura Balzano and Prof. Qing Qu. Previously, I received my M.S. and B.S. degrees from Rutgers University, where I worked with Prof. Anand D. Sarwate.
I am interested in a wide range of problems, from theoretical deep learning to practical and efficient algorithms for generative models. I have worked on post-training algorithms for LLM reasoning tasks, theoretical analyses of in-context learning, and diffusion models for solving inverse problems.
news
| Date | News |
|---|---|
| May 07, 2026 | Our paper, Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective, won the best paper award at the AISTATS 2026 Workshop for Causality in the Age of AI Scaling! |
| Apr 01, 2026 | My internship work at Google Research, CoDistill-GRPO: A Co-Distillation Recipe for Efficient Group Relative Policy Optimization, is now released! |
| Jan 22, 2026 | Our paper, Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective, was accepted to AISTATS 2026! |
selected publications
- **AISTATS 2026** — Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective. *International Conference on Artificial Intelligence and Statistics (AISTATS), 2026.*
- **ICLR 2025** — Learning Dynamics of Deep Matrix Factorization Beyond the Edge of Stability. *The Thirteenth International Conference on Learning Representations (ICLR), 2025.*
- **NeurIPS 2024** — BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference. *The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.*
- **AISTATS 2024** — Efficient Compression of Overparameterized Deep Models through Low-Dimensional Learning Dynamics. *International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.*
- **ICLR 2024** — Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency. *The Twelfth International Conference on Learning Representations (ICLR Spotlight, Top 5%), 2024.*