

Korean AI Association


Domestic Conference

Tutorials
 
Prof. Dong-Young Lim (UNIST)
 
Title: Stochastic and Multi-Objective Optimization in AI: Theory, Algorithms, and Applications
 
Abs:
This tutorial presents recent advances in stochastic optimization and multi-objective optimization for modern AI systems. The tutorial is structured in two parts. In the first part, we explore stochastic optimization problems arising in AI through the lens of Stochastic Gradient Langevin Dynamics (SGLD). In particular, we discuss its theoretical foundations, examine its applications to multi-period, multi-asset portfolio optimization, and introduce new research directions. In the second part, we present a novel optimization framework, Dual Cone Gradient Descent (DCGD). We discuss its theoretical properties and demonstrate its applicability to real-world problems such as Physics-Informed Neural Networks (PINNs) and machine unlearning.
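For orientation, the sketch below shows the basic SGLD update that the first part builds on: a stochastic gradient step perturbed by Gaussian noise whose scale depends on the step size and an inverse temperature. The function and variable names (sgld_step, grad_fn, beta) are illustrative and not taken from the tutorial materials.

```python
# Minimal illustrative sketch of a Stochastic Gradient Langevin Dynamics (SGLD) step.
# Names here (sgld_step, grad_fn, beta) are illustrative, not from the tutorial.
import numpy as np

def sgld_step(theta, grad_fn, batch, step_size, beta, rng):
    """One SGLD update: a mini-batch gradient step plus scaled Gaussian noise."""
    grad = grad_fn(theta, batch)                      # stochastic gradient estimate
    noise = rng.normal(size=theta.shape)              # injected Gaussian noise
    return theta - step_size * grad + np.sqrt(2.0 * step_size / beta) * noise

# Toy usage: sample around the minimizer of the quadratic loss f(theta) = 0.5 * ||theta||^2.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
for _ in range(1000):
    theta = sgld_step(theta, lambda t, b: t, batch=None, step_size=1e-2, beta=10.0, rng=rng)
print(theta)  # fluctuates near the origin; the fluctuation scale is controlled by beta
```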
 
Bio
Dong-Young Lim is an assistant professor in the Department of Industrial Engineering and the Artificial Intelligence Graduate School at UNIST. He received his B.S., M.S., and Ph.D. degrees in Industrial and Systems Engineering from KAIST, where his doctoral research focused on financial engineering and mathematical finance. He later joined the School of Mathematics at the University of Edinburgh as a Marie Sklodowska-Curie Fellow, conducting research on learning theory in AI. From June to August 2024, he was a visiting researcher in the Optimization Theory Group at the Alan Turing Institute in the UK. His research interests include stochastic analysis, stochastic differential equations (SDEs), and stochastic and multi-objective optimization, with applications in AI and operations research/management science.

 
 
Prof. Min-hwan Oh (Seoul National University)
 
Title:  Contextual Bandits with Combinatorial Actions & Preference Feedback
 
Abs:
Interactive AI systems must often pick a set of options, such as an assortment, a content slate, or a batch of prompts, and learn from the choices users make. This tutorial introduces the contextual bandit with combinatorial actions and preference feedback, a model that captures such settings and now underpins various interactive machine learning applications, including recommender systems and preference-based fine-tuning for large language models. We will trace core ideas, from optimism and Thompson sampling to a recent minimax algorithm for the contextual multinomial logistic bandit. The tutorial will emphasize principled guidelines that bridge theory and practice.
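As a concrete anchor for the optimism and Thompson sampling ideas mentioned above, the sketch below implements Thompson sampling for the plain linear contextual bandit. It is a simplified stand-in, not the combinatorial or multinomial-logistic algorithms covered in the tutorial, and all variable names are illustrative.

```python
# Minimal sketch of Thompson sampling for a *linear* contextual bandit, the simplest
# setting behind the ideas above; the contextual multinomial logistic bandit with
# combinatorial actions requires substantially more machinery.
import numpy as np

d, lam, noise_var = 5, 1.0, 0.25
A = lam * np.eye(d)          # regularized Gram matrix
b = np.zeros(d)              # running sum of reward-weighted features
rng = np.random.default_rng(0)
theta_true = rng.normal(size=d)   # unknown parameter (for simulating rewards)

for t in range(2000):
    contexts = rng.normal(size=(10, d))                       # feature vector per candidate arm
    theta_hat = np.linalg.solve(A, b)                         # ridge estimate
    cov = noise_var * np.linalg.inv(A)
    theta_sample = rng.multivariate_normal(theta_hat, cov)    # posterior sample
    arm = int(np.argmax(contexts @ theta_sample))             # act greedily w.r.t. the sample
    reward = contexts[arm] @ theta_true + np.sqrt(noise_var) * rng.normal()
    A += np.outer(contexts[arm], contexts[arm])               # update sufficient statistics
    b += reward * contexts[arm]
```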
 
Bio
Min-hwan Oh is an Associate Professor in the Graduate School of Data Science at Seoul National University. His research focuses on sequential decision-making under uncertainty, reinforcement learning, bandit algorithms, optimization, statistical machine learning, and their various applications. He received his Ph.D. in Operations Research from Columbia University. His doctoral thesis was recognized as a finalist for the INFORMS George B. Dantzig Dissertation Award and for the INFORMS Applied Probability Society’s Best Student Paper Award. He is a recipient of the Amazon Research Award.
 

 
 
Prof. Kyongwon Kim (Yonsei University)
 
Title: From Classical to Recent Advances in Sufficient Dimension Reduction
 
Abs:
This talk provides a comprehensive overview of Sufficient Dimension Reduction (SDR), beginning with classical methods such as Sliced Inverse Regression (SIR) and Sliced Average Variance Estimation (SAVE). It then expands to nonlinear extensions, including methods that leverage neural networks to approximate complex regression structures. The discussion further extends SDR principles into functional data contexts, addressing the challenges posed by infinite-dimensional data and introducing functional SIR and GSIR (Generalized SIR). Finally, the tutorial connects SDR methods to graphical models, showing how dimension reduction can be integrated into graphical modeling.
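To make the classical starting point concrete, here is a minimal sketch of SIR as usually described: whiten the predictors, slice the response, and eigen-decompose the weighted covariance of slice means. Function and parameter names (sir_directions, n_slices, n_dirs) are illustrative and not from the tutorial.

```python
# Minimal sketch of Sliced Inverse Regression (SIR); names are illustrative.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Estimate SDR directions by eigen-decomposing the covariance of slice means."""
    n, p = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten the predictors: Z = (X - mu) Sigma^{-1/2}
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice the response and average the whitened predictors within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original predictor scale
    vals, vecs = np.linalg.eigh(M)
    top = vecs[:, np.argsort(vals)[::-1][:n_dirs]]
    return Sigma_inv_sqrt @ top   # columns span the estimated central subspace
```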
 
Bio
Kyongwon Kim is an Assistant Professor in the Department of Applied Statistics and Department of Statistics and Data Science at Yonsei University. Prior to joining Yonsei University, he was an Assistant Professor in the Department of Statistics at Ewha Womans University and the Department of Mathematics and Statistics at Wake Forest University. He received his Ph.D. in Statistics from The Pennsylvania State University, advised by Professor Bing Li, and his B.S. in Mathematics from Sogang University. His research interests include sufficient dimension reduction, graphical models, functional data analysis, causal inference, machine learning, and deep learning.

 
Prof. Seulki Lee (UNIST)
 
Title: Advances in On-Device AI: Methods and Applications
 
Abs:
The rapid advancement and widespread adoption of AI, especially deep learning technologies, in resource-constrained environments have driven a growing demand for on-device AI across a wide range of platforms, including mobile devices, IoT nodes, robots, and autonomous vehicles. This tutorial presents a set of innovative strategies and applications that make on-device AI feasible for intelligent tasks once considered out of reach, such as Sora-style text-to-video generation.
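As one illustration of the kind of technique that makes models small enough for such platforms, the sketch below shows symmetric per-tensor int8 weight quantization. This is a generic example for orientation only, not necessarily a method covered in this tutorial.

```python
# Illustrative sketch of symmetric per-tensor int8 weight quantization, a common
# building block for shrinking models for on-device deployment (generic example,
# not a specific method from the tutorial).
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single scale; returns (q, scale)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())  # roughly bounded by scale / 2
```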
 
Bio
Seulki Lee is an assistant professor in the Department of Computer Science and Engineering (CSE) and the Artificial Intelligence Graduate School (AIGS) at Ulsan National Institute of Science & Technology (UNIST), where he leads the Embedded AI Lab. He earned his Ph.D. in Computer Science from the University of North Carolina at Chapel Hill (UNC Chapel Hill). His research focuses on making resource-constrained, real-time, and embedded sensing systems capable of learning, adapting, and evolving, advancing the field of Embedded AI. He has published in leading embedded systems and AI conferences, including OSDI, MobiSys, SenSys, UbiComp, RTAS, IPSN, PerCom, DCOSS, NeurIPS, AAAI, ICML, ICLR, KDD, AISTATS, and ACCV. His contributions have been recognized with multiple awards, including the Outstanding Position Paper Award (ICML 2025), Best Application Paper Award (ACCV 2024), Best Paper Runner-up Award (IPSN 2023), Best Paper Award (AIoTChallenge 2020), and Best Presentation Award (UbiComp 2020).
 
 
Prof. Jae-Pil Heo (Sungkyunkwan University)
 
Title: Text-to-Video Retrieval
 
Abs:
With the rapid rise of video-sharing platforms such as YouTube, massive amounts of content are produced and uploaded daily, expanding information retrieval beyond traditional web search to include video data. This tutorial presents techniques for retrieving videos via natural language queries: first, we introduce partially relevant video retrieval methods that identify videos containing one or more segments related to a text query; next, we discuss video moment retrieval, which localizes the precise temporal interval within a single video that best matches the query; and finally, we address key challenges, particularly scalability in large-scale video search, focusing on memory footprint and inference speed.
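To fix ideas, the sketch below scores videos against a text query in the partially relevant setting by ranking each video by its best-matching segment. It assumes text and video-segment encoders that map into a shared embedding space (the random arrays here are placeholders for encoder outputs) and only illustrates the scoring logic, not a specific method from the tutorial.

```python
# Minimal sketch of partially relevant video retrieval scoring: a video is ranked by
# the cosine similarity of its best-matching segment to the text query embedding.
import numpy as np

def normalize(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def rank_videos(query_emb, segment_embs_per_video):
    """query_emb: (d,); segment_embs_per_video: list of (n_segments_i, d) arrays."""
    q = normalize(query_emb)
    scores = [float(np.max(normalize(segs) @ q)) for segs in segment_embs_per_video]
    return np.argsort(scores)[::-1], scores   # video indices, best match first

# Toy usage with random embeddings standing in for text/video encoder outputs.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(int(rng.integers(4, 12)), 64)) for _ in range(5)]
order, scores = rank_videos(rng.normal(size=64), videos)
print(order, [round(s, 3) for s in scores])
```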
 
Bio
Jae-Pil Heo is an Associate Professor in the Department of Computer Science and Engineering at Sungkyunkwan University (SKKU). He earned his B.S. (2008), M.S. (2010), and Ph.D. (2015) in Computer Science from KAIST, where he was supervised by Prof. Sung-Eui Yoon. Prior to joining SKKU, he conducted research at the Electronics and Telecommunications Research Institute (ETRI).

 
Prof. Eunhyeok Park (POSTECH)
 
Title: Image Generation Model Optimization & Acceleration
 
Abs:
The field of image generation has been advancing rapidly, driven by the remarkable progress of diffusion models and the emergence of autoregressive generative models. These models offer exceptional generation quality, support for multimodal inputs, and a wide range of applications. However, their widespread adoption is often hindered by significant computational costs.
In this tutorial, we will explore state-of-the-art acceleration techniques for these diffusion and autoregressive image generation models and provide hands-on guidance on how to apply them effectively in practice.
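As a small taste of the hands-on portion, the snippet below shows perhaps the simplest acceleration lever for diffusion models: half-precision inference with a fast solver and a reduced number of denoising steps, using the Hugging Face diffusers library. The checkpoint name is only an example, and the tutorial's techniques may go well beyond this.

```python
# Hedged example: accelerate a diffusion pipeline by running in fp16 and cutting the
# number of denoising steps with the DPM-Solver++ scheduler (diffusers library).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint only
    torch_dtype=torch.float16,
).to("cuda")
# Swap the default scheduler for DPM-Solver++, which gives good samples in ~20 steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of a red bicycle", num_inference_steps=20).images[0]
image.save("bicycle.png")
```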
 
Bio
Eunhyeok Park is an Associate Professor at the Graduate School of Artificial Intelligence, POSTECH. He received his B.S. (2014) and M.S. (2015) in Electrical Engineering from POSTECH, and earned his Ph.D. (2020) in Computer Science from Seoul National University under the supervision of Prof. Seungjoo Yoo. He has conducted research at the Semiconductor Research Center at SNU and previously worked with the Mobile Vision team at Meta. Recently, his work has centered on efficient model optimization for large language models, diffusion models, and autoregressive generative models, along with physical AI acceleration for robotics applications.