Korean AI Association



Prof. Ernest K. Ryu (Seoul National University)
Title: Optimization for ML


Optimization plays a fundamental role in training neural networks in machine learning. In this tutorial, we will introduce the fundamentals of gradient- and stochastic-gradient-based optimization algorithms. We will discuss the convergence analysis of SGD in the convex and non-convex setups; continuous-time analyses of SGD through ODEs and SDEs; and momentum- and non-momentum-based accelerations.
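As a toy illustration of the SGD convergence the tutorial analyzes, the sketch below minimizes a one-dimensional strongly convex quadratic using noisy gradients and the classic 1/t diminishing step size. The objective, noise level, and schedule are illustrative assumptions, not from the tutorial itself:

```python
import random

def sgd(grad, x0, steps=5000, seed=0):
    """Plain SGD with a diminishing 1/t step size, as in the convex analysis."""
    random.seed(seed)
    x = x0
    for t in range(1, steps + 1):
        # Stochastic gradient: true gradient plus zero-mean Gaussian noise.
        g = grad(x) + random.gauss(0.0, 0.1)
        x -= (1.0 / t) * g
    return x

# f(x) = 0.5 * (x - 3)^2 has gradient x - 3 and minimizer x* = 3.
x_star = sgd(lambda x: x - 3.0, x0=0.0)
```

Despite the gradient noise never vanishing, the diminishing step size averages it out, and the iterate lands close to the true minimizer.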



Gene Golub Best Thesis Award, 2016 
Assistant Professor, Department of Mathematical Sciences, Seoul National University 
Assistant Adjunct Professor, Department of Mathematics, University of California, Los Angeles 
Ph.D. in Computational and Mathematical Engineering, Stanford University 




Prof. Jong Chul Ye (KAIST Kim Jaechul Graduate School of AI)
Title: Diffusion Models


Recently, score-based models and denoising diffusion probabilistic models (DDPMs) have gained wide interest as a new class of generative models that achieve surprisingly high sample quality without adversarial training. Among many works, Song et al. (2021b) generalized discrete score-matching procedures to a continuous stochastic differential equation (SDE), a framework that in fact also subsumes diffusion models. Score-based models perturb the data distribution according to the forward SDE by injecting Gaussian noise, arriving at a tractable distribution (e.g., an isotropic Gaussian distribution). To sample from the data distribution, one can train a neural network to estimate the gradient of the log data density (i.e., the score function, ∇x log p(x)) and use it to solve the reverse SDE numerically. However, diffusion models are inherently slow to sample from, requiring a few thousand iterations to generate images from pure Gaussian noise. In this tutorial, we review the basic mathematical principles of score-based diffusion models and their applications to various computer vision tasks, such as image synthesis, translation, and inverse problems. We then review recent research on accelerating the sampling process.
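The score-based sampling idea can be illustrated in one dimension, where the Gaussian-perturbed data distribution, and hence its score, is available in closed form. The toy data distribution N(2, 1), the noise schedule, and the step sizes below are illustrative assumptions; a real score-based model would replace the analytic score with a trained neural network and typically discretize the reverse SDE rather than run annealed Langevin dynamics:

```python
import math
import random

random.seed(0)

MU, SIGMA_DATA = 2.0, 1.0  # toy data distribution: N(2, 1)

def score(x, sigma):
    """Analytic score of the perturbed density N(MU, SIGMA_DATA^2 + sigma^2)."""
    return -(x - MU) / (SIGMA_DATA**2 + sigma**2)

def sample(n_levels=10, n_steps=100):
    """Annealed Langevin dynamics: noise levels decrease geometrically."""
    x = random.gauss(0.0, 3.0)  # start from a broad prior
    sigmas = [3.0 * (0.1 / 3.0) ** (i / (n_levels - 1)) for i in range(n_levels)]
    for sigma in sigmas:
        var = SIGMA_DATA**2 + sigma**2
        eps = 0.05 * var  # step size shrinks with the noise level
        for _ in range(n_steps):
            z = random.gauss(0.0, 1.0)
            x = x + 0.5 * eps * score(x, sigma) + math.sqrt(eps) * z
    return x

samples = [sample() for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical mean and variance of the samples approach those of the data distribution, even though the chain only ever evaluates the score function. The slow, many-step iteration visible here is exactly the sampling cost that the acceleration methods reviewed in the tutorial aim to reduce.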


Jong Chul Ye is a Professor at the Kim Jaechul Graduate School of Artificial Intelligence (AI) of Korea Advanced Institute of Science and Technology (KAIST), Korea. He received the B.Sc. and M.Sc. degrees from Seoul National University, Korea, and the Ph.D. from Purdue University, West Lafayette. Before joining KAIST, he worked at Philips Research and GE Global Research in New York. He has served as an associate editor of IEEE Trans. on Image Processing and an editorial board member for Magnetic Resonance in Medicine. He is currently an associate editor for IEEE Trans. on Medical Imaging and a Senior Editor of IEEE Signal Processing Magazine. He is an IEEE Fellow, was the Chair of the IEEE SPS Computational Imaging TC, and was an IEEE EMBS Distinguished Lecturer. He was a General Co-chair (with Mathews Jacob) of the IEEE Symposium on Biomedical Imaging (ISBI) 2020. His research interest is in machine learning for biomedical imaging and computer vision.



Prof. Sungsoo Ahn (POSTECH)
Title: Graph Neural Network


Deep learning has produced significant breakthroughs in several fields such as computer vision, natural language processing, and speech recognition. Such research is based on neural networks designed for Euclidean-structured data such as images, text, or acoustic signals. In this tutorial, I will introduce geometric deep learning (GDL): a framework that generalizes neural networks to non-Euclidean structured data such as graphs and manifolds. I will also describe how GDL helps solve drug discovery problems as a framework for handling molecular graphs and 3D protein structures. In particular, I will briefly introduce the use of GDL for molecular property prediction, molecular design, retrosynthesis, and protein structure prediction (such as AlphaFold).
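The core operation behind graph neural networks is message passing: each node updates its feature by combining it with an aggregate of its neighbors' features. A minimal sketch on a hypothetical 4-node graph with scalar features follows; real GDL layers replace the fixed mixing weights with learned weight matrices and nonlinearities:

```python
# Adjacency lists of a toy 4-node graph and scalar node features.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
h = {0: 1.0, 1: 0.0, 2: 0.0, 3: 4.0}

def mp_layer(h, adj, self_w=0.5, nbr_w=0.5):
    """One message-passing round: mix each node's own feature
    with the mean of its neighbors' features."""
    out = {}
    for v, nbrs in adj.items():
        nbr_mean = sum(h[u] for u in nbrs) / len(nbrs)
        out[v] = self_w * h[v] + nbr_w * nbr_mean
    return out

h1 = mp_layer(h, adj)  # features after one round of neighborhood aggregation
```

Because the update only depends on the adjacency structure, the same layer applies to graphs of any size or shape, which is what makes the framework suitable for molecular graphs.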



Assistant Professor, Department of Computer Science and Engineering and Graduate School of Artificial Intelligence, POSTECH 
Research Associate, MBZUAI 
Ph.D. in Electrical Engineering, KAIST 





Prof. Jaegul Choo (KAIST Kim Jaechul Graduate School of AI)
Title: Bias and Domain Adaptation in Computer Vision


In image classification models, the bias problem refers to a model performing classification by relying not on the intrinsic attributes of a class (e.g., for the bird class, a bird's wings or beak) but on incidental attributes that frequently co-occur with the class in the training data (e.g., the blue-sky background of a flying bird, or the tree a bird is perched on). This tutorial introduces various methods for addressing such image bias problems and discusses future research directions.
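One common family of debiasing methods reweights the training loss so that rare "bias-conflicting" samples (e.g., birds not photographed against the sky) count more, preventing the classifier from minimizing the loss through the incidental attribute alone. A minimal sketch with hypothetical (class, background) group labels, using inverse-frequency weights as an illustrative choice:

```python
from collections import Counter

# Hypothetical training set where the (bird, sky) pairing dominates.
groups = [("bird", "sky")] * 90 + [("bird", "ground")] * 10

counts = Counter(groups)
n = len(groups)

# Inverse-frequency group weights: rare bias-conflicting samples are
# up-weighted, common bias-aligned samples are down-weighted, while the
# total weight over the dataset stays equal to the number of samples.
weights = {g: n / (len(counts) * c) for g, c in counts.items()}
sample_weights = [weights[g] for g in groups]
```

In practice the spurious attribute is usually unknown, so much of the research the tutorial covers concerns estimating which samples are bias-conflicting without explicit group labels.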



2001: B.S., Electrical Engineering, Seoul National University 
2009: M.S., Electrical and Computer Engineering, Georgia Tech 
2013: Ph.D., Computational Science and Engineering, Georgia Tech 
2020.03 - present: Associate Professor, KAIST Graduate School of AI 
2019.09 - 2020.02: Associate Professor, Department of Artificial Intelligence, Korea University 
2015.03 - 2019.08: Assistant Professor, Department of Computer Science, Korea University 
2011.12 - 2015.02: Research Scientist, Georgia Tech 
2020: IEEE VIS'20 10-Year Test-of-Time Award 
2016: ICDM'16 Best Student Paper Award 
2015: Naver New Faculty Award