Korean AI Association

2022 AI Winter Short Course



Speakers and Abstracts
Prof. Jinyoung Yeo / Yonsei University
Title: [NLP1] Tutorial: from text to knowledge 

Abs: For beginners in natural language processing, this talk covers the overall scope of natural language processing, from text classification based on RNN and LSTM to dialogue agents based on pre-trained language models. 


Bio: Jinyoung Yeo is an assistant professor of Artificial Intelligence at Yonsei University. Prior to joining Yonsei University, he was a research scientist at SK T-Brain, after receiving his PhD from POSTECH. For more information, please visit http://convei.weebly.com

Prof. Seung-won Hwang / Seoul National University
Title: [NLP2] Data Intelligence for Robust NLP

Abs: Training AI models can be viewed as interacting human intelligence with model intelligence. From this view, current norms of model training are constrained to crowdsourcing label annotations for a closed set. Such constrained interactions may explain why models solve datasets, instead of pursuing true learning goals. This talk discusses our recent work, showing how data intelligence research is relevant to enriching interactions between human and model intelligence, for robust training of NLP models that generalize well. More details can be found from http://seungwonh.github.io


Bio: Seung-won Hwang is a Professor of Computer Science and Engineering at Seoul National University. Prior to joining SNU, she was a faculty member at POSTECH and Yonsei University, after her PhD from UIUC. Her research interests concern the interaction between data and language intelligence. Her work has been published at top-tier AI, DB/DM, and IR/NLP venues, including ACL, AAAI, IJCAI, NAACL, SIGIR, SIGMOD, VLDB, and ICDE. Her contributions have been recognized by awards from WSDM and Microsoft Research.


Prof. Dongwoo Kim / POSTECH
Title: Generative model: From VAE to normalizing flows

Abs: Deep generative models have emerged as an essential tool in modern machine learning systems. This talk introduces the basic concepts of deep generative models, including the variational autoencoder (VAE) and normalizing flows, and also covers recent advances in generative models for irregular data such as graphs.
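As a toy illustration of two VAE ingredients mentioned above (not part of the lecture materials; the function names and shapes are assumptions made for this sketch), the KL term of the evidence lower bound for a diagonal Gaussian posterior and the reparameterization trick fit in a few lines of NumPy:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions;
    # this is the regularization term of the VAE's evidence lower bound (ELBO).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which makes the sampled latent differentiable w.r.t. (mu, logvar).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# A posterior equal to the standard normal prior has zero KL.
kl_at_prior = gaussian_kl(np.zeros(8), np.zeros(8))
```

In a full VAE, `gaussian_kl` is added to a reconstruction loss and both are minimized jointly; normalizing flows instead pass a simple base sample through invertible transformations and track the log-determinant of their Jacobians.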


Bio: Dongwoo Kim has been an assistant professor at the Graduate School of AI at POSTECH since 2019. Before joining POSTECH, he worked at the Australian National University as a Lecturer (assistant professor) and a postdoctoral researcher from 2015 to 2019. He received his Ph.D. from KAIST in 2015. His research interests include generative models, representation learning, and drug design.


Prof. Se-Young Yun / KAIST
Title: Optimization for ML

Abs: Most machine learning methods proceed by defining an objective function and then optimizing it. To build an understanding of the optimization techniques used by recent machine learning algorithms, this lecture starts from convex optimization and explains basic optimization theory, and then covers techniques that play an important role in recent machine learning algorithms: deep learning optimizers based on gradient descent and stochastic gradient descent, such as ADAM and RMSProp; distributed optimization techniques used in settings such as federated learning; and gradient-free methods such as black-box optimization.
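As a concrete instance of the gradient-based optimizers mentioned above, here is a minimal NumPy sketch of Adam applied to a convex quadratic (illustrative only; `adam_minimize` is a name chosen for this sketch, and the hyperparameters are the commonly used defaults):

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    # Plain Adam: exponential moving averages of the gradient (m) and of its
    # elementwise square (v), each bias-corrected before the update.
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)   # bias correction
        v_hat = v / (1 - beta2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize the convex quadratic f(x) = ||x||^2, whose gradient is 2x.
x_star = adam_minimize(lambda x: 2 * x, x0=[3.0, -2.0])
```

RMSProp corresponds to dropping the first-moment average `m` and the bias corrections, keeping only the squared-gradient scaling; plain SGD drops the scaling as well.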




Bio:
KAIST Graduate School of AI, Associate Professor (2017.7 – )
Los Alamos Research Lab, Postdoctoral Researcher (2016.4 – 2017.7)
Microsoft (Cambridge), Visiting Researcher (2015.6 – 2016.3)
Microsoft–INRIA, Postdoctoral Researcher (2014.4 – 2015.4)
KTH (Sweden), Postdoctoral Researcher (2013.2 – 2014.3)


Prof. Jiyong Park / University of North Carolina at Greensboro

Title: An Introduction to Causal Inference: Potential Outcome and Directed Acyclic Graph Approaches to Causality
Abs: Many questions not only in social science but also in AI research are causal in nature: what would happen to individuals, organizations, or society if part of their environment were changed? For example, the impact (or lack thereof) of increases in the minimum wage on employment, the effect of introducing a recommender system on product sales, or the effect of replacing a legacy system with AI-based smart systems on firm performance. Causal inference encompasses statistical and computational methods for studying such questions and determining what causes what. Across multiple disciplines, causal inference has largely been built on two frameworks: (i) the potential outcome framework, also known as the Rubin causal model (design-based approach), and (ii) the structural causal model (graph-based approach). This lecture introduces these related but distinct approaches to causal inference, focusing on research that uses observational data to identify a causal effect. It also discusses the key differences between the two causal models in pursuit of connecting the two worlds of causal inference.
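The potential outcome framework mentioned above can be illustrated with a tiny simulation (entirely made up for this sketch; the numbers are arbitrary): under random assignment, a simple difference in group means recovers the average treatment effect (ATE).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated potential outcomes: Y(1) = Y(0) + 2, so the true ATE is 2.
y0 = rng.normal(loc=1.0, scale=1.0, size=n)
y1 = y0 + 2.0

# Random assignment makes treatment independent of (Y(0), Y(1)), so the
# simple difference in observed group means identifies the ATE.
t = rng.integers(0, 2, size=n)
y_obs = np.where(t == 1, y1, y0)  # only one potential outcome is ever observed

ate_hat = y_obs[t == 1].mean() - y_obs[t == 0].mean()
```

With observational data, the same difference in means is generally biased by confounding; that is where the identification strategies covered in the lecture, such as the adjustment sets implied by a directed acyclic graph, come in.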


Bio: Jiyong Park is an assistant professor of information systems at the Bryan School of Business and Economics, the University of North Carolina at Greensboro. He received his Ph.D. from KAIST in 2019. He has organized Korea Summer Session on Causal Inference since 2017 (https://sites.google.com/view/causal-inference2021). More information can be found at https://jiyong-park.github.io.

Prof. Sung Ju Hwang / KAIST
Title: Federated Learning

Abs: Federated learning is a machine learning setting that aims to obtain a global model by aggregating the models, or other knowledge, from local clients that train on their private data, without compromising data privacy. In this lecture, we will learn the basic concepts of federated learning and its challenges, as well as the most essential federated learning algorithms.
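The aggregation step at the heart of the FedAvg algorithm, the canonical example of the model aggregation described above, can be sketched in NumPy (a minimal illustration; flattened parameter vectors stand in for real model weights):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # FedAvg aggregation: average client parameter vectors, weighted by
    # the number of local training examples on each client.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    stacked = np.stack(client_weights)  # shape: (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two clients; the second holds 3x more data, so it gets 3x the weight.
global_w = fedavg(
    client_weights=[np.array([1.0, 0.0]), np.array([3.0, 2.0])],
    client_sizes=[10, 30],
)  # -> [2.5, 1.5]
```

In a full FedAvg round, each client first runs several epochs of local SGD on its private data before sending its updated parameters (never the data itself) to the server for this weighted average.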



Bio:
2020/01 - Present: Associate Professor, Graduate School of AI and School of Computing, KAIST
2018/01 - 2019/12: Assistant Professor, School of Computing and Graduate School of AI, KAIST
2014/08 - 2017/12: Assistant Professor, School of Electronic and Computer Engineering, UNIST
2013/09 - 2014/08: Postdoctoral Research Associate, Disney Research 
2013/08: Ph.D. in Computer Science, University of Texas at Austin


Prof. Seungryul Baek / UNIST

Title: [CV1] Introduction to deep learning-based computer vision topics

Abs: For beginners in computer vision, this lecture covers recent deep learning-based computer vision applications, from image classification, object detection, and semantic segmentation to their extensions to 3D reconstruction and temporal action recognition from RGB-D data.


Bio: Seungryul Baek has been an assistant professor in the Department of Computer Science and Engineering (CSE) and the Artificial Intelligence Graduate School (AIGS) at UNIST since April 2020. He obtained his B.S. (2009) and M.S. (2011) degrees from the Department of Electrical Engineering at KAIST and his Ph.D. (2020) from the Department of Electrical and Electronic Engineering at Imperial College London, UK. Before starting his Ph.D., he worked at the DMC Research Center of Samsung Electronics for four years (2011.2 – 2015.2).


Prof. Eun-Sol Kim / Hanyang University

Title: [CV2] Large-scale image and video understanding with transformers

Abs: In this talk, self-supervised learning methods for large-scale image and video data will be introduced. First, a theoretical understanding of the contrastive loss and its variants, which form the basis of recent self-supervised learning, will be provided. Recent approaches leveraging Transformer architectures for large-scale real-world image/video datasets are also reviewed. Interesting applications of transformer-based self-supervised learning methods will be covered, including image recognition, image generation, multimodal learning, visual question answering, video (action) recognition, and video moment retrieval.
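The contrastive (InfoNCE-style) loss underlying the methods above can be sketched as follows (an illustrative NumPy version, not from the talk; real implementations operate on batches of embeddings from two augmented views):

```python
import numpy as np

def info_nce(anchor, candidates, pos_index, temperature=0.1):
    # InfoNCE-style contrastive loss: cross-entropy over cosine similarities,
    # where the positive candidate should outscore all negatives.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = c @ a / temperature
    logits = logits - logits.max()  # shift for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[pos_index]
```

The loss is small when the positive candidate is close to the anchor and the negatives are far away; the temperature controls how sharply the similarities are contrasted.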


Bio: Eun-Sol Kim is an Assistant Professor at the Department of Computer Science, Hanyang University. Before joining Hanyang University, she was a Research Scientist at Kakao Brain. She received her B.S. and Ph.D. degrees from CSE at SNU.


Prof. Byung-Jun Lee / Korea University

Title: [RL1] Reinforcement Learning Basics 


Abs: Reinforcement learning is a mathematical framework for data-driven sequential decision making. In recent years, deep reinforcement learning algorithms, combining RL with the powerful expressiveness of artificial neural networks, have proven capable of solving a variety of challenging problems. This lecture gives an overview of reinforcement learning, from formulating problems as Markov decision processes (MDPs) to the Q-learning and DQN algorithms.
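The Q-learning update at the core of this progression can be illustrated on a toy problem (everything below, including the two-state environment, is a made-up example for this sketch, not lecture material):

```python
import numpy as np

# Toy 2-state MDP: in state 0, action 1 moves to state 1 (reward 0);
# in state 1, action 1 yields reward 1 and ends the episode;
# action 0 always stays in place with reward 0.
def step(state, action):
    if state == 0:
        return (1, 0.0, False) if action == 1 else (0, 0.0, False)
    return (0, 1.0, True) if action == 1 else (1, 0.0, False)

rng = np.random.default_rng(0)
q = np.zeros((2, 2))                   # Q-table: q[state, action]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # step size, discount, exploration rate

for _ in range(500):                   # episodes
    s = 0
    for _ in range(20):                # step limit per episode
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < epsilon else int(q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state
        target = r + (0.0 if done else gamma * q[s2].max())
        q[s, a] += alpha * (target - q[s, a])
        if done:
            break
        s = s2
```

DQN replaces the table `q` with a neural network and adds experience replay and a target network, but minimizes essentially the same bootstrapped squared error.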


Bio: Byung-Jun Lee is currently an assistant professor in the Department of Artificial Intelligence at Korea University. He is also a part-time applied scientist at Gauss Labs Inc. He obtained his Ph.D. degree in Computer Science from KAIST in 2021, receiving the outstanding thesis award. He is mainly interested in designing efficient offline reinforcement learning algorithms with applications to natural language processing.



Prof. Jinwoo Shin / KAIST

Title: [RL2] Policy gradients in deep reinforcement learning



Abs: Policy gradient methods are a class of reinforcement learning (RL) techniques that optimize parametrized policies with respect to the long-term cumulative reward by gradient ascent. Compared to value-based methods in RL, they have shown superior performance on tasks with continuous state and action spaces. In this lecture, I will introduce the fundamentals of these methods. First, I will explain why the policy objective can be a particularly difficult optimization problem. Then, I will present existing tools to relax the challenges: (a) standard tools from optimization and (b) tools leveraging more knowledge about the objective. Recent policy gradient schemes including TRPO, PPO, DDPG, and SAC will be covered.
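The core estimator behind this family, REINFORCE in its simplest form, can be sketched on a two-armed bandit (an illustrative toy, not lecture material; the schemes named above add trust regions, clipping, or learned critics on top of this idea):

```python
import numpy as np

# REINFORCE on a two-armed bandit with a softmax policy pi = softmax(theta).
# For a softmax, grad log pi(a) = onehot(a) - pi, and the update ascends
# expected reward along reward-weighted log-probability gradients.
rng = np.random.default_rng(0)
theta = np.zeros(2)
arm_rewards = np.array([0.0, 1.0])  # arm 1 is the better action
lr = 0.1

for _ in range(500):
    pi = np.exp(theta) / np.exp(theta).sum()
    a = rng.choice(2, p=pi)          # sample an action from the policy
    r = arm_rewards[a]
    grad_log_pi = np.eye(2)[a] - pi
    theta += lr * r * grad_log_pi    # gradient ascent step

pi = np.exp(theta) / np.exp(theta).sum()
```

After training, the policy concentrates its probability mass on the rewarding arm; subtracting a baseline from `r` (as actor-critic methods do) reduces the variance of this estimator without biasing it.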

Bio: Jinwoo Shin is currently an associate professor (jointly affiliated) in the Kim Jaechul Graduate School of AI at KAIST. He is also a KAIST endowed chair professor. He obtained his Ph.D. degree (in Mathematics) from the Massachusetts Institute of Technology in 2010 with the George M. Sprowls Award (for the best MIT CS Ph.D. theses). He was a postdoctoral researcher at the Georgia Institute of Technology in 2010–2012 and at IBM T. J. Watson Research in 2012–2013. Dr. Shin's early work was mostly on applied probability and theoretical computer science. After joining KAIST in Fall 2013, he started to work on the algorithmic foundations of machine learning, and he is now one of the most prolific AI researchers, having published more than 50 papers at top AI conferences in the last three years.


Prof. Sung Whan Yoon / UNIST

Title: Meta-learning tutorial: From few-shot classification to real-world applications

Abs: In this talk, the basic concept of meta-learning and the principles of few-shot learning will be introduced. Related recent research topics within the meta-learning framework, including few-shot continual learning, semantic segmentation, and meta-learning-based federated learning, will also be discussed.


Bio: Sung Whan Yoon has been an assistant professor at the Graduate School of AI at UNIST since 2020. Before joining UNIST, he received his Ph.D. degree from KAIST in 2017. From 2017 to 2020, he worked at KAIST as a postdoctoral researcher. His research interests include meta-learning, continual learning, federated learning, and intelligent communication systems.


Prof. Taesup Moon / Seoul National University

Title:  Recent Research Trend in Continual Learning 

Abs: This lecture surveys recent research trends and future directions in continual learning. We first review the three problem settings of continual learning (domain/task/class incremental learning), then examine results from the three main research directions (regularization-, parameter isolation-, and exemplar memory-based methods). We also discuss the limitations of each approach, look at continual learning results on problems beyond simple classification, and offer an outlook on future research directions.


Bio: Taesup Moon received the B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2002, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 2004 and 2008, respectively. From 2008 to 2012, he was a Scientist with Yahoo! Labs, Sunnyvale, CA, USA. He was a Postdoctoral Researcher with the Department of Statistics, UC Berkeley, from 2012 to 2013. From 2013 to 2015, he was a Research Staff Member with the Samsung Advanced Institute of Technology (SAIT); from 2015 to 2017, he was an Assistant Professor with the Department of Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST); and from 2017 to 2021, he was an Assistant/Associate Professor with the Department of Electrical and Computer Engineering, Sungkyunkwan University (SKKU). He is currently an Associate Professor with the Department of Electrical and Computer Engineering, Seoul National University (SNU). His current research interests include machine/deep learning, signal processing, information theory, and various (big) data science applications.