
Academic Events

Korean AI Association


Domestic Conference

Speakers and Abstracts
 
Prof. Jinyoung Yeo / Yonsei University
 
Title: [NLP1] Tutorial: from text to knowledge 
 
 

Abs: For beginners in natural language processing, this talk covers the overall scope of natural language processing, from text classification based on RNNs and LSTMs to dialogue agents based on pre-trained language models.

 

Bio: Jinyoung Yeo is an assistant professor of Artificial Intelligence at Yonsei University. Prior to joining Yonsei University, he was a research scientist at SK T-Brain, after receiving his PhD from POSTECH. For more information, please visit http://convei.weebly.com


 
Prof. Seung-won Hwang / Seoul National University
 
Title: [NLP2] Data Intelligence for Robust NLP
 
 

Abs: Training AI models can be viewed as an interaction between human intelligence and model intelligence. From this view, current norms of model training are constrained to crowdsourcing label annotations for a closed set. Such constrained interactions may explain why models solve datasets instead of pursuing true learning goals. This talk discusses our recent work, showing how data intelligence research is relevant to enriching interactions between human and model intelligence, for robust training of NLP models that generalize well. More details can be found at http://seungwonh.github.io

 

Bio: Seung-won Hwang is a Professor of Computer Science and Engineering at Seoul National University. Prior to joining SNU, she was a faculty member at POSTECH and Yonsei University, after receiving her PhD from UIUC. Her research interests concern the interaction between data and language intelligence. Her work has been published at top-tier AI, DB/DM, and IR/NLP venues, including ACL, AAAI, IJCAI, NAACL, SIGIR, SIGMOD, VLDB, and ICDE. Her contributions have been recognized by awards from WSDM and Microsoft Research.


 

Prof. Dongwoo Kim / POSTECH
 
Title: Generative model: From VAE to normalizing flows
 
 

Abs: Deep generative models have emerged as an essential tool in modern machine learning systems. In this talk, the basic concepts of deep generative models, including the variational autoencoder (VAE) and normalizing flows, will be introduced. Recent advances in generative models for irregular data such as graphs will also be covered.
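Two of the VAE ingredients the lecture introduces, the reparameterization trick and the closed-form Gaussian KL term of the ELBO, can be sketched in a few lines of NumPy. This is an illustrative toy, not material from the talk; all values and variable names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization trick: instead of sampling z ~ N(mu, sigma^2)
# directly (which blocks gradients through the sampling step), draw
# eps ~ N(0, 1) and compute z = mu + sigma * eps, a differentiable
# function of the encoder outputs (mu, sigma).
mu, sigma = 1.5, 0.5
eps = rng.standard_normal(100_000)
z = mu + sigma * eps   # samples with mean mu and std sigma

# KL term of the VAE objective for a Gaussian encoder against a
# standard normal prior, in closed form:
# KL(N(mu, sigma^2) || N(0, 1)) = 0.5 * (mu^2 + sigma^2 - 1 - 2*log(sigma))
kl = 0.5 * (mu**2 + sigma**2 - 1 - 2 * np.log(sigma))
```

Because `z` is an explicit function of `mu` and `sigma`, a framework with automatic differentiation can backpropagate the reconstruction loss through the sampling step.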

 

Bio: Dongwoo Kim has been an assistant professor at the AI graduate school of POSTECH since 2019. Before joining POSTECH, he worked at the Australian National University as a Lecturer (assistant professor) and a postdoctoral researcher from 2015 to 2019. He received his Ph.D. from KAIST in 2015. His research interests include generative models, representation learning, and drug design.

 


 
Prof. Se-Young Yun / KAIST
 
Title: Optimization for ML
 
 

Abs: Most machine learning methods proceed by defining an objective function and then optimizing it. To help understand the optimization techniques used by recent machine learning algorithms, this lecture explains basic optimization theory, starting from convex optimization, and then covers the techniques that play a central role in recent machine learning algorithms: deep learning optimizers such as ADAM and RMSProp built on gradient descent and stochastic gradient descent, distributed optimization techniques used in federated learning, and gradient-free optimization techniques such as black-box optimization.
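As a small illustration of the gradient-based methods listed above, the following NumPy sketch compares plain gradient descent with an Adam-style update on a convex quadratic. It is a toy example, not course material; the learning rates and the test function are arbitrary:

```python
import numpy as np

def gd(grad, x0, lr=0.05, steps=500):
    # Plain gradient descent: x <- x - lr * grad(x)
    x = x0.astype(float)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    # Adam: per-coordinate step sizes from running moment estimates
    x = x0.astype(float)
    m = np.zeros_like(x)   # first moment (running mean of gradients)
    v = np.zeros_like(x)   # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Convex quadratic f(x) = 0.5 * x^T A x, minimized at the origin.
# The ill-conditioned A (eigenvalues 1 and 10) is what makes
# per-coordinate step sizes attractive.
A = np.diag([1.0, 10.0])
grad_f = lambda x: A @ x
x_gd = gd(grad_f, np.array([3.0, 2.0]))
x_adam = adam(grad_f, np.array([3.0, 2.0]))
```

Note that with a constant step size Adam tends to oscillate near the optimum, while plain gradient descent on this smooth convex problem converges to high precision.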

 

 

Bio

KAIST Graduate School of AI, Associate Professor (2017.7 - present)
Los Alamos Research Lab, Post-doc Researcher (2016.4 - 2017.7)
Microsoft (Cambridge), Visiting Researcher (2015.6 - 2016.3)
Microsoft-INRIA, Post-doc Researcher (2014.4 - 2015.4)
KTH, Sweden, Post-doc Researcher (2013.2 - 2014.3)


 

Prof. Jiyong Park / University of North Carolina Greensboro

 
Title: An Introduction to Causal Inference: Potential Outcome and Directed Acyclic Graph Approaches to Causality
 
 
Abs: Many questions in not only social science but also AI research are causal in nature: what would happen to individuals, organizations, or society if part of their environment were changed? Examples include the impact (or lack thereof) of increases in the minimum wage on employment, the effect of introducing a recommender system on product sales, and the effect of replacing a legacy system with AI-based smart systems on firm performance. Causal inference encompasses statistical and computational methods for studying such questions and determining what causes what. Across multiple disciplines, causal inference has largely built on two frameworks: (i) the Potential Outcome Framework, also known as the Rubin Causal Model (a design-based approach), and (ii) the Structural Causal Model (a graph-based approach). This lecture aims to introduce these related but distinct approaches to causal inference, specifically for research that uses observational data to identify a causal effect. The key differences between the two causal models are also discussed, in pursuit of connecting the two worlds of causal inference.
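The potential-outcome framework mentioned above can be illustrated with a toy simulation (every number below is invented for illustration): each unit has two potential outcomes, only one of which is ever observed, and under randomized assignment the difference in group means is an unbiased estimator of the average treatment effect (ATE):

```python
import random

random.seed(0)

# Hypothetical potential outcomes for 10,000 units:
# Y(1) = Y(0) + 2, so the true ATE is 2 by construction.
n = 10_000
y0 = [random.gauss(5.0, 1.0) for _ in range(n)]
y1 = [y + 2.0 for y in y0]

# Randomized treatment assignment. We observe only one potential
# outcome per unit -- the "fundamental problem of causal inference".
t = [random.random() < 0.5 for _ in range(n)]
y_obs = [y1[i] if t[i] else y0[i] for i in range(n)]

# Under randomization, treatment is independent of the potential
# outcomes, so the difference in observed group means recovers the ATE.
treated = [y_obs[i] for i in range(n) if t[i]]
control = [y_obs[i] for i in range(n) if not t[i]]
ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
```

With observational data, where assignment is not randomized, this naive difference in means is generally biased, which is exactly why the identification strategies the lecture covers are needed.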

 

Bio: Jiyong Park is an assistant professor of information systems at the Bryan School of Business and Economics, the University of North Carolina at Greensboro. He received his Ph.D. from KAIST in 2019. He has organized the Korea Summer Session on Causal Inference since 2017 (https://sites.google.com/view/causal-inference2021). More information can be found at https://jiyong-park.github.io.


 
Prof. Sung Ju Hwang / KAIST
 
Title: Federated Learning
 
 

Abs: Federated learning is a machine learning setting that aims to obtain a global model by aggregating the models or other knowledge of local clients that train on their private data, without compromising data privacy. In this lecture, we will learn about the basic concept of federated learning and its challenges, as well as the most essential federated learning algorithms.
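The aggregation step described above can be sketched in a few lines of NumPy, following a FedAvg-style data-size-weighted average of client parameters. The function name and the toy client values are illustrative, not from the lecture:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Weighted average of client model parameters, where each client's
    # contribution is proportional to the amount of its local data
    # (the aggregation rule of FedAvg, McMahan et al., 2017).
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Three clients with different amounts of local data: the global
# model is pulled toward clients holding more data.
weights = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [100, 100, 200]
global_w = fedavg(weights, sizes)
```

Crucially, only model parameters leave each client; the raw private data never does, which is the privacy motivation stated in the abstract.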

 

Bio

2020/01 - Present: Associate Professor, Graduate School of AI and School of Computing, KAIST
2018/01 - 2019/12: Assistant Professor, School of Computing and Graduate School of AI, KAIST
2014/08 - 2017/12: Assistant Professor, School of Electronic and Computer Engineering, UNIST
2013/09 - 2014/08: Postdoctoral Research Associate, Disney Research 
2013/08: Ph.D. in Computer Science, University of Texas at Austin


 

Prof. Seungryul Baek / UNIST

 
Title: [CV1] Introduction to deep learning-based computer vision topics
 
 

Abs: For beginners in computer vision, this lecture introduces recent deep learning-based computer vision applications, from image classification, object detection, and semantic segmentation to their extensions to 3D reconstruction and temporal action recognition from RGBD data.

 

Bio: Seungryul Baek has been an assistant professor in the Department of Computer Science and Engineering (CSE) and the Artificial Intelligence Graduate School (AIGS) at UNIST since April 2020. He obtained his B.S. (2009) and M.S. (2011) degrees from the Department of Electrical Engineering at KAIST and his Ph.D. degree (2020) from the Department of Electrical and Electronic Engineering at Imperial College London, UK. Before his Ph.D., he worked at the DMC Research Center of Samsung Electronics for four years (2011.2-2015.2).


 

Prof. Eun-Sol Kim / Hanyang University

 
Title: [CV2] Large-scale image and video understanding with transformers
 
 

Abs: In this talk, self-supervised learning methods for large-scale image and video data will be introduced. First, a theoretical understanding of the contrastive loss and its variants, which form the basis of recent self-supervised learning, will be provided. Recent approaches leveraging Transformer architectures for large-scale real-world image/video datasets are also reviewed. Interesting applications (including image recognition, image generation, multimodal learning, visual question answering, video (action) recognition, and video moment retrieval) of transformer-based self-supervised learning methods will be covered.
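A minimal NumPy sketch of an InfoNCE-style contrastive loss, one common instance of the contrastive losses the talk analyzes: each embedding should match its positive pair (the same row in the second batch) against all other rows as negatives. The function name, toy embeddings, and temperature are illustrative, not from the lecture:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # L2-normalize embeddings so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: cross-entropy of picking row i
    # of z2 as the match for row i of z1.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z)                     # positives identical: low loss
shuffled = info_nce(z, rng.permutation(z))   # positives broken: higher loss
```

The temperature `tau` controls how sharply the loss concentrates on hard negatives; small values make the softmax over similarities peakier.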

 

Bio: Eun-Sol Kim is an Assistant Professor in the Department of Computer Science, Hanyang University. Before joining Hanyang University, she was a Research Scientist at Kakao Brain. She received her B.S. and Ph.D. degrees from the Department of Computer Science and Engineering at SNU.


 

Prof. Byung-Jun Lee / Korea University

 
Title: [RL1] Reinforcement Learning Basics 
 
 

Abs

Reinforcement learning is a mathematical framework for data-driven sequential decision making. In recent years, deep reinforcement learning algorithms, combined with the powerful representational capacity of neural networks, have proven capable of solving a variety of challenging problems. As an overview of reinforcement learning, this talk covers the problem formulation as a Markov decision process (MDP) and the material from the Q-learning algorithm up to the DQN algorithm.
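The path from MDPs to Q-learning sketched above can be illustrated on a toy chain MDP. Everything below (the environment, hyperparameters, and episode count) is an invented example, not material from the talk; DQN replaces the table `Q` with a neural network trained on the same temporal-difference target:

```python
import random

random.seed(0)

# A tiny deterministic chain MDP: states 0..3, actions 0 (left) / 1 (right).
# Reaching state 3 yields reward 1 and ends the episode.
N, GOAL = 4, 3

def env_step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

# Tabular Q-learning with epsilon-greedy exploration:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda act: Q[s][act])   # exploit
        s2, r, done = env_step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The learned greedy policy should always move right, toward the goal.
greedy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(GOAL)]
```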

 

Bio: Byung-Jun Lee is currently an assistant professor in the Department of Artificial Intelligence at Korea University. He is also a part-time applied scientist in Gauss Labs Inc. He obtained a Ph.D. degree in Computer Science from KAIST in 2021 with the outstanding thesis award. He is mainly interested in designing efficient offline reinforcement learning algorithms with applications to natural language processing. 

 


 

Prof. Jinwoo Shin / KAIST

 
Title: [RL2] Policy gradients in deep reinforcement learning
 
 

Abs:

 

Policy gradient methods are a class of reinforcement learning (RL) techniques that optimize parametrized policies with respect to the long-term cumulative reward by gradient ascent. Compared to value-based methods in RL, they have shown superior performance on tasks with continuous state and action spaces. In this lecture, I will introduce the fundamentals of these methods. First, I will explain why the policy objective can be a particularly difficult optimization problem. Then, I will present existing tools to relax the challenges: (a) standard tools from optimization and (b) tools leveraging more knowledge about the objective. Recent policy gradient schemes, including TRPO, PPO, DDPG, and SAC, will be covered.
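The basic policy-gradient estimator underlying these methods (REINFORCE, using the softmax score function grad log pi(a) = 1[k=a] - pi(k)) can be sketched on a two-armed bandit. Everything below is an illustrative toy with made-up rewards, not material from the lecture:

```python
import math
import random

random.seed(0)

# Softmax policy over two arms, parametrized by logits theta.
# Arm 1 pays 1.0 on average, arm 0 pays 0.2, so the policy
# should learn to prefer arm 1.
theta = [0.0, 0.0]
lr = 0.1

def policy_p1():
    z = max(theta)  # subtract max for numerical stability
    e0, e1 = math.exp(theta[0] - z), math.exp(theta[1] - z)
    return e1 / (e0 + e1)

for _ in range(2000):
    p1 = policy_p1()
    a = 1 if random.random() < p1 else 0          # sample from the policy
    r = random.gauss(1.0 if a == 1 else 0.2, 0.1)  # noisy reward
    # REINFORCE update: theta_k += lr * r * d/dtheta_k log pi(a),
    # where for a softmax policy the score is 1[k=a] - pi(k).
    probs = [1.0 - p1, p1]
    for k in range(2):
        indicator = 1.0 if k == a else 0.0
        theta[k] += lr * r * (indicator - probs[k])

p1_final = policy_p1()   # probability of the better arm after training
```

The raw estimator is high-variance; subtracting a baseline from `r`, the first of many refinements leading to the TRPO/PPO/DDPG/SAC family, reduces that variance without biasing the gradient.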
 

Bio: Jinwoo Shin is currently an associate professor (jointly affiliated) in the Kim Jaechul Graduate School of AI at KAIST. He is also a KAIST endowed chair professor. He obtained his Ph.D. degree (in Math) from the Massachusetts Institute of Technology in 2010 with the George M. Sprowls Award (for the best MIT CS PhD theses). He was a postdoctoral researcher at the Georgia Institute of Technology in 2010-2012 and at IBM T. J. Watson Research in 2012-2013. Dr. Shin's early works are mostly on applied probability and theoretical computer science. After joining KAIST in Fall 2013, he started to work on the algorithmic foundations of machine learning, and he is now one of the most prolific AI researchers, publishing more than 50 papers at top AI conferences in the last three years.


 

Prof. Sung Whan Yoon / UNIST

 
Title: Meta-learning tutorial: From few-shot classification to real-world applications
 
 

Abs: In this talk, the basic concept of meta-learning and the principles of few-shot learning will be introduced. Also, related recent research topics of the meta-learning framework including few-shot continual learning, semantic segmentation, and meta-learning-based federated learning will be discussed.

 

Bio: Sung Whan Yoon has been an assistant professor at the AI graduate school of UNIST since 2020. Before joining UNIST, he received his Ph.D. degree from KAIST in 2017. From 2017 to 2020, he worked at KAIST as a postdoctoral researcher. His research interests include meta-learning, continual learning, federated learning, and intelligent communication systems.


 

Prof. Taesup Moon / Seoul National University

 
Title:  Recent Research Trend in Continual Learning 
 
 

Abs: This lecture reviews recent research trends and future directions in continual learning. It first covers the three categories of continual learning problems (domain/task/class incremental learning), then surveys results from the three main research directions (regularization-, parameter isolation-, and exemplar memory-based methods). The limitations of each approach are also examined, along with continual learning results on problems beyond simple classification, and an outlook on future research directions.

 

Bio: Taesup Moon received the B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2002, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 2004 and 2008, respectively. From 2008 to 2012, he was a Scientist with Yahoo! Labs, Sunnyvale, CA, USA. He was a Postdoctoral Researcher with the Department of Statistics, UC Berkeley, from 2012 to 2013. From 2013 to 2015, he was a Research Staff Member with the Samsung Advanced Institute of Technology (SAIT); from 2015 to 2017, he was an Assistant Professor with the Department of Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST); and from 2017 to 2021, he was an Assistant/Associate Professor with the Department of Electrical and Computer Engineering, Sungkyunkwan University (SKKU). He is currently an Associate Professor with the Department of Electrical and Computer Engineering, Seoul National University (SNU). His current research interests include machine/deep learning, signal processing, information theory, and various (big) data science applications.