Korean AI Association



Speakers and Abstracts (19th)
Prof. Jonghyun Choi (Yonsei University)
Title: Introduction to Continual Learning


Continual learning, especially class-incremental learning, often uses an episodic memory of past knowledge to improve performance. Updating a model with the episodic memory resembles (1) updating a model with past knowledge in the memory by a few-shot learning scheme, and (2) learning from an imbalanced distribution of past and present data. We address unrealistic factors in popular continual learning setups and propose several ideas to move continual learning research toward realistic scenarios.
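The episodic-memory update described above can be sketched in a few lines. The reservoir-sampling buffer and the batch-mixing scheme below are illustrative assumptions, not the speaker's method; they only show where the few-shot replay and the class imbalance between past and present data come from.

```python
import random

class EpisodicMemory:
    """A toy episodic memory for class-incremental learning.

    Reservoir sampling keeps an (approximately) uniform sample of the
    data stream within a fixed capacity.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []  # (example, label) pairs from past classes
        self.seen = 0

    def add(self, example, label):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((example, label))
        else:
            # Replace a random slot with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = (example, label)

    def sample(self, batch_size):
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)

def make_replay_batch(current_batch, memory, replay_size):
    # Mix present-task data with a small (few-shot) batch of past data.
    # This mixture is exactly where the imbalance the abstract mentions
    # appears: current classes dominate, past classes are few-shot.
    return current_batch + memory.sample(replay_size)
```

In a real training loop, `make_replay_batch` would feed a gradient step, so each update sees mostly present-class data plus a handful of rehearsed past-class examples.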



Jonghyun received the B.S. and M.S. degrees in electrical engineering and computer science from Seoul National University, Seoul, South Korea, in 2003 and 2008, respectively. He received a Ph.D. degree from the University of Maryland, College Park, in 2015, under the supervision of Prof. Larry S. Davis. He is currently an associate professor at Yonsei University, Seoul, South Korea. During his Ph.D., he worked as a research intern at the US Army Research Lab (2012), Adobe Research (2013), Disney Research Pittsburgh (2014), and Microsoft Research Redmond (2014). He was a senior researcher at Comcast Applied AI Research, Washington, DC, from 2015 to 2016. He was a research scientist at the Allen Institute for Artificial Intelligence (AI2), Seattle, WA, from 2016 to 2018 and is currently an affiliated research scientist there. Before joining Yonsei, he was an assistant professor at GIST, South Korea. His research interests include visual recognition using weakly supervised data for semantic understanding of images and videos, and visual understanding for edge devices and household robots.




Prof. Minjoon Seo (KAIST)
Title: Introduction to Large Language Models


In this tutorial, I will go over the history of large language models. I will start with how deep learning entered the NLP community (2014-2016), then cover the motivation behind the development of the Transformer (2017), how pretrained language models became the de facto standard (2018-2019), and how large language models have begun a new paradigm of AI (2020-present). This tutorial is designed for researchers who are relatively new to this area (1-2 years) or considering starting a career in it.


Minjoon Seo is an Assistant Professor at the KAIST (Korea Advanced Institute of Science & Technology) Kim Jaechul Graduate School of AI, where he is the Director of the Language & Knowledge Lab. He received his Ph.D. in Computer Science from the University of Washington and his B.S. in Electrical Engineering & Computer Science from the University of California, Berkeley. His research interest is in natural language processing and machine learning, and in particular, how world knowledge can be encoded (e.g., semi-parametric language models), accessed (e.g., information retrieval and question answering), and produced (e.g., scientific reasoning). He is a recipient of the Facebook Fellowship and the AI2 Key Scientific Challenges Award. He previously co-organized MRQA 2018, MRQA 2019, MRQA 2021, and RepL4NLP 2020.



Prof. Heechul Jung (Kyungpook National University)
Title: Recent Advances in Deep Learning Architectures for Computer Vision


Starting with AlexNet in 2012, convolutional neural network (CNN)-based deep learning architectures for image recognition, such as VGG, GoogLeNet, and ResNet, have been proposed. These architectures serve as backbone models for solving various computer vision problems such as classification, object detection, and segmentation. Recently, transformer-based models, originally used mainly in natural language processing, have been applied to computer vision. Unlike CNNs, these transformer-based models are built on the attention mechanism and have recently achieved strong performance on various tasks. In this lecture, we will look at the evolution of these deep learning architectures and also cover the recently proposed transformer-based models.
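The attention mechanism at the core of these transformer-based models can be sketched in a few lines. This is a minimal, unbatched, single-head scaled dot-product attention on plain Python lists, for illustration only; real vision transformers are batched, multi-headed, and operate on image patch embeddings.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(keys[0])  # key dimension used for scaling
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the values.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that closely matches one key receives nearly all of the attention weight, so the output for that query is dominated by the corresponding value vector.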



- 2022. 1 ~ , CTO, Captos Co. Ltd. (Startup Company)
- 2019. 9 ~ , Assistant Professor, Department of Artificial Intelligence, Kyungpook National University
- 2019. 2 ~ 2019. 8, Senior Researcher, AIR Lab, Hyundai Motor Company
- 2018. 8, PhD, School of Electrical Engineering, KAIST
- 2017. 12 ~ 2019. 1, Senior Researcher, Hutom (Startup Company)






Prof. Jaegul Choo (KAIST)
Title: Diffusion Models and Their Applications in Generative Tasks


Recently, diffusion-based generative models have shown impressive performance in synthesized image quality for image generation and translation. In particular, text-to-image generation synthesizes high-quality images reflecting the semantic meaning of an input text, and diffusion models play a major role in this significant progress. In this talk, I will present how diffusion models work in detail and discuss future research directions.
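The forward (noising) process at the heart of diffusion models can be sketched as follows. The linear noise schedule and helper names are illustrative assumptions following the standard DDPM formulation, shown here on scalars rather than images; the generative (reverse) process, which the talk covers, learns to invert this noising step by step.

```python
import math
import random

def make_alpha_bars(T, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule beta_1..beta_T; alpha_bar_t is the
    # cumulative product of (1 - beta_t), as in the DDPM formulation.
    betas = [beta_start + (beta_end - beta_start) * t / (T - 1)
             for t in range(T)]
    alpha_bars, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        alpha_bars.append(prod)
    return alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng=random):
    # Sample from q(x_t | x_0): shrink the clean sample toward zero
    # and add Gaussian noise in one closed-form step.
    ab = alpha_bars[t]
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps
```

Because `alpha_bars` decays toward zero as `t` grows, a heavily diffused sample is nearly pure noise; generation then amounts to running a learned denoiser backward from that noise.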



Jaegul is currently an associate professor in the Graduate School of Artificial Intelligence at KAIST. He was an assistant professor in the Dept. of Computer Science and Engineering at Korea University from 2015 to 2019 and then an associate professor in the Dept. of Artificial Intelligence at Korea University in 2019. He received his M.S. from the School of Electrical and Computer Engineering at Georgia Tech in 2009 and his Ph.D. from the School of Computational Science and Engineering at Georgia Tech in 2013, advised by Prof. Haesun Park. From 2011 to 2014, he was a research scientist at Georgia Tech. He earned his B.S. in the Dept. of Electrical and Computer Engineering at Seoul National University.