Korean AI Association
2024 AI Winter Short Course
Speakers and Abstracts (the 14th)
Title: Deep Learning 101: Basics and Recent Trends
Deep learning has been evolving rapidly, demonstrating significant success in systems such as AlphaGo and ChatGPT. This lecture covers the fundamental structure and principles of deep learning. It introduces the basic components of deep learning and discusses the development of deep learning architectures, including RNNs, CNNs, and Transformers, across various application scenarios. Additionally, the lecture delves into the principles of deep learning training based on backpropagation.
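The training principle the lecture covers can be made concrete with a small example. Below is a minimal, illustrative sketch (my addition, not code from the lecture) of backpropagation: a two-layer network trained on XOR, with gradients derived by hand via the chain rule. The layer width, learning rate, and iteration count are arbitrary choices for the demo.

```python
import numpy as np

# Hypothetical toy setup: XOR with a 2-8-1 network (tanh hidden, sigmoid output).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h, sigmoid(h @ W2 + b2)  # output probabilities

_, p0 = forward(X)
loss_start = np.mean((p0 - y) ** 2)

for _ in range(5000):
    h, p = forward(X)
    # backward pass: chain rule from the mean-squared-error loss
    dp = 2 * (p - y) * p * (1 - p) / len(X)  # gradient at the pre-sigmoid output
    dW2 = h.T @ dp;  db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)          # back through the tanh layer
    dW1 = X.T @ dh;  db1 = dh.sum(0)
    # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p1 = forward(X)
loss_end = np.mean((p1 - y) ** 2)
```

The same forward/backward pattern, automated by autodiff frameworks, underlies training of the RNN, CNN, and Transformer architectures discussed in the lecture.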
Se-Young Yun is an Associate Professor at the Kim Jaechul Graduate School of AI at KAIST and leads the AI Weather Forecasting Research Center. Prior to joining KAIST in 2017, he worked as a postdoctoral researcher at KTH in Sweden, INRIA in France, and Los Alamos National Lab in the USA. He also served as a visiting researcher at Microsoft Research Cambridge in the UK. He obtained his Bachelor's and Doctoral degrees in Electrical Engineering from KAIST in 2006 and 2012, respectively.
Title: An Introduction to Mathematical Theory of Deep Learning
Over the last decade, deep learning has redefined the state of the art in many application areas of machine learning and artificial intelligence. However, the success of deep learning has raised many theoretical questions that remain elusive today. This tutorial aims to provide a crash course on foundations and recent research results in the mathematical theory of deep learning. The following three central questions are covered. 1) Approximation: What functions can deep neural networks represent and approximate? 2) Optimization: Why can gradient-based optimization methods train deep neural networks to global optimality? 3) Generalization: Why can deep neural networks interpolate all training data points, and generalize to unseen data at the same time?
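To make the approximation question concrete, here is a tiny illustrative example (my addition, not from the tutorial): a one-hidden-layer ReLU network of width two represents the absolute-value function exactly, since |x| = relu(x) + relu(-x). Constructions of this kind are the building blocks of many approximation results for neural networks.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def abs_net(x):
    # hidden layer: two ReLU units with input weights +1 and -1, zero biases
    h = relu(np.stack([x, -x], axis=-1))
    # linear output layer with weights [1, 1]
    return h @ np.array([1.0, 1.0])

x = np.linspace(-3.0, 3.0, 101)
exact = np.abs(x)
approx = abs_net(x)
```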
Chulhee “Charlie” Yun is an assistant professor at KAIST Kim Jaechul Graduate School of AI, where he directs the Optimization & Machine Learning Laboratory. He finished his PhD from the Laboratory for Information and Decision Systems and the Department of Electrical Engineering & Computer Science at MIT, under the joint supervision of Prof. Suvrit Sra and Prof. Ali Jadbabaie. Charlie’s research spans optimization, machine learning theory, and deep learning theory, with the driving goal of bridging the gap between theory and practice in modern machine learning.
Title: Deep Learning Approaches from Programming Language to Natural Language
In recent years, there have been endeavors in the field of programming languages to address various challenges with the assistance of artificial intelligence. To this end, diverse pretrained language models, trained on programming languages and natural languages, have been developed. Leveraging these models, problems such as code summarization, code understanding, code generation, and code question answering are being tackled. In this seminar, we examine these approaches across various pretrained language models (PLMs) for programming languages and provide an overview of the current state-of-the-art techniques for resolving various code-related tasks.
Jee-Hyong Lee earned his Ph.D. in computer science from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea. In 2002, he joined Sungkyunkwan University in Suwon, South Korea, as a faculty member. Currently, he is the director of the Sungkyunkwan Convergence Institute of Information and Intelligence, and he leads the Graduate School Program of Artificial Intelligence at Sungkyunkwan University, supported by the Korean government. His research focuses on diverse aspects of training and adaptation of deep learning models and language models.
Title: Self-supervised Multimodal Learning
Recently, large-scale language models have been driving a revolution in artificial intelligence, and they are evolving into multimodal models that integrate not only language but also other data modalities such as images and speech. Such models are often trained via self-supervised learning on large-scale data. This lecture reviews the fundamentals of self-supervised learning methods applied to large-scale multimodal models and introduces notable recent models.
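As one concrete instance of such self-supervised objectives, the sketch below shows a CLIP-style symmetric contrastive (InfoNCE) loss over image and text embeddings: matched pairs are pulled together and mismatched pairs pushed apart via cross-entropy over cosine similarities. This is a generic illustration under assumed names (`clip_loss`, the temperature value), not code for any specific model covered in the lecture.

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    labels = np.arange(len(logits))     # i-th image matches i-th text

    def xent(lg):
        # numerically stable cross-entropy against the diagonal labels
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), labels].mean()

    # symmetric: image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

# sanity check: aligned pairs should incur lower loss than shuffled pairs
emb = np.eye(4)
loss_matched = clip_loss(emb, emb)
loss_shuffled = clip_loss(emb, emb[[1, 2, 3, 0]])
```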
2015 - Present: Associate Professor, Department of Computer Science and Engineering, Seoul National University
2018 - Present: CEO, RippleAI (리플에이아이)
2013 - 2015: Postdoctoral Researcher, Disney Research
2009 - 2013: Ph.D. in Computer Science, Carnegie Mellon University
Title: Conformal Prediction for Trustworthy AI
Artificial intelligence (AI) systems are being deployed in practical and safety-critical systems that interact with humans, owing to their great achievements in performance. However, careless deployment raises concerns due to untrustworthy behaviors of AI systems. In this talk, I'll broadly explore the current trust problems of AI, including hallucination and copyright issues, and their known mitigations for vision and language applications. Then, I'll dive into one foundational tool, called conformal prediction, which provides a solid basis toward trustworthy AI.
Sangdon Park is an assistant professor at POSTECH CSE/GSAI. Previously, he was a postdoctoral researcher at the Georgia Institute of Technology, mentored by Taesoo Kim. He earned his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2021, where he was advised by Insup Lee and Osbert Bastani. His research focuses on designing trustworthy AI systems by understanding them from theory to implementation and by considering practical applications in computer security, computer vision, robotics, cyber-physical systems, and natural language processing.