
Academic Events

Korean AI Association


Domestic Conference

Speakers and Abstracts (the 14th)
 
Prof. Se-Young Yun (KAIST)
 
Title: Deep Learning 101: Basics and Recent Trends
 
Abs
Deep learning has been evolving rapidly, demonstrating significant success in projects like AlphaGo and ChatGPT. This lecture covers the fundamental structure and principles of deep learning. It introduces the basic components of deep learning and discusses the development of deep learning architectures, including RNN, CNN, and Transformer, in various application scenarios. Additionally, the lecture delves into the principles of deep learning training based on Backpropagation.
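The backpropagation principle mentioned in the abstract can be illustrated with a minimal sketch: a two-layer sigmoid network fit to XOR with NumPy. The architecture, learning rate, and task here are illustrative choices, not material from the lecture itself.

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer sigmoid network fit to XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
lr, losses = 0.5, []

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(np.mean((p - y) ** 2))
    # Backward pass: apply the chain rule layer by layer
    dp = (p - y) * p * (1 - p)        # gradient at the output pre-activation
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * h * (1 - h)    # gradient propagated to the hidden layer
    dW1, db1 = X.T @ dh, dh.sum(0)
    # Gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final loss: {losses[-1]:.4f}")   # the loss should shrink toward 0
```

Every deep learning framework automates exactly this backward pass; writing it out once by hand is the standard way to see what automatic differentiation is doing.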
 
Bio
Se-Young Yun is an Associate Professor at the Kim Jaechul Graduate School of AI at KAIST and leads the AI Weather Forecasting Research Center. Prior to joining KAIST in 2017, he worked as a postdoctoral researcher at KTH in Sweden, INRIA in France, and Los Alamos National Lab in the USA. He also served as a visiting researcher at Microsoft Research Cambridge in the UK. He obtained his Bachelor's and Doctoral degrees in Electrical Engineering from KAIST in 2006 and 2012, respectively.
 

 
 
Prof. Chulhee Yun (KAIST)
 
Title: An Introduction to Mathematical Theory of Deep Learning
 
Abs
Over the last decade, deep learning has redefined the state-of-the-art in many application areas of machine learning and artificial intelligence. However, the success of deep learning has left many theoretical questions that still remain elusive today. This tutorial aims to provide a crash course on foundations and recent research results in the mathematical theory of deep learning. The following three central questions are covered. 1) Approximation: What functions can deep neural networks represent and approximate? 2) Optimization: Why can gradient-based optimization methods train deep neural networks to global optimality? 3) Generalization: Why can deep neural networks interpolate all training data points, and generalize to unseen data at the same time?
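For the approximation question above, the classical reference point is the universal approximation theorem (Cybenko 1989; Hornik et al. 1989). An informal statement, included here as background rather than as material from the tutorial, is:

```latex
\begin{theorem}[Universal approximation, informal]
Let $\sigma$ be a continuous, non-polynomial activation function. For every
continuous $f : K \to \mathbb{R}$ on a compact set $K \subset \mathbb{R}^d$
and every $\varepsilon > 0$, there exist a width $N$ and parameters
$a_i, b_i \in \mathbb{R}$, $w_i \in \mathbb{R}^d$ such that
\[
  \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} a_i\, \sigma(w_i^\top x + b_i) \Big| < \varepsilon .
\]
\end{theorem}
```

The theorem guarantees existence of an approximating network but says nothing about how large $N$ must be or whether gradient descent finds it, which is why the optimization and generalization questions require separate treatment.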
 
Bio
Chulhee “Charlie” Yun is an assistant professor at KAIST Kim Jaechul Graduate School of AI, where he directs the Optimization & Machine Learning Laboratory. He finished his PhD from the Laboratory for Information and Decision Systems and the Department of Electrical Engineering & Computer Science at MIT, under the joint supervision of Prof. Suvrit Sra and Prof. Ali Jadbabaie. Charlie’s research spans optimization, machine learning theory, and deep learning theory, with the driving goal of bridging the gap between theory and practice in modern machine learning.
 

 
 
Prof. Jee-Hyong Lee (Sungkyunkwan University)
 
Title: Deep Learning Approaches from Programming Language to Natural Language
 
Abs
In recent years, there have been endeavors in the field of programming languages to address various challenges with the assistance of artificial intelligence. To this end, diverse pretrained language models (PLMs), trained on programming languages and natural languages, have been developed. Leveraging these models, problems such as code summarization, code understanding, code generation, and code question answering are being tackled. In this seminar, we examine these approaches across various PLMs for programming languages and provide an overview of the current state-of-the-art techniques for resolving various code-related tasks.
 
Bio
JEE-HYONG LEE earned his Ph.D. in computer science from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea. In 2002, he joined Sungkyunkwan University in Suwon, South Korea, as a faculty member. Currently, he is the director at the Sungkyunkwan Convergence Institute of Information and Intelligence. He takes charge of the Graduate School Program of Artificial Intelligence, supported by the Korean government at Sungkyunkwan University. His research focuses on diverse aspects of training and adaptation of deep learning models and language models.
 

 
 
 
Prof. Gunhee Kim (Seoul National University)
 
Title: Self-supervised Multimodal Learning
 
Abs
Recently, hyperscale language models have been driving a revolution in artificial intelligence, and they are evolving into multimodal models that understand not only language but also multiple modalities of data, such as images and speech, in an integrated way. Such models are often trained via self-supervised learning on large-scale data. This lecture reviews the fundamentals of the self-supervised learning methods applied to hyperscale multimodal models and introduces notable recent models.
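One widely used self-supervised objective for multimodal models is the CLIP-style symmetric contrastive (InfoNCE) loss over paired image/text embeddings. The sketch below is an illustrative NumPy version; the embeddings, batch size, and temperature are assumptions for demonstration, not the lecture's own code.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays where row i of each is a matched
    image/text pair; every other row in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (the matched pair) as the target
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy check: aligned pairs should score a lower loss than mismatched ones.
a = np.eye(4, 8)                 # 4 "image" embeddings
b = np.eye(4, 8) + 0.01          # nearly identical "text" embeddings
print(clip_contrastive_loss(a, b))
```

Minimizing this loss pulls each matched image/text pair together in the shared embedding space while pushing apart all other pairings in the batch, which is the core mechanism behind contrastive multimodal pretraining.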
 
Bio
2015 - present: Associate Professor, Department of Computer Science and Engineering, College of Engineering, Seoul National University
2018 - present: CEO, RippleAI Co., Ltd.
2013 - 2015: Postdoctoral Researcher, Disney Research
2009 - 2013: Ph.D. in Computer Science, Carnegie Mellon University
 

 
 
Prof. Sangdon Park (POSTECH)
 
Title: Conformal Prediction for Trustworthy AI
 
Abs
Artificial intelligence (AI) systems are increasingly deployed in practical and safety-critical settings that interact with humans, owing to their strong performance. However, careless deployment raises concerns because AI systems can behave in untrustworthy ways. In this talk, I will broadly explore the current trust problems of AI, including hallucination and copyright issues, and their known mitigations for vision and language applications. Then, I will dive into one foundational tool, called conformal prediction, which provides a solid basis for trustworthy AI.
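Conformal prediction, the tool highlighted above, can be sketched in its simplest split-conformal form for regression: calibrate a nonconformity score on held-out data, then widen each new prediction by the corrected quantile of those scores. The synthetic data and the "fitted model" below are illustrative assumptions, not material from the talk.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal prediction for regression.

    Given a pre-trained model's predictions on a held-out calibration set
    (cal_pred vs. cal_true) and on new points (test_pred), returns intervals
    that contain the true value with probability >= 1 - alpha (marginally,
    under exchangeability), regardless of how good the model is.
    """
    n = len(cal_true)
    scores = np.abs(cal_true - cal_pred)              # nonconformity scores
    # Finite-sample-corrected quantile of the calibration scores
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(q_level, 1.0), method="higher")
    return test_pred - q, test_pred + q

# Illustrative use with synthetic data (the fitted model is a stand-in).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = 2 * x + rng.normal(0, 0.1, 1000)
pred = 2 * x                                          # assume a trained model
lo, hi = split_conformal_interval(pred[:500], y[:500], pred[500:], alpha=0.1)
coverage = np.mean((y[500:] >= lo) & (y[500:] <= hi))
print(f"empirical coverage: {coverage:.2f}")          # should be near 0.90
```

The appeal of the method is that the coverage guarantee is distribution-free: it requires only exchangeability between calibration and test points, which is what makes it a natural building block for trustworthy AI.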
 
Bio
Sangdon Park is an assistant professor at POSTECH CSE/GSAI. Previously, he was a postdoctoral researcher at the Georgia Institute of Technology, mentored by Taesoo Kim. He earned his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2021, where he was advised by Insup Lee and Osbert Bastani. His research focuses on designing trustworthy AI systems, spanning theory to implementation, with practical applications in computer security, computer vision, robotics, cyber-physical systems, and natural language processing.
 

Homepage: https://sangdon.github.io/