2021 Korean AI Association Summer Conference
Speakers and Abstracts
Plenary Talk
July 9 (Fri)
10:00-10:50
Title:
Generalization in Data-Driven Control
Prof. Sergey Levine
(Abstract & Bio)
UC Berkeley
July 9 (Fri)
13:00-13:50
Title:
Embedded Convex Optimization for Control
Prof. Stephen Boyd
(Abstract & Bio)
Stanford University
Invited Speaker
July 9 (Fri)
9:00-9:50
Title:
How Will Artificial Intelligence Merge with the Human Brain?
Prof. Jaeseung Jeong
(Abstract & Bio)
KAIST
Invited Lecture
July 8 (Thu)
17:10-17:40
Title: Leadership for Digital Transformation in the Era of Data and AI
Prof. Sang Kyun Cha
Dean, Graduate School of Data Science, Seoul National University
Tutorials
July 8 (Thu)
13:00-14:50
Title:
Online Decision Making: from Contextual Bandits to Reinforcement Learning
Prof. Min-hwan Oh
(Abstract & Bio)
Seoul National University
Title:
Recent Trends on Session-based Recommendation System
Prof. Jongwuk Lee
(Abstract & Bio)
Sungkyunkwan University
Title:
Multi-resolution for Graphs: Graphical Model to Graph Classification for Alzheimer's Disease Analysis
Prof. Won Hwa Kim
(Abstract & Bio)
POSTECH
July 8 (Thu)
15:10-17:00
Title:
Overview of Task-Free Continual Learning
Prof. Gunhee Kim
(Abstract & Bio)
Seoul National University
Title:
Large Language Models
Prof. Minjoon Seo
(Abstract & Bio)
KAIST
Abstract & Biography
Title: Generalization in Data-Driven Control
Prof. Sergey Levine (UC Berkeley)
Abs:
Current machine learning methods are primarily aimed at tackling prediction problems, which are almost always cast as supervised learning tasks. Despite decades of advances in reinforcement learning and learning-based control, the applicability of these methods to domains that require open-world generalization -- autonomous driving, robotics, aerospace, and other applications -- remains challenging. Realistic environments require effective generalization, and effective generalization requires training on large and diverse datasets that are representative of the scenarios likely to be encountered at test time. I will discuss why this poses a particular challenge for learning-based control, and present some recent research directions that aim to address this challenge. I will discuss how offline reinforcement learning algorithms can make it possible for learning-based control systems to utilize large and diverse real-world datasets, how the use of diverse data can enable robotic systems to navigate real-world environments, and how multi-task and contextual policies can enable broad generalization to a range of user-specified goals.
Bio:
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009 and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
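To make the idea of a goal-conditioned ("contextual") policy learned from logged data concrete, here is a toy sketch in plain NumPy. It is not Prof. Levine's method: it simply fits a linear policy pi(action | state, goal) to a synthetic offline dataset by least squares, so every array and name below is hypothetical.

```python
# Illustrative sketch (not from the talk): goal-conditioned behavior cloning,
# i.e. fitting one policy pi(action | state, goal) on a fixed offline dataset,
# as a simple way to obtain a "contextual" policy from logged data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged data: states, commanded goals, and the actions a noisy expert took.
n, state_dim, goal_dim, act_dim = 5000, 4, 2, 2
states = rng.normal(size=(n, state_dim))
goals = rng.normal(size=(n, goal_dim))
W_true = rng.normal(size=(state_dim + goal_dim, act_dim))
actions = np.hstack([states, goals]) @ W_true + 0.1 * rng.normal(size=(n, act_dim))

# "Training" is just least squares: the policy conditions on (state, goal).
X = np.hstack([states, goals])
W_hat, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(state, goal):
    """Return an action for the given state and commanded goal."""
    return np.concatenate([state, goal]) @ W_hat

# The same policy can be queried for goals it never saw verbatim during training.
print(policy(rng.normal(size=state_dim), rng.normal(size=goal_dim)))
```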
Title: Embedded Convex Optimization for Control
Prof. Stephen Boyd (Stanford University)
Abs:
Control policies that involve the real-time solution of one or more convex optimization problems include model predictive (or receding horizon) control, approximate dynamic programming, and optimization-based actuator allocation systems. They have been widely used in applications with slower dynamics, such as chemical process control, supply chain systems, and quantitative trading, and are now starting to appear in systems with faster dynamics. In this talk I will describe a number of advances over the last decade or so that make such policies easier to design, tune, and deploy. I will describe solution algorithms that are extremely robust, in some cases even division-free, and code generation systems that transform a problem description expressed in a high-level domain-specific language into source code for a real-time solver suitable for control. Recently developed systems for automatically differentiating through a convex optimization problem can be used to efficiently tune or design control policies that include embedded convex optimization.
Bio:
Stephen Boyd is the Samsung Professor of Engineering, and Professor and Chair of Electrical Engineering at Stanford University, with courtesy appointments in Computer Science and Management Science and Engineering. He received the A.B. degree in Mathematics from Harvard University in 1980, and the Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley, in 1985, before joining the faculty at Stanford. His current research focus is on convex optimization applications in control, signal processing, machine learning, finance, and circuit design. He is a member of the US National Academy of Engineering, a foreign member of the Chinese Academy of Engineering, and a foreign member of the National Academy of Engineering of Korea.
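As a hedged illustration of the kind of convex control policy described above, the sketch below solves a small finite-horizon model predictive control problem with CVXPY, one of the domain-specific modeling tools developed in Prof. Boyd's group. The dynamics, horizon, cost weights, and input limits are made-up toy values, and CVXPY with a default QP solver is assumed to be installed.

```python
# Toy model predictive control step with CVXPY: minimize a quadratic
# state/input cost over a short horizon subject to linear dynamics and
# input limits, then apply only the first input (receding horizon).
import cvxpy as cp
import numpy as np

n, m, T = 4, 2, 10                      # state dim, input dim, horizon
A = np.eye(n) + 0.1 * np.random.randn(n, n)
B = 0.1 * np.random.randn(n, m)
x0 = np.random.randn(n)

x = cp.Variable((n, T + 1))
u = cp.Variable((m, T))

cost = 0
constraints = [x[:, 0] == x0]
for t in range(T):
    cost += cp.sum_squares(x[:, t + 1]) + 0.1 * cp.sum_squares(u[:, t])
    constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                    cp.norm(u[:, t], "inf") <= 1.0]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first input to apply:", u.value[:, 0])
```

In a deployed controller this problem would be re-solved at every control step with the newly measured state, which is exactly the receding-horizon pattern the abstract refers to.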
Prof. Sang Kyun Cha
Dean, Graduate School of Data Science, Seoul National University
Bio:
Feb. 1980: B.S., Electrical Engineering, Seoul National University
Feb. 1982: M.S., Control and Instrumentation Engineering, Seoul National University
Jul. 1991: Ph.D., Electrical Engineering (Computer Systems), Stanford University
1992 - present: Professor, Department of Electrical and Computer Engineering, Seoul National University
2001 - 2002: Visiting Professor, Department of Computer Science, Stanford University
2014 - present: Director, Big Data Institute, Seoul National University
Title: How Will Artificial Intelligence Merge with the Human Brain?
Prof. Jaeseung Jeong
KAIST
Abs:
How does artificial intelligence differ from human intelligence? How will artificial intelligence merge with the human brain? This talk compares artificial intelligence and human intelligence, a topic of intense recent interest, and examines their similarities and differences. It then considers what we can take from the human brain, and how, in order to open new horizons for artificial intelligence (brain-inspired AI). Finally, it surveys the current state of brain-machine interface technology, which couples the human brain with AI-powered robots, and offers an outlook on its future.
Bio:
He received his B.S., M.S., and Ph.D. in Physics from the Department of Physics at KAIST, and served as a researcher in psychiatry at the Yale School of Medicine, a research professor in the Department of Physics at Korea University, an assistant professor in the Department of Bio and Brain Engineering at KAIST, and an assistant professor of psychiatry at the Columbia University medical school. He is currently a professor in the Department of Bio and Brain Engineering at KAIST and head of the School of Transdisciplinary Studies, and he serves as president of the Korean Society for Computational Neuroscience.
His research focuses on the neuroscience of decision making, on which he builds work on brain modeling of psychiatric disorders, brain-computer interfaces, brain-inspired artificial intelligence, and neuroarchitecture. He has published more than 90 papers in international journals including Nature, Nature Medicine, and Nature Communications, was selected as a Young Global Leader at the 2009 World Economic Forum (Davos Forum), and has received numerous academic awards.
Title: Online Decision Making: from Contextual Bandits to Reinforcement Learning
Prof. Min-hwan Oh
Seoul National University
Abs:
This tutorial introduces the general problems of sequential decision making, together with decision-making algorithms and their theory. It covers contextual bandit methods for handling the exploration-exploitation tradeoff and the generalization problem that arise while learning an optimal policy. It also introduces research on provably efficient reinforcement learning obtained by applying and extending contextual bandit techniques.
Bio:
Prof. Min-hwan Oh received a B.S. in Mathematics and Statistics from Columbia University in 2015, and an M.S. and Ph.D. in Operations Research from Columbia in 2016 and 2020, respectively. He has been with the Graduate School of Data Science at Seoul National University since September 2020, and his research spans reinforcement learning, bandit algorithms, and statistical machine learning.
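As a rough companion to the tutorial topic, here is a minimal LinUCB-style contextual bandit sketch in NumPy. It is a standard textbook algorithm, not necessarily what the tutorial covers: each arm keeps a ridge-regression estimate of its reward, and the agent plays the arm with the largest upper confidence bound, which is one way to balance exploration and exploitation.

```python
# Minimal LinUCB sketch: a disjoint linear model per arm with an
# upper-confidence-bound exploration bonus.
import numpy as np

d, n_arms, alpha = 5, 3, 1.0
A = [np.eye(d) for _ in range(n_arms)]        # per-arm regularized Gram matrices
b = [np.zeros(d) for _ in range(n_arms)]      # per-arm reward statistics

def choose_arm(context):
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]                  # ridge estimate of arm a's reward model
        bonus = alpha * np.sqrt(context @ A_inv @ context)
        scores.append(context @ theta + bonus)
    return int(np.argmax(scores))

def update(arm, context, reward):
    A[arm] += np.outer(context, context)
    b[arm] += reward * context

# Tiny simulated interaction loop with made-up true arm parameters.
rng = np.random.default_rng(0)
true_theta = rng.normal(size=(n_arms, d))
for t in range(1000):
    ctx = rng.normal(size=d)
    arm = choose_arm(ctx)
    update(arm, ctx, ctx @ true_theta[arm] + 0.1 * rng.normal())
```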
Title: Recent Trends on Session-based Recommendation System
Prof. Jongwuk Lee
Sungkyunkwan University
Abs:
Session-based recommendation provides recommendations for the items a user is likely to want based only on the user's current session, in settings where the user is not logged in. Whereas conventional recommendation models capture long-term user preferences in a static environment, session-based recommendation models aim to capture short-term user preferences in a dynamic environment. This talk introduces both non-neural and neural session-based recommendation models and, drawing on recent results, discusses future research directions.
Bio:
*Education
B.S., Computer Engineering, Sungkyunkwan University (Mar. 1999 - Feb. 2006)
Ph.D., Computer Science and Engineering, POSTECH (Mar. 2006 - Feb. 2012)
*Career
Postdoctoral Researcher, Computer Science and Engineering, POSTECH (Mar. 2012 - Nov. 2012)
Postdoctoral Researcher, Pennsylvania State University (Dec. 2012 - Aug. 2014)
Assistant Professor, Department of Computer Engineering, Hankuk University of Foreign Studies (Sep. 2014 - Aug. 2016)
Assistant Professor, Department of Software, Sungkyunkwan University (Sep. 2016 - Aug. 2020)
Associate Professor, Department of Software, Sungkyunkwan University (Sep. 2020 - present)
*Research Areas
Recommender systems, information retrieval, data mining, natural language processing, machine learning
*Publications
16 international journal papers and 26 international conference papers (including WWW, SIGIR, ICDM, ICDE, CIKM, CVPR, NAACL, VLDB, KDD, and EDBT)
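For readers new to the task, the snippet below implements the simplest non-neural session-based baseline: a first-order Markov ("next-item") recommender that counts item-to-item transitions within sessions and recommends the most frequent successors of the last clicked item. It is only an illustrative sketch with made-up toy sessions, not a model from the talk.

```python
# First-order Markov session recommender: count within-session transitions
# and recommend the most common successors of the session's last item.
from collections import defaultdict, Counter

transitions = defaultdict(Counter)

def fit(sessions):
    """sessions: iterable of item-id lists, one list per anonymous session."""
    for session in sessions:
        for prev_item, next_item in zip(session, session[1:]):
            transitions[prev_item][next_item] += 1

def recommend(session, k=3):
    """Recommend up to k items given the current (anonymous) session."""
    last_item = session[-1]
    return [item for item, _ in transitions[last_item].most_common(k)]

fit([["a", "b", "c"], ["a", "c", "d"], ["b", "c", "d"], ["a", "b", "d"]])
print(recommend(["x", "a"]))   # e.g. ['b', 'c'] from the toy data above
```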
Title: Multi-resolution for Graphs: Graphical Model to Graph Classification for Alzheimer's Disease Analysis
Prof. Won Hwa Kim
POSTECH
Abs:
An extensive literature indicates that brain networks, defined by associations between different regions of interest (ROIs) in the brain, show early signs of neurodegenerative diseases. Brain networks are naturally represented as graphs that consist of sets of nodes and edges. Their irregular structure differentiates graphs from traditional imaging data in Euclidean spaces, and thus they require sophisticated machine learning and signal processing techniques for downstream analysis. In this talk, I will introduce our recent effort on analyzing graph data in neuroimaging studies using the concept of multi-resolution on graph structure. I will first introduce the construction of a multi-resolution graphical model, which serves as a core component of the Multi-resolution Edge Network (MENET) architecture for graph classification in the second part. Results on brain network analyses will be reported that identify disease-specific variations in the brain due to Alzheimer's Disease (AD).
Bio:
Won Hwa Kim is an Assistant Professor in the Department of Computer Science and Engineering / Graduate School of AI at POSTECH, South Korea. He obtained his Ph.D. in Computer Science from the University of Wisconsin-Madison in 2017, his M.S. in Robotics from KAIST in 2010, and his B.S. in Information and Communication Engineering from Sungkyunkwan University in 2008. Prior to joining POSTECH, he was an Assistant Professor in the Department of Computer Science and Engineering at the University of Texas at Arlington (currently on leave of absence) from 2018, and was a researcher in the Data Science team at NEC Labs America in 2017. He is a recipient of the NSF IIS CRII award from the National Science Foundation (NSF) in the USA and the Rising STARs Award from the University of Texas System.
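To give a flavor of "multi-resolution on a graph" (a generic sketch of the idea, not the MENET model itself), the snippet below builds node features at several scales by filtering a node signal with heat kernels defined on the eigen-decomposition of the graph Laplacian; small scales keep fine detail while large scales give a coarse, smoothed view.

```python
# Multi-scale node features on a small graph: filter a node signal with
# heat kernels exp(-s * lambda) over the Laplacian eigenvalues.
import numpy as np

# Toy adjacency matrix (5-node path graph) and a node signal.
adj = np.diag(np.ones(4), 1)
adj = adj + adj.T
signal = np.array([1.0, 0.0, 0.0, 0.0, -1.0])

deg = np.diag(adj.sum(axis=1))
laplacian = deg - adj
eigvals, eigvecs = np.linalg.eigh(laplacian)

scales = [0.1, 1.0, 10.0]
features = []
for s in scales:
    kernel = np.diag(np.exp(-s * eigvals))        # spectral low-pass filter at scale s
    filtered = eigvecs @ kernel @ eigvecs.T @ signal
    features.append(filtered)

# One row per scale: a crude multi-resolution descriptor for each node.
print(np.vstack(features))
```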
Title: Overview of Task-Free Continual Learning
Prof. Gunhee Kim
Seoul National University
Abs:
Continual learning studies machine learning methods that keep acquiring new knowledge from a continuous stream of data while forgetting previously learned knowledge as little as possible. This lecture gives a basic introduction to continual learning and then examines continual learning methods for the setting where no task information accompanies the input data (task-free continual learning).
Bio:
2015 - present: Assistant Professor, Department of Computer Science and Engineering, College of Engineering, Seoul National University
2013 - 2015: Postdoctoral Researcher, Disney Research
2009 - 2013: Ph.D. in Computer Science, Carnegie Mellon University
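One common ingredient in task-free continual learning, shown here only as an illustrative sketch rather than the lecture's specific method, is a small replay memory maintained by reservoir sampling: it keeps a uniform random sample of the stream seen so far without needing any task boundaries.

```python
# Reservoir-sampling replay buffer: every example seen so far has equal
# probability of being stored, with no knowledge of task boundaries.
import random

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0          # number of stream examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

buf = ReservoirBuffer(capacity=100)
for x in range(10_000):        # stand-in for a non-stationary data stream
    buf.add(x)
print(len(buf.buffer), buf.sample(5))
```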
Title: Large Language Models
¼¹ÎÁØ ±³¼ö
KAIST
Abs:
This tutorial consists of two parts. In the first part, we will learn the basics of language models, including traditional n-gram language models as well as modern LSTM- and Transformer-based (Vaswani et al., 2017) ones. In the second part, I will discuss how large language models have transformed AI research in the last 4 years, from ELMo (Peters et al., 2018) to GPT-3 (Brown et al., 2020).
Bio:
Minjoon Seo is an Assistant Professor at the KAIST Graduate School of AI. He received his Ph.D. in Computer Science from the University of Washington and his B.S. in Electrical Engineering & Computer Science from the University of California, Berkeley. His research interest is in natural language processing and machine learning, and in particular, how knowledge data can be encoded (e.g. external memory and language models), accessed (e.g. question answering and dialog), and produced (e.g. scientific reasoning). He is a recipient of the Facebook Fellowship and the AI2 Key Scientific Challenges Award. He previously co-organized MRQA 2018, MRQA 2019, and RepL4NLP 2020.
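As a pointer to the "traditional n-gram language models" mentioned in the first part of the tutorial, here is a minimal bigram language model with add-one smoothing, written as a self-contained toy with a three-sentence corpus (all data below is illustrative).

```python
# Bigram language model with add-one (Laplace) smoothing.
from collections import Counter, defaultdict

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]

unigrams = Counter()
bigrams = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence + ["</s>"]
    unigrams.update(tokens)
    for prev_tok, tok in zip(tokens, tokens[1:]):
        bigrams[prev_tok][tok] += 1

vocab_size = len(unigrams)

def prob(word, prev):
    """P(word | prev) with add-one smoothing over the observed vocabulary."""
    return (bigrams[prev][word] + 1) / (sum(bigrams[prev].values()) + vocab_size)

def sentence_prob(sentence):
    tokens = ["<s>"] + sentence + ["</s>"]
    p = 1.0
    for prev_tok, tok in zip(tokens, tokens[1:]):
        p *= prob(tok, prev_tok)
    return p

print(sentence_prob(["the", "cat", "sat"]))
```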