KAIST
POSTECH
Korea University
Seoul National University
Abstract:
Machine learning is closely connected to statistics and probability. Machine learning models such as artificial neural networks can be viewed as approximate estimators of a function that is never given explicitly but is implied by the data, and the training of these models is the process of optimizing the parameters of that estimated function. This session examines the statistical and probabilistic implications of machine learning models, and then looks at the statistical and probabilistic meaning of the various techniques that will be covered throughout this winter course.
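The view of training as parameter optimization of a data-implied function can be illustrated with a minimal sketch (a toy example, not from the session): we assume data generated by an unknown function y = 2x + 1 plus noise, and estimate the parameters (w, b) of a linear approximator by gradient descent on the mean squared error.

```python
import random

# Toy illustration: learning as parameter optimization. The "true"
# function y = 2x + 1 is never given explicitly; only noisy data implies it.
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(50)]]

w, b = 0.0, 0.0   # parameters of the approximating function f(x) = w*x + b
lr = 0.05         # learning rate
for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

# w and b approach the underlying parameters 2 and 1.
```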
Bio:
2011 - Present: Professor, Department of Industrial and Systems Engineering, KAIST; 2008 - 2011: Postdoctoral Researcher, KAIST; 2005 - 2008: Ph.D., School of Computer Science, Carnegie Mellon University
This talk introduces two lines of work: research proposing an evolvable neural network unit that mimics the operating principles of real biological neurons and synapses and can learn through evolutionary algorithms, and an AI curling robot that recognizes the game situation on its own, formulates a game strategy, and can play on the ice of a real curling rink. Existing artificial neural network models have succeeded in many fields, but they only mathematically model the principles of neurons and synapses, a very small part of the human brain, and lack the ability to learn and evolve on their own. The evolvable neural network model introduced in this talk was built by mimicking how neurons and synapses in the human brain learn through a long and complex process of evolution. Just as the human brain does not learn from mathematical models or artificially designed patterns, the proposed method is a new AI technology in which the implemented neural network units learn on the basis of an actual evolutionary process. Regarding AI for robot control, most existing research on AI robots has been conducted mainly in virtual or otherwise constrained environments. The curling robot AI introduced in this talk consists of a large-scale database of curling games; a realistic curling simulation that can express changes in ice friction, sweeping effects, and so on; a deep-learning-based strategy formulation technique that reflects the uncertainty inherent in curling; and a reinforcement-learning-based ice condition prediction technique that adapts in real time to ice conditions that change over the course of a game. Together, these allow the curling robot to formulate an optimal strategy suited to the game situation and the real environment, and to play the game.
2019 - : Head, Department of Artificial Intelligence, Korea University; 2017 - 2019: Founding President, Korean Artificial Intelligence Association; 2015 - 2017: President, Artificial Intelligence Society, Korean Institute of Information Scientists and Engineers; 2010 - : IEEE Fellow; 2009 - : Fellow, Korean Academy of Science and Technology; 2009 - 2021: Head, Department of Brain Engineering, Korea University; 1995 - 2009: Associate Professor, then Professor, Department of Computer Science, Korea University
This talk surveys recent research trends in adversarial machine learning, a topic that has attracted much attention lately. Deep learning has painted a rosy picture by solving a wide range of computer vision problems, including object recognition, 3D reconstruction, image segmentation, and object tracking, with performance that overwhelms previous approaches. Deep learning clearly has great potential to solve many hard problems in artificial intelligence, but it has a fundamental limitation: it fails to cope even with tiny perturbations that have no effect whatsoever on human vision (the Robustness Problem). This talk discusses the prospects and limits of solving this robustness problem of deep learning, focusing on recent results from the KAIST-RCV lab. Specifically, it introduces a new Universal Adversarial Perturbations (UAP) model, and a new Universal Deep Hiding (UDH) model for information hiding, namely steganography, which turns the robustness problem around and exploits it.
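The defining property of a universal adversarial perturbation can be shown with a toy sketch (illustrative only, not the talk's UAP algorithm): a single fixed perturbation delta, with a small norm budget, fools a classifier on most inputs at once. For a linear classifier f(x) = sign(w . x), an input-agnostic worst-case direction is -sign(w) scaled to the budget eps (an FGSM-style construction).

```python
import random

random.seed(1)
dim = 20
w = [random.uniform(-1, 1) for _ in range(dim)]  # a toy linear classifier

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

# Collect inputs classified as +1 with a small positive margin.
points = []
while len(points) < 100:
    x = [random.uniform(-1, 1) for _ in range(dim)]
    s = sum(wi * xi for wi, xi in zip(w, x))
    if 0 < s < 1.0:
        points.append(x)

# ONE perturbation, bounded by eps per coordinate, applied to ALL inputs.
eps = 0.2
delta = [-eps * (1 if wi > 0 else -1) for wi in w]

fooled = sum(predict([xi + di for xi, di in zip(x, delta)]) != predict(x)
             for x in points)
fool_rate = fooled / len(points)   # fraction of inputs flipped by one delta
```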
He graduated from the Department of Mechanical Design, College of Engineering, Seoul National University in 1981 and received his master's degree from the same graduate school in 1983. After working as a researcher at the Korea Institute of Machinery and Materials until August 1984, he received his Ph.D. in Robotics from Carnegie Mellon University in 1990. He then worked as a researcher at the Toshiba R&D Center until 1992, joining KAIST in March of that year, where he has since served as a professor in the School of Electrical Engineering. In professional service, he was President of the Korea Robotics Society in 2016, President of the Korean Computer Vision Society in 2017-2018, and Program Chair of the International Conference on Computer Vision (ICCV) in 2019. His honors include contributing to Team KAIST's victory at the 2015 DARPA Robotics Challenge by developing the robot's vision system, the 2014 IEEE CSVT Best Paper Award, the Prime Minister's Citation at the 2016 Robot Awards, and the KAIST 50th Anniversary Academic Grand Prize.
Continual learning studies machine learning models that can keep learning new knowledge from a continuous stream of experiences while forgetting previous knowledge as little as possible. This lecture reviews the representative approaches to continual learning, those based on regularization, replay memory, and expansion, and then explains two methods recently developed in the speaker's research group: a task-free continual learning method using nonparametric Bayesian models, and a continual learning method for multi-label classification.
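The replay-memory family of approaches mentioned above can be sketched minimally (an illustration, not the speaker's method): keep a small buffer of past-task examples and mix them into each new training batch so earlier tasks are not forgotten. Reservoir sampling keeps the buffer an approximately uniform sample of everything seen so far under a fixed memory budget.

```python
import random

random.seed(0)

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)   # reservoir sampling
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=50)
for task in range(3):                  # tasks arrive one after another
    for i in range(1000):
        buf.add((task, i))

# A training batch mixes fresh data with replayed old-task examples.
batch = [("new", x) for x in range(8)] + [("replay", e) for e in buf.sample(8)]
tasks_in_buffer = {t for t, _ in buf.items}   # old tasks survive in memory
```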
2015 - Present: Assistant Professor, Department of Computer Science and Engineering, College of Engineering, Seoul National University; 2013 - 2015: Postdoctoral Researcher, Disney Research; 2009 - 2013: Ph.D. in Computer Science, Carnegie Mellon University
Meta-learning, or "learning to learn," refers to training a machine learning model on a variety of tasks so that it generalizes to a distribution of tasks rather than to one specific task, thereby improving its generalization performance. This tutorial covers a range of recent meta-learning methods, including memory-based, metric-based, and gradient-based meta-learning, and their application to few-shot learning, reinforcement learning, and other problems.
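The gradient-based family can be sketched with a first-order, MAML-style toy example (illustrative only, not the tutorial's exact algorithm). Each task is a 1-D regression y = a * x with a different slope a; the meta-learner finds an initial parameter w0 from which a single inner gradient step adapts well to a new task.

```python
import random

random.seed(0)
xs = [i / 10 for i in range(1, 11)]   # shared inputs for every task

def inner_loss(w, a):
    return sum((w * x - a * x) ** 2 for x in xs) / len(xs)

def inner_grad(w, a):
    return sum(2 * (w * x - a * x) * x for x in xs) / len(xs)

w0, meta_lr, inner_lr = 0.0, 0.1, 0.5
for step in range(500):
    a = random.uniform(1.0, 3.0)                     # sample a task
    w_adapted = w0 - inner_lr * inner_grad(w0, a)    # inner adaptation step
    # First-order MAML: update the init using the gradient at the adapted point.
    w0 -= meta_lr * inner_grad(w_adapted, a)

# After meta-training, one inner step on an unseen task already helps.
a_new = 2.5
w_new = w0 - inner_lr * inner_grad(w0, a_new)
```

The meta-learned w0 settles near the center of the task distribution, so one gradient step from it fits a new task far better than one step from an arbitrary initialization.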
2020~ : Associate Professor, Graduate School of AI / School of Computing, KAIST
2018~2019: Assistant Professor, School of Computing, KAIST
2014~2017: Assistant Professor, School of Electrical and Computer Engineering, UNIST
2013~2014: Postdoctoral Researcher, Disney Research
2013: Ph.D., University of Texas at Austin
Machine learning generally proceeds by defining an objective function and then optimizing it. Many optimization methods have been proposed, and performance has been improved by using different methods in different situations. To help understand the optimization techniques used by recent machine learning algorithms, this lecture starts from convex optimization and explains basic optimization theory, then builds on it to explain recent techniques. It introduces gradient descent and stochastic-gradient-descent-based methods such as ADAM and RMSProp, and closes with distributed optimization techniques and gradient-free optimization techniques.
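As a concrete anchor for the methods named above, here is a minimal sketch of the Adam update rule (following Kingma and Ba's formulation; the objective f(x) = (x - 3)^2 is an illustrative toy, not from the lecture). Adam combines a momentum-like average of the gradient with an RMSProp-like average of its square, plus bias correction.

```python
def adam_minimize(grad, x, steps=5000, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0                               # first/second moment estimates
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g             # momentum-like average of g
        v = b2 * v + (1 - b2) * g * g         # RMSProp-like average of g^2
        m_hat = m / (1 - b1 ** t)             # bias correction for m
        v_hat = v / (1 - b2 ** t)             # bias correction for v
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = adam_minimize(lambda x: 2 * (x - 3.0), x=0.0)
```

Setting b1 = 0 recovers a (bias-corrected) RMSProp-style update, which is one way to see the relationship between the two methods.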
Currently: Assistant Professor, KAIST Graduate School of AI (2017.7~); Postdoctoral Researcher, Los Alamos National Laboratory, USA (2016.4 ~ 2017.7); Visiting Researcher, Microsoft Research (Cambridge, UK) (2015.6 ~ 2016.3); Postdoctoral Researcher, Microsoft Research-INRIA Joint Centre (2014.4 ~ 2015.4); Postdoctoral Researcher, KTH Royal Institute of Technology, Sweden (2013.2 ~ 2014.3)
Machine Learning Theory
Normalizing flows (NFs) offer an answer to a long-standing question in machine learning: how can one define faithful probabilistic models for complex, high-dimensional data? NFs solve this problem by means of non-linear bijective mappings from simple distributions (e.g., multivariate normal) to the desired target distributions. These mappings are implemented with invertible neural networks, so they have high expressive power and can be trained by gradient descent in the usual way. Compared to other generative models, the main advantage of normalizing flows is that, thanks to bijectivity, they offer exact and efficient likelihood computation and data generation. This session explains the theoretical underpinnings of NFs, shows various practical implementation options, and clarifies their relationship to other generative models such as GANs and VAEs.
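The "exact likelihood thanks to bijectivity" claim can be made concrete with a minimal 1-D sketch (illustrative, not from the session): an invertible affine map x = f(z) = s*z + t pushes a standard normal forward, and the change-of-variables formula log p_x(x) = log p_z(f^{-1}(x)) - log |df/dz| gives the exact density of x, while sampling is just a forward pass.

```python
import math

s, t = 2.0, 1.0                        # parameters of the bijection f(z) = s*z + t

def log_standard_normal(z):
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def flow_log_prob(x):
    z = (x - t) / s                    # exact inverse of the flow
    # Change of variables: subtract the log-determinant of the Jacobian.
    return log_standard_normal(z) - math.log(abs(s))

def flow_sample(z):
    return s * z + t                   # exact, efficient generation

# Pushing z ~ N(0, 1) through f gives N(t, s^2); compare with the closed form.
x = 2.5
ref = -0.5 * math.log(2 * math.pi * s * s) - (x - t) ** 2 / (2 * s * s)
```

Deep flows stack many such invertible layers, with the log-determinants simply adding up across layers.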
Dongwoo Kim is an assistant professor in the Department of Computer Science and Engineering, POSTECH. Prior to POSTECH, he worked as an assistant professor (lecturer) / research fellow at the Australian National University. He received his Ph.D. from KAIST in 2015 under the supervision of Professor Alice Oh. His research interests include machine learning and its implications for human understanding.
A graph is a data structure that represents relationships between entities and is widely used to model many real-world phenomena; representative examples include users' social networks, knowledge graphs, molecular structure graphs, protein-protein interaction graphs, and gene graphs. The key to improving the performance of graph-based machine learning models is to learn node and edge representations that take the graph structure into account, and with recent advances in deep learning, machine learning techniques for graph analysis have been attracting attention for this purpose. This talk introduces recent deep-learning-based techniques and research trends in graph representation learning, along with application areas of graph machine learning. It also covers self-supervised learning techniques for the effective training of GNNs.
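The core idea of learning structure-aware node representations can be sketched with one message-passing step (a bare-bones illustration, not a full GNN): each node's new representation combines its own features with an aggregate of its neighbors' features, so that stacking such steps makes representations reflect the surrounding graph structure.

```python
def message_passing_step(adj, feats):
    """One GNN-style update. adj: {node: [neighbors]}, feats: {node: [f1, ...]}."""
    new_feats = {}
    for node, fs in feats.items():
        nbrs = adj.get(node, [])
        if nbrs:
            # Aggregate neighbor features by mean, dimension by dimension.
            agg = [sum(feats[n][i] for n in nbrs) / len(nbrs)
                   for i in range(len(fs))]
        else:
            agg = [0.0] * len(fs)
        # Simple combine step: average self and aggregated neighbor features.
        new_feats[node] = [(a + b) / 2 for a, b in zip(fs, agg)]
    return new_feats

adj = {0: [1, 2], 1: [0], 2: [0]}      # a tiny undirected graph
feats = {0: [1.0], 1: [0.0], 2: [0.0]}
h1 = message_passing_step(adj, feats)  # node 0's feature spreads to its neighbors
```

Real GNN layers replace the fixed averaging with learned weight matrices and nonlinearities, but the structure of the computation is the same.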
2020 - Present: Assistant Professor, Department of Industrial and Systems Engineering, KAIST; 2019 - 2020: Postdoctoral Researcher, University of Illinois at Urbana-Champaign; 2019: Ph.D. in Computer Engineering, POSTECH