

Korean AI Association


Domestic Conference

Speakers and Abstracts
February 25 (Thursday)
 

Machine Learning Basic
Probability and Statistics for ML
Prof. Il-Chul Moon (KAIST)

Abstract:

Machine learning is closely connected to statistics and probability. Machine learning models such as artificial neural networks can be viewed as approximately estimating a function that is not given explicitly but is implied by data, and training such models is the process of optimizing the parameters of the function being estimated. This session examines the statistical and probabilistic implications of machine learning models, and the statistical and probabilistic meaning of the various techniques that will be explained later in this winter course.
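
To make this framing concrete, here is a minimal Python sketch (an editorial illustration, not material from the lecture) of treating learning as likelihood-based parameter optimization: the mean of a Gaussian is recovered from data by gradient ascent on the log-likelihood.

    # Minimal sketch (not from the lecture): maximum-likelihood estimation of a
    # Gaussian mean by gradient ascent, illustrating "training = optimizing the
    # parameters of a function implied by data".
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=3.0, scale=1.0, size=500)  # data implying the unknown function

    mu, lr = 0.0, 0.1               # parameter to estimate, learning rate
    for _ in range(100):
        grad = np.mean(data - mu)   # d/dmu of the Gaussian log-likelihood (sigma = 1)
        mu += lr * grad             # gradient ascent on the log-likelihood

    print(f"estimated mean: {mu:.3f}")  # close to the true value 3.0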

Bio:

2011 - present : Professor, Department of Industrial and Systems Engineering, KAIST
2008 - 2011 : Postdoctoral Researcher, KAIST
2005 - 2008 : Ph.D., School of Computer Science, Carnegie Mellon University

 

Computer Vision Basic
Convolution, Transformer, and Capsules
Prof. Minsu Cho (POSTECH)

Abstract:

This lecture reviews the characteristics of convolutional neural networks (CNNs), which have long served as the backbone architecture of neural models, and then examines two architectures that have recently emerged as alternatives addressing their limitations: the self-attention-based Transformer and the routing-based capsule network. By analyzing the inductive biases of these architectures, we will also discuss recent research directions toward flexible network architectures suited to learning across a wide range of tasks.
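
As a rough illustration of the self-attention operation contrasted with convolution above, the following Python sketch (illustrative only, not the lecture's code) implements scaled dot-product self-attention over a small set of token features.

    # Minimal sketch: scaled dot-product self-attention over a sequence of tokens.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (n_tokens, d_model); returns (n_tokens, d_v)."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarities
        weights = softmax(scores, axis=-1)        # each token attends to all tokens
        return weights @ V

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))                   # 5 tokens, 8-dim features
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)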

Bio:

2020 - present : Associate Professor, Department of Computer Science and Engineering, POSTECH
2016 - 2019 : Assistant Professor, Department of Computer Science and Engineering, POSTECH
2012 - 2016 : Researcher, INRIA / École Normale Supérieure (ENS), Paris, France
2012 : Ph.D., Department of Electrical and Computer Engineering, Seoul National University
 
Frontier of Machine Learning
Deep Learning Theory and Real-World Applications: From Brain Synapse Learning to Curling Robot Control
Prof. Seong-Whan Lee (Korea University)

Abstract:

This talk introduces two lines of work: an evolvable neural unit that mimics the operating principles of biological neurons and synapses and can be trained with evolutionary algorithms, and an AI curling robot that perceives the game situation on its own, formulates a game strategy, and plays on the ice sheet of a real curling rink. Existing artificial neural network models have succeeded in many fields, but they only mathematically model neurons and synapses, a very small part of the human brain, and lack the ability to learn and evolve on their own. The evolvable neural network model presented here mimics the way neurons and synapses in the human brain have been shaped by a long and complex evolutionary process. Just as the human brain does not learn from mathematical modeling or artificially designed patterns, the proposed method is a new AI technique in which the implemented neural units learn through an actual evolutionary process. Regarding AI for robot control, most existing research on AI robots has been conducted mainly in virtual or constrained environments. The curling-robot AI presented in this talk consists of the construction of a large-scale database of curling games, a realistic curling simulation that can represent changing ice friction and sweeping effects, a deep-learning-based strategy-planning technique that reflects the uncertainty of curling, and a reinforcement-learning-based ice-condition prediction technique that adapts in real time as the ice changes during a match. Together these allow the curling robot to formulate an optimal strategy suited to the game situation and the real environment and to play the game.
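
The following toy Python sketch is an editorial illustration of the general idea of optimizing parameters with an evolutionary algorithm rather than gradient descent; it is a generic evolution-strategy loop on a stand-in fitness function, not the evolvable neural unit or curling system described in the talk.

    # Toy sketch (illustration only): optimizing a small parameter vector with a
    # simple evolution strategy instead of gradient-based training.
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(params):
        # stand-in task: the closer params are to a hidden target, the fitter
        target = np.array([0.5, -1.0, 2.0])
        return -np.sum((params - target) ** 2)

    pop_size, n_elite, sigma = 50, 5, 0.3
    population = rng.normal(size=(pop_size, 3))

    for generation in range(30):
        scores = np.array([fitness(p) for p in population])
        elite = population[np.argsort(scores)[-n_elite:]]              # keep the fittest
        parents = elite[rng.integers(0, n_elite, size=pop_size)]       # resample parents
        population = parents + sigma * rng.normal(size=parents.shape)  # mutate

    best = population[np.argmax([fitness(p) for p in population])]
    print(np.round(best, 2))   # approaches the hidden target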

Bio:

2019 - present : Head, Department of Artificial Intelligence, Korea University
2017 - 2019 : Founding President, Korean AI Association
2015 - 2017 : President, AI Society, Korean Institute of Information Scientists and Engineers (KIISE)
2010 - present : IEEE Fellow
2009 - present : Member, Korean Academy of Science and Technology
2009 - 2021 : Head, Department of Brain and Cognitive Engineering, Korea University
1995 - 2009 : Associate Professor and Professor, Department of Computer Science, Korea University

 
Computer Vision
Vision with Attention
±ÇÀÎ¼Ò ±³¼ö (KAIST)

Abstract:

This talk introduces recent research trends in adversarial machine learning, a topic that has recently drawn much attention. Deep learning has outperformed previous methods on a wide range of computer vision problems such as object recognition, 3D reconstruction, image segmentation, and object tracking, painting a rosy picture of the field. While deep learning clearly has great potential to solve many hard AI problems, it has a fundamental limitation: it fails under small perturbations that have no effect at all on human vision (the robustness problem). Focusing on recent results from the KAIST-RCV lab on this robustness problem, the talk discusses both the prospects and the limits of solving it. Specifically, it introduces a new Universal Adversarial Perturbations (UAP) model and a new Universal Deep Hiding (UDH) model for steganography that exploits this lack of robustness in reverse.
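
For readers unfamiliar with adversarial perturbations, the sketch below (an editorial illustration, not the UAP or UDH models from the talk) shows an FGSM-style perturbation of a toy linear classifier: a small, bounded change to the input can sharply reduce the classification margin.

    # Minimal sketch of an adversarial perturbation (FGSM-style) on a toy linear
    # model; it illustrates the robustness problem, not the methods in the talk.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=16)            # toy linear "classifier": sign(w . x)
    x = rng.normal(size=16)            # an input
    y = np.sign(w @ x)                 # its (clean) label

    eps = 0.1                          # perturbation budget (small per component)
    grad_x = y * w                     # gradient of the margin y * (w . x) w.r.t. x
    x_adv = x - eps * np.sign(grad_x)  # step against the margin, within an L_inf ball

    print("clean margin:      ", y * (w @ x))
    print("adversarial margin:", y * (w @ x_adv))   # smaller, possibly flipped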

Bio:

He received his B.S. from the Department of Mechanical Design, College of Engineering, Seoul National University in 1981 and his M.S. from the same university in 1983. After working as a researcher at the Korea Institute of Machinery and Materials until August 1984, he received his Ph.D. in Robotics from Carnegie Mellon University in 1990. He then worked as a researcher at the Toshiba R&D Center until 1992, joined KAIST in March of that year, and has since been a professor in the School of Electrical Engineering. In professional service, he was President of the Korea Robotics Society in 2016, President of the Korean Computer Vision Society in 2017-2018, and a Program Chair of the International Conference on Computer Vision (ICCV) 2019. His honors include contributing to Team KAIST's victory in the 2015 DARPA Robotics Challenge by developing its robot vision system, the 2014 IEEE CSVT Best Paper Award, the 2016 Prime Minister's Citation at the Korea Robot Awards, and the KAIST 50th Anniversary Academic Award.

 

Computer Vision
Overview and Recent Progress of Continual Learning
Prof. Gunhee Kim (Seoul National University)

Abstract:

Continual learning studies machine learning models that can keep learning new knowledge from a continuous stream of experiences while forgetting as little previously acquired knowledge as possible. This lecture surveys the representative approaches to continual learning, namely regularization-, replay-memory-, and expansion-based methods, and then presents two methods recently developed in the speaker's research group: a task-free continual learning method based on nonparametric Bayesian models, and a continual learning method for multi-label classification.
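
As a small illustration of the replay-memory idea mentioned above (not the speaker's methods), the following Python sketch keeps a fixed-size, reservoir-sampled buffer of past examples that can be mixed into later training batches to reduce forgetting.

    # Minimal sketch: a reservoir-sampled replay buffer, the core ingredient of
    # replay-based continual learning. New examples are mixed with stored old
    # ones so earlier tasks are not forgotten.
    import random

    class ReplayBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []
            self.seen = 0

        def add(self, example):
            # reservoir sampling: every example seen so far is kept with equal probability
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(example)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = example

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    buffer = ReplayBuffer(capacity=100)
    for task_id in range(3):                       # a stream of tasks
        for i in range(1000):
            buffer.add((task_id, i))               # store a (task, example) pair
        replayed = buffer.sample(32)               # mix into the next training batch
        print(task_id, len(replayed))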

Bio:

2015 - present : Assistant Professor, Department of Computer Science and Engineering, Seoul National University
2013 - 2015 : Postdoctoral Researcher, Disney Research
2009 - 2013 : Ph.D. in Computer Science, Carnegie Mellon University

 

Machine Learning Theory
Meta-Learning
Prof. Sung Ju Hwang (KAIST)

Abstract:

Meta-learning, or "learning to learn," improves the generalization of machine learning models by training over many tasks so that the model generalizes to a distribution of tasks rather than to one specific task. This tutorial covers a range of recent meta-learning methods, including memory-based, metric-based, and gradient-based meta-learning, and their applications to few-shot learning, reinforcement learning, and other problems.
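
The sketch below is an editorial toy example of the gradient-based (MAML-style) flavor of meta-learning on one-parameter linear regression tasks; it is not code from the tutorial, and the task setup is invented purely for illustration.

    # Toy MAML-style gradient-based meta-learning loop. Tasks are 1-D linear
    # regressions y = a * x with different slopes a; the meta-parameter theta is
    # adapted to each task with a single inner gradient step.
    import numpy as np

    rng = np.random.default_rng(0)
    theta, inner_lr, outer_lr = 1.5, 0.1, 0.01

    def sample_task():
        a = rng.uniform(-2.0, 2.0)                 # task = slope of y = a * x
        x = rng.normal(size=20)
        return x, a * x

    for step in range(2000):
        x, y = sample_task()
        # inner loop: adapt theta to this task with one gradient step on MSE
        grad_inner = np.mean(2 * (theta * x - y) * x)
        theta_task = theta - inner_lr * grad_inner
        # outer loop: update theta so that the *adapted* parameter fits the task
        grad_outer = np.mean(2 * (theta_task * x - y) * x) * (1 - inner_lr * np.mean(2 * x * x))
        theta -= outer_lr * grad_outer

    print(f"meta-learned initialization: {theta:.3f}")  # moves toward 0, the average task slope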

Bio:

2020 - present : Associate Professor, Graduate School of AI / School of Computing, KAIST
2018 - 2019 : Assistant Professor, School of Computing, KAIST
2014 - 2017 : Assistant Professor, School of Electrical and Computer Engineering, UNIST
2013 - 2014 : Postdoctoral Researcher, Disney Research
2013 : Ph.D., University of Texas at Austin

 

Computer Vision
Learning to Reconstruct Whole Expressive 3D Human Pose and Shape from Single Image
À̰湫 ±³¼ö (¼­¿ï´ëÇб³)

Abstract:

Recovering accurate 3D human pose and shape from images is a key component of human-centric computer vision applications. Although recent developments in deep learning have led to many technological advances, estimating the 3D pose and shape of the whole integrated body, hands, and face is difficult, and recovering them for multiple people from a single image in the wild remains a great challenge. In this talk, I will present my group's recent developments and results on this problem. First, I will introduce a new, fully learning-based 3D multi-person body pose estimation framework, in which the relative distance of each person from the camera is efficiently estimated using the camera geometry and deep context features. Next, I will introduce the extension of this framework to 3D multi-person whole-body shape estimation. To estimate an accurate mesh shape for each body part, a 3D positional pose-guided 3D rotational pose prediction network is proposed that utilizes both joint-specific local features and global features. The proposed framework integrates the 3D poses and shapes of the body and hands with facial expression. Experimental results demonstrate how effectively the new methods work in real scenarios.
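
As background for the camera-geometry idea mentioned above, the following sketch (an editorial illustration, not the speaker's framework) shows the basic pinhole relation that links a person's apparent height in the image to their distance from the camera, assuming a known real-world height.

    # Illustrative sketch only: a pinhole-camera relation often used to reason
    # about a person's distance from the camera,
    #   depth = focal_length * real_height / pixel_height.
    def depth_from_height(focal_px, real_height_m, pixel_height_px):
        """Rough absolute depth of a person under a pinhole camera model."""
        return focal_px * real_height_m / pixel_height_px

    # example: 1500 px focal length, assumed 1.7 m person, 300 px tall in the image
    print(f"{depth_from_height(1500.0, 1.7, 300.0):.2f} m")   # 8.50 m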

Bio:

Kyoung Mu Lee is a professor in the Dept. of ECE at Seoul National University. He has served as an Associate Editor-in-Chief (AEIC) of IEEE TPAMI and as an Associate Editor of CVIU, MVA, CVA, and IEEE SPL. He is an Advisory Board Member of the CVF (Computer Vision Foundation) and an Editorial Advisory Board Member for Academic Press/Elsevier. He has served as a general co-chair of ICCV 2019, ACCV 2018, and ACM MM 2018. He was a Distinguished Lecturer of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2012-2013. He is an IEEE Fellow and a member of the Korean Academy of Science and Technology (KAST). More information can be found on his homepage http://cv.snu.ac.kr/kmlee.
 
 
Machine Learning Theory
Optimization for ML
Prof. Se-Young Yun (KAIST)

Abstract:

Machine learning generally proceeds by defining an objective function and then optimizing it. Many optimization methods have been proposed, and performance has been improved by choosing the method appropriate to each situation. To understand the optimization techniques used by recent machine learning algorithms, this lecture starts from convex function optimization and basic optimization theory, and builds up to recent methods. Gradient descent and stochastic-gradient-based methods such as Adam and RMSProp are introduced, and the lecture concludes with distributed optimization and gradient-free optimization techniques.
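
To illustrate the kind of update rules the lecture covers, here is a minimal Python sketch (illustrative only) comparing plain gradient descent with an Adam-style update on a simple convex quadratic.

    # Minimal sketch: gradient descent vs. an Adam-style update on the convex
    # quadratic f(w) = 0.5 * ||w||^2, whose minimizer is w = 0.
    import numpy as np

    def grad(w):
        return w                       # gradient of 0.5 * ||w||^2

    w_gd = np.array([5.0, -3.0])
    w_adam = w_gd.copy()
    m = np.zeros_like(w_adam)          # first moment (running mean of gradients)
    v = np.zeros_like(w_adam)          # second moment (running mean of squared gradients)
    lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

    for t in range(1, 201):
        # gradient descent
        w_gd -= lr * grad(w_gd)
        # Adam
        g = grad(w_adam)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction
        v_hat = v / (1 - beta2 ** t)
        w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

    print("GD:  ", np.round(w_gd, 4))
    print("Adam:", np.round(w_adam, 4))   # both end up near 0 (Adam oscillates in a small neighborhood)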

Bio:

Jul. 2017 - present : Assistant Professor, Graduate School of AI, KAIST
Apr. 2016 - Jul. 2017 : Postdoctoral Researcher, Los Alamos National Laboratory, USA
Jun. 2015 - Mar. 2016 : Visiting Researcher, Microsoft Research (Cambridge, UK)
Apr. 2014 - Apr. 2015 : Postdoctoral Researcher, Microsoft Research - INRIA Joint Centre
Feb. 2013 - Mar. 2014 : Postdoctoral Researcher, KTH Royal Institute of Technology, Sweden

 

Machine Learning Theory

Normalizing Flows Based Deep Generative Models
Prof. Dongwoo Kim (POSTECH)

Abstract:

Normalizing flows (NFs) offer an answer to a long-standing question in machine learning: how can one define faithful probabilistic models for complex, high-dimensional data? NFs solve this problem by means of non-linear bijective mappings from simple distributions (e.g. multivariate normal) to the desired target distributions. These mappings are implemented with invertible neural networks, so they have high expressive power and can be trained by gradient descent in the usual way. Compared to other generative models, the main advantage of normalizing flows is that, thanks to bijectivity, they offer exact and efficient likelihood computation and data generation. This session will explain the theoretical underpinnings of NFs, show various practical implementation options, and clarify their relationships with other generative models such as GANs and VAEs.
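
As a concrete illustration of the change-of-variables idea behind NFs, the following Python sketch (illustrative only, not the session's material) implements a one-dimensional affine flow with exact log-likelihood and exact sampling.

    # Minimal sketch: a 1-D affine flow x = mu + s * z with z ~ N(0, 1), showing
    # the exact change-of-variables log-likelihood
    #   log p_x(x) = log p_z(f^{-1}(x)) + log |d f^{-1} / dx|.
    import numpy as np

    mu, log_s = 2.0, np.log(0.5)       # flow parameters (an invertible affine map)

    def log_prob(x):
        z = (x - mu) * np.exp(-log_s)                  # inverse map f^{-1}(x)
        log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard normal log-density
        log_det = -log_s                               # log |d f^{-1} / dx|
        return log_pz + log_det

    def sample(n, rng):
        z = rng.normal(size=n)                         # base distribution
        return mu + np.exp(log_s) * z                  # forward map f(z): exact sampling

    rng = np.random.default_rng(0)
    xs = sample(5, rng)
    print(np.round(xs, 3))
    print(np.round(log_prob(xs), 3))                   # exact log-likelihoods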

Bio:

Dongwoo Kim is an assistant professor in the Department of Computer Science and Engineering, POSTECH. Prior to POSTECH, he worked as an assistant professor (lecturer) and research fellow at the Australian National University. He received his Ph.D. from KAIST in 2015 under the supervision of Professor Alice Oh. His research interests include machine learning and its implications for human understanding.

 
AI Application
Recent Advances in Machine Learning on Graphs
Prof. Chanyoung Park (KAIST)

Abstract:

A graph is a data structure that represents relationships between entities and is widely used to model a variety of real-world phenomena; representative examples include social networks, knowledge graphs, molecular graphs, protein-protein interaction graphs, and gene graphs. Improving the performance of graph-based machine learning models hinges on learning node and edge representations that take the graph structure into account, and with recent advances in deep learning, machine learning techniques for graph analysis have attracted much attention. This talk introduces recent deep-learning-based techniques and research trends in graph representation learning, along with application areas of graph machine learning. It also covers self-supervised learning techniques for training graph neural networks (GNNs) effectively.
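
As a minimal illustration of learning node representations from graph structure (an editorial sketch, not the talk's methods), the following Python code applies one mean-aggregation message-passing layer to a toy graph.

    # Minimal sketch: one message-passing layer that updates each node
    # representation from the mean of its neighbors' features, the basic
    # operation underlying graph neural networks.
    import numpy as np

    def gnn_layer(A, H, W):
        """A: (n, n) adjacency with self-loops, H: (n, d) node features, W: (d, d_out)."""
        deg = A.sum(axis=1, keepdims=True)          # node degrees
        H_agg = (A @ H) / deg                       # mean of neighbor (and own) features
        return np.maximum(H_agg @ W, 0.0)           # linear transform + ReLU

    # toy graph: 4 nodes in a path 0-1-2-3, with self-loops
    A = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 8))                     # initial node features
    W = rng.normal(size=(8, 8))
    print(gnn_layer(A, H, W).shape)                 # (4, 8): new node representations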

Bio:

2020 - present : Assistant Professor, Department of Industrial and Systems Engineering, KAIST
2019 - 2020 : Postdoctoral Researcher, University of Illinois at Urbana-Champaign
2019 : Ph.D. in Computer Science and Engineering, POSTECH