
Korean AI Association


Domestic Conference

Introduction of Speakers and Lectures
▶ Explicit Deep Generative Model
 
   Speaker: Prof. Il-Chul Moon (KAIST)
   Title: Explicit Deep Generative Model
 
Abstract :
Probabilistic graphical models (PGMs) have been a key bridge connecting the machine learning community to the statistics and probabilistic modeling communities, and their inference has been supported by variational inference and sampling-based inference. After the explosion of neural networks, PGMs have incorporated neural networks through the link of variational inference; one such example is the variational autoencoder, a.k.a. the deep generative model. This tutorial will 1) briefly overview the inference of PGMs; 2) discuss the diverse developments of variational autoencoders; and 3) present some applications of explicit deep generative models.
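
For illustration, here is a minimal PyTorch sketch of the variational autoencoder mentioned above, showing the reparameterization trick and the negative ELBO objective. The layer sizes and names are illustrative assumptions, not material from the tutorial.

# Minimal VAE sketch: encoder q(z|x), reparameterized sampling, decoder p(x|z).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):  # sizes are assumptions
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def negative_elbo(x, recon_logits, mu, logvar):
    # Reconstruction term (Bernoulli likelihood) + KL(q(z|x) || N(0, I)).
    rec = F.binary_cross_entropy_with_logits(recon_logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

x = torch.rand(16, 784)                  # stand-in for flattened images
recon, mu, logvar = VAE()(x)
loss = negative_elbo(x, recon, mu, logvar)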
 
 
  Biography
   2011 - present: Professor, Department of Industrial and Systems Engineering, KAIST
   2008 - 2011: Postdoctoral Researcher, KAIST
   2005 - 2008: Ph.D., Computer Science, Carnegie Mellon University
  
Homepage
    http://aai.kaist.ac.kr/xe2/
 
 
 
 
 
▶ Adversarial Robustness
 
 Speaker: Prof. Jinwoo Shin (KAIST)
 
 Title: Adversarial Robustness of Deep Neural Networks
              
Abstract: 
In recent years, deep neural networks (DNNs) have achieved impressive results on various AI tasks, e.g., image classification, face/object recognition, semantic segmentation, and game playing. The groundbreaking success of DNNs has motivated their use in a broader range of domains, including safety-critical environments such as medical imaging and autonomous driving. However, DNNs have been shown to be extremely brittle to carefully crafted small adversarial perturbations added to the input. These perturbations are imperceptible to human eyes, but are intentionally optimized to cause misprediction. As the field has focused primarily on developing new attacks and defenses, a `cat-and-mouse' game between attacker and defender has arisen: a long list of defenses has been proposed to mitigate the effect of adversarial examples, yet any defense mechanism that once looked successful has been circumvented by the invention of new attacks. In this lecture, I will survey the recent literature on the topic.
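
As one concrete example of the carefully crafted perturbations described above, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM); the model, epsilon, and names are illustrative assumptions, and the lecture surveys many more attacks and defenses.

# FGSM sketch: perturb the input along the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                        # populates x.grad
    x_adv = x + eps * x.grad.sign()        # small step that maximizes the loss
    return x_adv.clamp(0, 1).detach()      # stay in the valid image range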
 
  Biography
   2013 - present: Professor, School of Electrical Engineering and Graduate School of AI, KAIST
   2012 - 2013: Researcher, IBM T. J. Watson Research Center
   2010 - 2012: Researcher, Georgia Institute of Technology
 
 Homepage
   http://alinlab.kaist.ac.kr/shin.html
 
 
 
▶ Set-input Neural Networks and Amortized Clustering
 
  Speaker: Dr. Juho Lee (AITRICS)
 
  Title: Set-input Neural Networks and Amortized Clustering
 
Abstract:
Most neural networks in typical use take a fixed-length vector as input, while many real-world problems require taking sets of vectors as input. Examples include multiple-instance learning, point-cloud classification, scene understanding, and few-shot image classification. This tutorial describes how to build a simple neural network that processes sets (a set-input neural network), how to train it, and how to apply it to real-world problems. The latter part of the tutorial describes an advanced set-input neural network: the amortized clustering problem is introduced as an illustrative example, and how the advanced set-input neural network can solve this problem is discussed.
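
For illustration, a minimal permutation-invariant set encoder in the spirit of Deep Sets (Zaheer et al., 2017) is sketched below in PyTorch: a per-element network followed by sum pooling, which makes the output independent of the order of the set elements. All dimensions and names are illustrative assumptions.

# Set-input network sketch: phi is applied per element, rho after pooling.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, in_dim=3, hid=128, out_dim=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))      # per-element encoder
        self.rho = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, out_dim))  # applied after pooling

    def forward(self, x):                  # x: (batch, set_size, in_dim)
        pooled = self.phi(x).sum(dim=1)    # sum pooling gives order invariance
        return self.rho(pooled)

logits = SetEncoder()(torch.randn(4, 100, 3))  # e.g., 100-point point clouds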
 
 
  Biography
   2019 - present: Researcher, AITRICS
   2018 - 2019: Postdoctoral Researcher, University of Oxford
   2011 - 2018: Ph.D., Computer Science and Engineering, POSTECH
   2011: B.S., Computer Science and Engineering, POSTECH
 
  Homepage
    https://juho-lee.github.io/
 
 
▶ Knowledge in Neural NLP
 
¿¬»ç:Ȳ½Â¿ø ±³¼ö (¿¬¼¼´ëÇб³)
 
Title: Knowledge in Neural NLP
 
Abstract:
This talk surveys state-of-the-art NLP models that inject (or transfer) knowledge. In particular, our recent work on injecting diverse forms of knowledge to meaningfully enhance accuracy and robustness will be highlighted. These models then face an inevitable question: “Does BERT already contain all this information implicitly?” The last part of the talk presents our investigation into this question.
 
  Biography
   2015 - present: Professor, Yonsei University
   2005 - 2015: Professor, POSTECH
 
  Homepage
 
 
 
▶ Pretrained Language Model
 
    Speaker: Prof. Gunhee Kim (Seoul National University)
Title: Pretrained Language Model
 
Abstract:
Recently, deep learning models pretrained as language models, such as BERT, GPT, and XLNet, have shown state-of-the-art performance on a wide range of natural language processing tasks. These models are pretrained on large-scale corpora as language models to learn context-aware embeddings, and can then be fine-tuned on the training data of a target task, such as question answering or sentiment analysis, to improve performance. This lecture covers BERT, the most important pretrained language model, in detail, and surveys improved recent models such as RoBERTa and ALBERT.
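
As a rough sketch of the pretrain-then-fine-tune recipe described above, the following assumes the Hugging Face transformers library (an assumption; the lecture does not prescribe a toolkit) and runs one fine-tuning step of BERT on a toy sentiment batch.

# One fine-tuning step: a classification head on top of pretrained BERT.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)           # e.g., sentiment analysis

batch = tok(["a wonderful movie", "a dull movie"],
            return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])                    # toy labels, an assumption

optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss        # task loss on the new head
loss.backward()
optim.step()                                     # updates all BERT weights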
 
 
 
Biography
2015 - present: Assistant Professor, Department of Computer Science and Engineering, Seoul National University
2013 - 2015: Postdoctoral Researcher, Disney Research
2009 - 2013: Ph.D., Computer Science, Carnegie Mellon University
 
   Homepage
    http://vision.snu.ac.kr
 
 
 
▶ Meta-Learning for Few-shot Classification
 
Speaker: Dr. Saehoon Kim (AITRICS)
 
 Title: Meta-Learning for Few-shot Classification  
Abstract:
This lecture introduces state-of-the-art algorithms for few-shot classification based on meta-learning. Meta-learning refers to a methodology that, instead of training a model that can solve only a single task, builds a model shared across multiple related tasks, so that new tasks unseen during training can also be solved effectively. Classical few-shot classifiers tried to resolve the overfitting caused by a small number of labels by properly regularizing the machine learning model, but since the introduction of the matching network algorithm, meta-learning has been successfully applied to few-shot classification. Because meta-learning approaches to few-shot classification can be divided into metric-based and gradient-based algorithms, this lecture concretely covers the basics of meta-learning for few-shot classification, metric-based algorithms represented by prototypical networks, and gradient-based algorithms represented by MAML, along with the latest research on each.
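
For illustration, a minimal prototypical-network classification step (Snell et al., 2017) is sketched below in PyTorch: class prototypes are the means of the support embeddings, and queries are classified by distance to the prototypes. The function and episode shapes are illustrative assumptions.

# Metric-based few-shot classification over precomputed embeddings.
import torch

def proto_classify(support, support_y, query, n_way):
    # support: (N*K, d) embeddings; support_y: labels in [0, n_way); query: (Q, d)
    protos = torch.stack([support[support_y == c].mean(0)
                          for c in range(n_way)])   # one prototype per class
    dists = torch.cdist(query, protos)              # Euclidean distances
    return (-dists).log_softmax(dim=1)              # class log-probabilities

# e.g., a 5-way 1-shot episode with 64-dim embeddings:
log_p = proto_classify(torch.randn(5, 64), torch.arange(5),
                       torch.randn(10, 64), n_way=5)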
 
  Biography
   2017 - present: Research Team Lead, AITRICS
   2018: Ph.D., Computer Science and Engineering, POSTECH
   2009: B.S., Computer Science and Engineering, POSTECH

  Homepage
 
 
 
 
▶ Deep Generative Models
 
 Speaker: Prof. Eunho Yang (KAIST)
 Title: Deep Generative Models
 
Abstract:
This lecture briefly reviews structured probabilistic models, Monte Carlo methods, and approximate inference, and then examines the various forms of generative models trained on top of these techniques. Recent deep generative models such as the VAE and the GAN are introduced, and, for a deeper understanding of generative models, the heterogeneous properties of these models are also examined.
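
To complement the VAE sketch earlier in this page, here is a minimal GAN training step in PyTorch: the discriminator learns to separate real from generated samples, and the generator is updated to fool it. The network definitions and sizes are illustrative assumptions.

# One GAN step with the non-saturating generator loss.
import torch
import torch.nn as nn

z_dim = 64
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, 784))
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 784)                  # stand-in for a data batch
fake = G(torch.randn(32, z_dim))

# Discriminator step: classify real as 1, generated as 0.
d_loss = (bce(D(real), torch.ones(32, 1)) +
          bce(D(fake.detach()), torch.zeros(32, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make the discriminator output 1 on generated samples.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()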
 
 
  Biography
   2019 - present: Associate Professor, Graduate School of AI / School of Computing, KAIST
   2016 - 2019: Assistant Professor, School of Computing, KAIST
   2014 - 2016: Research Staff Member, IBM T.J. Watson Research Center
   2014: Ph.D., Computer Science, University of Texas at Austin
 
 
  Homepage
 
 
▶ Automated Machine Learning for Visual Domain
     Speaker: Prof. Sungbin Lim (UNIST)
 
Title: Automated Machine Learning for Visual Domain
 
Abstract: 
This lecture introduces AutoML methodologies for image data. It presents Neural Architecture Search (NAS), one major axis of AutoML research, and surveys recent NAS studies applied to image data. Automated augmentation is covered as well.
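
As a rough sketch of what an architecture search loop looks like, the following shows a simple random-search baseline over a toy search space; the space, the evaluate function, and all names are illustrative assumptions, not the specific NAS methods covered in the lecture.

# Random search over architectures: sample, evaluate, keep the best.
import random

SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128],
    "op": ["conv3x3", "conv5x5", "depthwise"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def search(evaluate, n_trials=20):
    # `evaluate` trains the candidate briefly and returns validation accuracy.
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score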
 
 
 
 
  Biography
   2020 - present: Professor, UNIST
   2018 - 2020: Kakao Brain
   2016 - 2017: Samsung Fire & Marine Insurance
   2010 - 2016: Ph.D., Mathematics, Korea University
   
Homepage
 
 
▶ Human-in-the-Loop Generative Models
 
 Speaker: Prof. Jaegul Choo (Korea University)
 
 Title: Human-in-the-Loop Generative Models

Abstract:
Thanks to advances in generative adversarial networks, deep learning techniques for generating and transforming data such as images and text continue to improve. As these techniques are applied in practice to diverse areas such as content creation, advanced deep learning methods and effective user interfaces that can reflect a user's varied requirements are also being actively studied. Focusing on image generation and translation tasks, this lecture introduces recent research examples and trends in this direction.
 
  Biography
   2015 - present: Professor, Korea University
   2013: Ph.D., School of Computational Science and Engineering, Georgia Institute of Technology
   2009: M.S., School of Computational Science and Engineering, Georgia Institute of Technology
 
 Homepage
   http://davian.korea.ac.kr