
Academic Events

Korean AI Association


Domestic Conference

Speakers and Lectures
▶ Applied Math for ML

  Lecturer
   Prof. Bong-Kee Sin (Pukyong National University)
 
  Content
   Machine learning can be described as a discipline built on a handful of
   foundational mathematical ideas. In the end you will use some software
   tool, but a modest mathematical background is needed to understand that
   tool well and to explain its results and failure modes. This lecture
   revisits material that the participants of this AI Winter School have
   most likely learned before and largely already know.
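One piece of the mathematical fluency described above is being able to check a derived gradient numerically. A minimal sketch, assuming a least-squares objective (the data and names are illustrative, not from the lecture):

```python
import numpy as np

# Analytic gradient of f(w) = ||Xw - y||^2 is 2 X^T (Xw - y);
# a central finite-difference check verifies the derivation.
def f(w, X, y):
    r = X @ w - y
    return float(r @ r)

def grad_f(w, X, y):
    return 2.0 * X.T @ (X @ w - y)

def numerical_grad(w, X, y, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e, X, y) - f(w - e, X, y)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
assert np.allclose(grad_f(w, X, y), numerical_grad(w, X, y), atol=1e-4)
```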
 
  Biography
   1999 - present : Professor, Dept. of IT Convergence and Application Engineering, Pukyong National University
   1991 - 1995 : Ph.D., Computer Science, KAIST
   1987 - 1991 : Senior Researcher, Korea Telecom Software Research Laboratory
 
▶ Machine Learning Basics

  Lecturer
   Prof. Heung-Il Suk (Korea University)
 
  Content
   This lecture introduces the concepts of machine learning and the basic
   theory and algorithms of learning. Specifically, it covers the notions of
   supervised and unsupervised learning, parameter learning with Bayesian
   statistical methods, and the extension from linear to nonlinear models.
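Bayesian parameter learning has a particularly simple closed form in the conjugate case; the coin-flip counts below are made up for illustration:

```python
# Conjugate Bayesian parameter learning in its simplest form: with a
# Beta(a, b) prior on a coin's heads-probability and observed counts,
# the posterior is again a Beta distribution.
def beta_posterior(a, b, n_heads, n_tails):
    return a + n_heads, b + n_tails

def posterior_mean(a, b):
    return a / (a + b)

a_post, b_post = beta_posterior(1, 1, n_heads=7, n_tails=3)  # Beta(1,1) = uniform prior
theta_hat = posterior_mean(a_post, b_post)                   # 8 / 12
assert (a_post, b_post) == (8, 4)
assert abs(theta_hat - 2.0 / 3.0) < 1e-12
```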
 
  Biography
   2015 - present : Associate/Assistant Professor, Dept. of Brain and Cognitive Engineering, Korea University
   2012 - 2014 : Postdoctoral Fellow, Univ. of North Carolina at Chapel Hill
   2012 : Ph.D. in Engineering, Dept. of Computer Science and Radio Communications Engineering, Korea University

  Homepage
   http://milab.korea.ac.kr
▶ Deep Feedforward Networks

  Lecturer
   Prof. Wonzoo Chung (Korea University)
 
  Content
   This lecture explains the Feedforward Neural Network (FNN) model, the
   basic building block of deep learning, its universal approximation
   property, and the back-propagation algorithm used to train FNNs.
   It also covers the various activation functions suited to deep FNN
   implementations and practical points to consider when implementing them.
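Back-propagation as described above can be sketched for a one-hidden-layer FNN; the sizes, tanh activation, and squared-error loss are illustrative choices, not the lecture's:

```python
import numpy as np

# A 1-hidden-layer FNN trained by one back-propagation step on a toy
# regression batch.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                  # batch of 8 inputs
y = rng.normal(size=(8, 1))                  # regression targets
W1, b1 = 0.1 * rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(16, 1)), np.zeros(1)

def loss(W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    return 0.5 * np.mean((h @ W2 + b2 - y) ** 2)

# forward pass
h = np.tanh(X @ W1 + b1)
pred = h @ W2 + b2
# backward pass: apply the chain rule layer by layer (back-propagation)
d_pred = (pred - y) / pred.size
dW2, db2 = h.T @ d_pred, d_pred.sum(0)
d_h = (d_pred @ W2.T) * (1.0 - h ** 2)       # tanh'(z) = 1 - tanh(z)^2
dW1, db1 = X.T @ d_h, d_h.sum(0)

before = loss(W1, b1, W2, b2)
lr = 0.1                                     # one small gradient step
W1, b1 = W1 - lr * dW1, b1 - lr * db1
W2, b2 = W2 - lr * dW2, b2 - lr * db2
after = loss(W1, b1, W2, b2)
assert after < before                        # the step reduces the loss
```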
 
  Biography
   2008 - present : Professor, Dept. of Computer Science, Korea University
   2005 - 2007 : Assistant Professor, Dept. of Communications Engineering, Myongji University
   2003 - 2005 : Senior System Architect, Dotcast Inc.
   1998 - 2003 : Ph.D., Electrical Engineering, Cornell University
 
▶ Regularization and Optimization in Training Deep Neural Networks

  Lecturer
   Prof. Jonghyun Choi (GIST)
 
  Content
   Unlike the training of conventional neural networks, formulating and
   training deep neural networks raises many theoretical and practical
   issues. We will cover the regularization methods used in formulating an
   objective function for training a deep architecture, and the optimization
   techniques used to train deep neural networks.
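The interplay of a regularized objective and an optimizer can be seen on a toy convex problem; the matrix, hyperparameters, and the choice of SGD with momentum plus L2 weight decay below are a sketch, not a recipe from the lecture:

```python
import numpy as np

# Momentum SGD on a quadratic objective with L2 (weight-decay)
# regularization; the regularizer shifts the optimum toward zero.
A = np.array([[3.0, 0.2], [0.2, 1.0]])       # symmetric positive definite
b = np.array([1.0, -1.0])
wd = 1e-2                                    # L2 weight-decay strength

def grad(w):
    # gradient of 0.5 w^T A w - b^T w + 0.5 * wd * ||w||^2
    return A @ w - b + wd * w

w, v = np.zeros(2), np.zeros(2)
lr, momentum = 0.1, 0.9
for _ in range(500):
    v = momentum * v - lr * grad(w)
    w = w + v

# closed-form regularized optimum: (A + wd*I)^{-1} b
w_star = np.linalg.solve(A + wd * np.eye(2), b)
assert np.allclose(w, w_star, atol=1e-6)
```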
 
  Biography
   2018 - present : Assistant Professor, School of Electrical Engineering and Computer Science, GIST
   2016 - 2018 : Research Scientist, Allen Institute for AI
   2015 : Ph.D., Dept. of Electrical and Computer Engineering, University of Maryland, College Park

  Homepage
   http://umiacs.umd.edu/~jhchoi
 
▶ Convolutional Neural Networks

  Lecturer
   Prof. In-Jung Kim (Handong Global University)
 
  Content
   Convolutional Neural Networks (CNNs) not only deliver very strong
   performance in image processing but are also widely used in natural
   language processing and speech recognition/synthesis. This lecture
   explains the basic concepts of CNNs and the role, principles, and
   variants of the convolution and pooling operations. It also covers
   the rapidly evolving CNN models of recent years.
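The two core operations named above can be written naively in a few lines; the input and filter are tiny illustrative arrays:

```python
import numpy as np

# Naive 2-D "valid" convolution (really cross-correlation, as in most
# deep learning libraries) and 2x2 max pooling.
def conv2d(x, k):
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2x2(x):
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])          # horizontal difference filter
fm = conv2d(x, edge)                    # feature map, shape (4, 3)
assert fm.shape == (4, 3) and np.all(fm == -1.0)
assert max_pool2x2(x)[0, 0] == 5.0      # max of the top-left 2x2 block
```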
 
  Biography
   2016 - present : Director, Machine Learning-Smart Car Center
   2006 - present : Professor / Dean, School of Computer Science and Electrical Engineering, Handong Global University
   2012 : Visiting Professor, U.C. Irvine
   2001 - 2006 : Principal Researcher, Inzisoft Co., Ltd.
   1990 - 2001 : Ph.D., Computer Science, KAIST

  Homepage
   http://pro.handong.edu/callee/
 
▶ Recurrent Neural Networks

  Lecturer
   Prof. Sungho Jo (KAIST)
 
  Content
   Among deep learning models, this lecture looks at the Recurrent Neural
   Network (RNN) family for sequential data, comparing in particular Long
   Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) and examining
   why they perform so well. It also surveys techniques that extend these
   models, such as attention, and how sequential deep learning models are
   being used in speech recognition, image captioning, action recognition,
   and other research areas.
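The gating that distinguishes the GRU can be sketched as a single forward step; the weight shapes are illustrative, and the code follows the convention h' = (1-z)*h + z*h_tilde (some texts swap the roles of z and 1-z):

```python
import numpy as np

# One forward step of a GRU cell: update gate z, reset gate r, and a
# candidate state, combined by a gated interpolation.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_tilde         # gated interpolation

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
x, h = rng.normal(size=d_in), np.zeros(d_h)
Wz, Wr, Wh = (0.1 * rng.normal(size=(d_in, d_h)) for _ in range(3))
Uz, Ur, Uh = (0.1 * rng.normal(size=(d_h, d_h)) for _ in range(3))
h1 = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
assert h1.shape == (d_h,) and np.all(np.abs(h1) < 1.0)  # state stays bounded
```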
 
  Biography
   2008 - present : Assistant/Associate Professor, School of Computing, KAIST
   2006 - 2007 : Postdoctoral Researcher, MIT Media Lab
   2006 : Ph.D., Electrical Engineering and Computer Science, MIT

  Homepage
   http://nmail.kaist.ac.kr
 
▶ Deep Reinforcement Learning

  Lecturer
   Prof. Jooyoung Park (Korea University)
 
  Content
   Deep Reinforcement Learning is one of the most actively researched areas
   of modern artificial intelligence: reinforcement learning, control theory,
   and deep learning combine synergistically and are driving rapid progress.
   This course reviews the major topics that make up the past and present of
   deep reinforcement learning, including the Controlled Ito Process,
   Stochastic Optimal Control, the Hamilton-Jacobi-Bellman Equation,
   Markov Decision Processes, Deep Reinforcement Learning, AlphaGo Zero,
   Smoothed Bellman Embedding, and K-Learning, and considers the future
   directions of the related technologies.
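The Bellman machinery underlying several of the topics above can be seen in miniature in tabular value iteration; the 2-state, 2-action MDP below is invented purely for illustration:

```python
# Value iteration: the Bellman optimality equation
#   V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
# applied as a fixed-point iteration on a toy MDP.
P = {  # P[s][a] = list of (next_state, probability)
    0: {0: [(0, 1.0)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)], 1: [(1, 1.0)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}   # R[s][a]
gamma = 0.9

V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in (0, 1))
         for s in (0, 1)}

# staying in state 1 and taking action 1 forever yields 2 / (1 - gamma) = 20
assert abs(V[1] - 20.0) < 1e-6
assert abs(V[0] - (1.0 + gamma * 20.0)) < 1e-6   # move to state 1, then stay
```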
 
  Biography
   1993 - present : Professor, Dept. of Control and Instrumentation Engineering, Korea University
   1992 : Ph.D., Dept. of Electrical and Computer Engineering, University of Texas at Austin
   1983 : B.S., Electrical Engineering, Seoul National University

  Homepage
 
▶ Meta-learning for Few-shot Classification

  Lecturer
   Dr. Saehoon Kim (AITRICS)
 
  Content
   This lecture introduces recent meta-learning-based algorithms for
   few-shot classification. Meta-learning extends the usual supervised
   learning paradigm, which aims at solving a single task, to methods
   that learn how to solve tasks themselves. Classical few-shot
   classifiers tried to resolve the overfitting caused by scarce labels
   by suitably regularizing the machine learning model, but recently
   meta-learning-based methods have been applied to few-shot
   classification with great success. The lecture covers the latest
   research on metric-learning-based and gradient-based methods.
   Concretely, for metric-based methods it introduces Prototypical
   Networks as well as recently studied transductive meta-learning
   methods; for gradient-based methods it introduces Model-Agnostic
   Meta-Learning (MAML) and its various derived algorithms.
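The core of the metric-based approach named above fits in a few lines: class prototypes are mean support embeddings, and queries go to the nearest prototype. The random 2-D "embeddings" stand in for the output of a learned encoder:

```python
import numpy as np

# The classification rule of a Prototypical Network, on a toy
# 2-way 3-shot episode with well-separated classes.
def prototypes(support, labels, n_classes):
    return np.stack([support[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query, protos):
    d = ((protos - query) ** 2).sum(axis=1)   # squared Euclidean distance
    return int(np.argmin(d))

rng = np.random.default_rng(0)
# class 0 clustered near (0, 0), class 1 near (5, 5)
support = np.concatenate([rng.normal(0.0, 0.1, size=(3, 2)),
                          rng.normal(5.0, 0.1, size=(3, 2))])
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, 2)
assert classify(np.array([0.1, -0.2]), protos) == 0
assert classify(np.array([4.9, 5.2]), protos) == 1
```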
 
  Biography
   2017 - present : Senior Researcher, AITRICS
   2018 : Ph.D., Computer Science, POSTECH
   2009 : B.S., Computer Science, POSTECH

  Homepage
   https://saehoonkim.github.io/
 
▶ Deep Generative Models

  Lecturer
   Prof. Eunho Yang (KAIST)
 
  Content
   This lecture briefly reviews structured probabilistic models, Monte Carlo
   methods, and approximate inference, then surveys the various kinds of
   generative models trained on top of these techniques. To provide a deeper
   understanding of generative models, it also examines the heterogeneous
   properties these models have. Some of them represent the probability
   density function explicitly, while others are implicit models: the value
   of the density is never made explicit, yet tasks such as drawing samples
   are still supported. Likewise, some of the models are described as
   graphical models, whereas others cannot easily be described from a
   factor-graph point of view but still represent valid probability
   distributions.
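The explicit/implicit distinction drawn above can be illustrated with a toy model; the two-component Gaussian mixture and all its parameters are arbitrary choices for the sketch:

```python
import numpy as np

# An explicit model: a Gaussian mixture whose density p(x) can be
# evaluated in closed form. The paired sampler, viewed on its own, is
# an implicit model: it yields draws but never exposes p(x).
means = np.array([-2.0, 2.0])
stds = np.array([1.0, 0.5])
weights = np.array([0.5, 0.5])

def density(x):
    # explicit: p(x) = sum_k w_k * N(x; mu_k, sigma_k^2)
    comps = (np.exp(-0.5 * ((x - means) / stds) ** 2)
             / (stds * np.sqrt(2.0 * np.pi)))
    return float(weights @ comps)

def sample(rng, n):
    # implicit view: supports sample drawing without evaluating p(x)
    ks = rng.choice(2, size=n, p=weights)
    return rng.normal(means[ks], stds[ks])

rng = np.random.default_rng(0)
xs = sample(rng, 20000)
assert abs(xs.mean()) < 0.1          # mixture mean is 0.5*(-2) + 0.5*2 = 0
assert density(-2.0) > density(0.0) > 0.0
```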
 
  Biography
   2016 - present : Assistant Professor, School of Computing, KAIST
   2014 - 2016 : Research Staff Member, IBM T.J. Watson Research Center
   2014 : Ph.D., Computer Science, University of Texas at Austin

  Homepage
   http://mli.kaist.ac.kr