
Academic Events

Korean AI Association


Domestic Conference

Speakers and Lectures
▶ Applied Maths for ML
Lecturer
Prof. Bong-Kee Sin (Pukyong National University)

Content
Machine learning can fairly be called a discipline built on a handful of foundational areas of mathematics. In practice everyone ends up using some software tool, but a modest mathematical background is needed to understand that tool properly and to explain its results and failure modes. This lecture revisits and refreshes material that participants in this AI Winter School have most likely studied before and should largely recognize.
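As a small illustration of the kind of mathematical footing the lecture has in mind (this example is not from the course materials; the function and names are purely illustrative), an analytic gradient can be verified against a finite-difference approximation:

```python
# Gradient check for f(w) = (w1^2 + 3*w2)^2: compare the analytic
# gradient (chain rule) with a central finite-difference approximation.

def f(w):
    return (w[0] ** 2 + 3 * w[1]) ** 2

def analytic_grad(w):
    inner = w[0] ** 2 + 3 * w[1]
    return [2 * inner * 2 * w[0], 2 * inner * 3]

def numeric_grad(f, w, eps=1e-6):
    grad = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        grad.append((f(wp) - f(wm)) / (2 * eps))
    return grad

w = [1.0, 2.0]
print(analytic_grad(w))    # → [28.0, 42.0]
print(numeric_grad(f, w))  # close to [28.0, 42.0]
```

This kind of check is exactly where a little calculus pays off when debugging learning software: a mismatch between the two results points to an error in the hand-derived gradient.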

Biography
1999 - present : Professor, Dept. of IT Convergence and Application Engineering, Pukyong National University
1991 - 1995 : Ph.D., Computer Science, KAIST
1987 - 1991 : Senior Researcher, Korea Telecom Software Research Laboratory
▶ Machine Learning Basics
Lecturer
Prof. Eunho Yang (KAIST)

Content
Deep learning, a branch of machine learning, has recently drawn enormous attention with its explosive success in fields such as computer vision and natural language processing. To understand deep learning well, a firm grasp of the basic principles of machine learning is essential. This lecture briefly surveys those principles, from basic supervised/unsupervised learning through model capacity, regularization, and stochastic gradient descent.
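To make the last of those topics concrete, here is a minimal sketch of stochastic gradient descent on a toy least-squares problem (not part of the lecture; the data and learning rate are illustrative):

```python
# Stochastic gradient descent fitting y = w*x to noiseless data
# generated with true slope w = 2: each update uses the gradient of
# the squared error on a single randomly chosen sample.
import random

random.seed(0)
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0
lr = 0.1
for epoch in range(100):
    random.shuffle(data)            # "stochastic": random sample order
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2 on one sample
        w -= lr * grad

print(round(w, 3))  # → 2.0
```

The same update rule, applied to minibatches instead of single samples, is what trains essentially all deep networks.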

Biography
2016 - present : Assistant Professor, School of Computing, KAIST
2014 - 2016 : Research Staff Member, IBM T.J. Watson Research Center
2014 : Ph.D. in Computer Science, University of Texas at Austin

Homepage http://mli.kaist.ac.kr
▶ Deep Feedforward Networks
Lecturer
Prof. In-Jung Kim (Handong Global University)

Content
This lecture covers the key concepts of the deep feedforward network, the foundation of deep learning, and the back-propagation algorithm that deep learning relies on so heavily. It also introduces the hidden units used in neural networks and discusses the considerations that go into designing a network architecture.
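Back-propagation is just the chain rule applied layer by layer. As a hedged sketch (not the lecture's own material; the tiny network, target, and learning rate are all illustrative), here are the forward and backward passes for a one-hidden-unit network:

```python
import math

# One input -> one sigmoid hidden unit -> one linear output unit.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.0, 0.5
w1, w2 = 0.8, -0.4          # illustrative initial weights

for step in range(200):
    # forward pass
    h = sigmoid(w1 * x)
    y = w2 * h
    loss = 0.5 * (y - target) ** 2
    # backward pass: propagate dL back through each operation
    dy = y - target          # dL/dy
    dw2 = dy * h             # dL/dw2
    dh = dy * w2             # dL/dh
    dz = dh * h * (1 - h)    # through the sigmoid nonlinearity
    dw1 = dz * x             # dL/dw1
    # gradient step
    w1 -= 0.5 * dw1
    w2 -= 0.5 * dw2

print(loss)  # loss shrinks toward zero
```

Real frameworks automate exactly these local derivative products over arbitrarily deep computation graphs.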

Biography
2016 - present : Director, Machine Learning–Smart Car Center
2006 - present : Professor and Dean, School of Computer Science and Electrical Engineering, Handong Global University
2012 : Visiting Professor, U.C. Irvine
2001 - 2006 : Principal Researcher, Inzisoft Co., Ltd.
1990 - 2001 : Ph.D., Computer Science, KAIST

Homepage http://pro.handong.edu/callee/
▶ Regularization and Optimization for Deep Learning
Lecturer
Prof. Sung Ju Hwang (KAIST)

Content
This lecture covers the problem of overfitting in deep networks and the various regularization methods for preventing it, the particular characteristics of the optimization problems that arise in deep learning, and methods for finding good solutions in non-convex optimization.
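One of the simplest regularizers the lecture's topic covers is an L2 penalty. A minimal sketch (illustrative values, not course material) of how the penalty term enters the gradient update as weight decay:

```python
# L2 regularization adds (lam/2)*w^2 to the loss, hence lam*w to the
# gradient, so every SGD step also shrinks the weight toward zero.

def sgd_step(w, grad, lr=0.1, lam=0.0):
    return w - lr * (grad + lam * w)

w = 5.0
for _ in range(50):
    # pure decay: the data gradient is set to zero to isolate the effect
    w = sgd_step(w, grad=0.0, lam=1.0)

print(round(w, 4))  # weight has shrunk close to 0
```

In a real training run the data gradient and the decay term compete, biasing the solution toward small weights and thereby reducing overfitting.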

Biography
2018 - present : Assistant Professor, School of Computing, KAIST
2014 - 2017 : Assistant Professor, Dept. of Computer Science and Engineering, UNIST
2013 - 2014 : Postdoctoral Researcher, Disney Research
2013 : Ph.D. in Computer Science, University of Texas at Austin, USA

Homepage http://www.sungjuhwang.com
▶ Convolutional Neural Networks
Lecturer
Minsu Cho (POSTECH)

Content
This lecture introduces the convolutional neural network (CNN), the architecture most responsible for enabling the deep learning era, and analyzes the reasons for its success by placing it in historical context alongside earlier landmark models in computer vision. It also examines how CNNs are applied not only to image classification but also to object detection, semantic segmentation, correspondence, image captioning, and visual question answering.
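The core operation of a CNN can be sketched in a few lines. This toy example (not from the lecture; image and filter values are illustrative) slides a kernel over an image and produces one weighted sum per position:

```python
# A "valid" 2D convolution (strictly, cross-correlation, as in most
# deep learning libraries) implemented with plain loops.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal difference filter
print(conv2d(image, edge))  # → [[-1, -1], [-1, -1], [-1, -1]]
```

The key design points are weight sharing (the same small kernel is reused at every position) and locality, which together give CNNs their efficiency and translation tolerance.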

Biography
2016 - present : Assistant Professor, Dept. of Computer Science and Engineering, POSTECH
2012 - 2016 : Researcher, INRIA / École normale supérieure (ENS), Paris
2012 : Ph.D., Dept. of Electrical and Computer Engineering, Seoul National University

Homepage http://cvlab.postech.ac.kr/~mcho/
▶ Recurrent Neural Networks
Lecturer
Prof. Jaesik Choi (UNIST)

Content
Deep learning models for time-series data have applications in many domains, including finance and healthcare. This lecture surveys neural network models for analyzing time-series data, such as the recurrent neural network (RNN) and long short-term memory (LSTM), and introduces related applications including sentence summarization, memory networks, and image captioning. On the theoretical side, it shows by example that RNNs can represent Turing machines, and examines why RNNs built on linear models have difficulty representing long-term dependence.
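That last difficulty can be seen directly in a purely linear recurrence. In this sketch (illustrative, not from the lecture), the contribution of an input from t steps ago is scaled by w**t, so it either vanishes or explodes:

```python
# Why a linear recurrence forgets: with h_t = w * h_{t-1} + x_t, an
# input's influence after t steps is multiplied by w**t, which
# vanishes for |w| < 1 and explodes for |w| > 1.

def linear_rnn(inputs, w):
    h = 0.0
    for x in inputs:
        h = w * h + x
    return h

seq = [1.0] + [0.0] * 49       # one signal, then 49 empty steps
print(linear_rnn(seq, w=0.5))  # ~0.5**49: the early input has vanished
print(linear_rnn(seq, w=1.1))  # ~1.1**49: it has exploded instead
```

Gated architectures such as the LSTM address this by letting the network learn when to preserve its state unchanged rather than repeatedly rescaling it.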

Biography
2017 - present : Director, Explainable AI Research Center, Ministry of Science and ICT / UNIST
2013 - present : Associate Professor, School of Electrical and Computer Engineering, UNIST
2013 : Postdoctoral Fellow, Lawrence Berkeley National Laboratory
2012 : Ph.D., Computer Science, University of Illinois / Postdoctoral Researcher

Homepage http://sail.unist.ac.kr
▶ Deep Reinforcement Learning
Lecturer
Prof. Jooyoung Park (Korea University)

Content
Deep reinforcement learning is one of the most actively researched areas of modern AI; by combining reinforcement learning, control theory, and deep learning, it has achieved synergistic and rapid progress. This course surveys the major topics making up the field's past and present, including controlled Ito processes, stochastic optimal control, the Hamilton-Jacobi-Bellman equation, Markov decision processes, reinforcement learning, deep learning, and AlphaGo Zero, and reflects on future directions for the technology.
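The Bellman equation at the center of those topics can be demonstrated on a tiny Markov decision process. This sketch (illustrative states and rewards, not course material) applies the Bellman optimality backup until the values converge:

```python
# Value iteration on a tiny deterministic MDP: states 0, 1, 2, where 2
# is terminal and entering it yields reward 1. The backup is
#   V(s) = max_a [ r(s, a) + gamma * V(s') ].

gamma = 0.9
# (next_state, reward) for each available action in non-terminal states
transitions = {
    0: [(1, 0.0)],           # only move right
    1: [(0, 0.0), (2, 1.0)]  # move left, or right into the goal
}

V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    for s, actions in transitions.items():
        V[s] = max(r + gamma * V[ns] for ns, r in actions)

print(round(V[1], 4), round(V[0], 4))  # → 1.0 0.9
```

Deep reinforcement learning replaces the table V with a neural network so that the same backup idea scales to state spaces far too large to enumerate, as in AlphaGo Zero.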

Biography
1993 - present : Professor, Dept. of Control and Instrumentation Engineering, Korea University
1992 : Ph.D., Dept. of Electrical and Computer Engineering, University of Texas at Austin
1983 : B.S., Dept. of Electrical Engineering, Seoul National University

Homepage http://sites.google.com/site/rbfpark3
▶ Hands-on Deep Learning with TensorFlow
Lecturer
Prof. Gil-Jin Jang (Kyungpook National University)

Content
TensorFlow is a tool that makes pre-implemented deep learning algorithms easy to use through a Python interface, enabling even non-specialists without deep knowledge of neural networks and deep learning to apply deep learning to a wide range of AI problems. Widely used networks such as the DNN (deep neural network), CNN (convolutional neural network), RNN (recurrent neural network), and LSTM (long short-term memory) are provided as built-in functions, and open-source software built on them can solve most AI problems. Starting from the basics of TensorFlow, this lecture walks through implementations and working code for DNNs and CNNs that recognize public datasets such as MNIST and CIFAR10, and presents RNN implementations for time-series data together with live demos. The material is based on the tutorials at tensorflow.org and also covers how to use GPUs and how to use TensorFlow efficiently.

Biography
2014 - present : Assistant/Associate Professor, School of Electronics Engineering, Kyungpook National University
2010 - 2013 : Assistant Professor, School of Electrical and Computer Engineering, UNIST
2006 - 2009 : Postdoctoral Researcher, University of California, San Diego
2004 - 2006 : Research Staff Member, Samsung Advanced Institute of Technology
2004 : Ph.D., Computer Science, KAIST

Homepage http://milab.knu.ac.kr/