▶ Deep Feedforward Networks

Lecturer
Prof. Wonzoo Chung (Korea University)

Abstract
This lecture covers the feedforward neural network (FNN), the foundation of deep learning: the FNN model, its universal approximation property, and the back-propagation algorithm used to train it. It also discusses the various activation functions suited to deep FNNs and practical considerations for implementing them.
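As a rough illustration of the forward pass and back-propagation described above, the following NumPy sketch trains a tiny two-layer network with ReLU activations by gradient descent. The data, shapes, and learning rate are made-up toy choices, not material from the lecture.

```python
import numpy as np

# Toy regression data (shapes are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                # 32 samples, 4 features
y = np.sin(X).sum(axis=1, keepdims=True)    # arbitrary smooth target

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def relu(z):
    return np.maximum(z, 0.0)

lr = 0.05
losses = []
for _ in range(200):
    # forward pass
    z1 = X @ W1 + b1
    h = relu(z1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backward pass: apply the chain rule layer by layer
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z1 = g_h * (z1 > 0)                   # derivative of ReLU
    g_W1 = X.T @ g_z1; g_b1 = g_z1.sum(axis=0)
    # gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

Running the loop drives the mean-squared error down, which is the whole point of back-propagation: it supplies the exact gradients that gradient descent needs.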
Biography
2008 - present : Professor, Department of Computer Science and Engineering, Korea University
2005 - 2007 : Assistant Professor, Department of Communication Engineering, Myongji University
2003 - 2005 : Senior System Architect, Dotcast Inc.
1998 - 2003 : Ph.D. in Electrical Engineering, Cornell University
▶ Convolutional Networks

Lecturer
Prof. Minsu Cho (POSTECH)

Abstract
This lecture introduces the convolutional neural network (CNN), the architecture most responsible for enabling the deep learning era, and analyzes the reasons for its success by tracing its historical context through the major earlier models of computer vision. It also examines how CNNs are used not only for image classification but also for object detection, semantic segmentation, image correspondence, image captioning, and visual question answering.
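The core operation behind all of these CNN applications is the convolution itself. A minimal loop-based sketch of a 2-D "valid" convolution in NumPy, with a made-up image and filter purely for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Direct (loop-based) 2-D valid convolution/cross-correlation."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is a dot product of the kernel with one
            # local patch; the same weights are shared at every position.
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
edge = np.array([[1.0, -1.0]])                    # horizontal difference filter
feat = conv2d(image, edge)                        # 5x4 feature map
```

Because the toy image increases by exactly 1 along each row, every response of the difference filter is -1, showing how a convolutional filter detects the same local pattern everywhere in the image.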
Biography
2016 - present : Assistant Professor, Department of Computer Science and Engineering, POSTECH
2012 - 2016 : Researcher, INRIA / École Normale Supérieure (ENS), France
2012 : Ph.D., Department of Electrical and Computer Engineering, Seoul National University
▶ Recurrent and Recursive Nets

Lecturer
Prof. Jaesik Choi (UNIST)

Abstract
Deep learning models for time-series data can be used in many applications, including finance and healthcare. This lecture surveys neural network models for analyzing time-series data, such as the recurrent neural network (RNN) and long short-term memory (LSTM), and introduces related applications including sentence summarization, memory networks, and image captioning. On the theoretical side, it shows by example that RNNs can represent a Turing machine, and examines why RNNs based on linear models have difficulty representing long-term dependence.
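The difficulty with long-term dependence in linear recurrences can be sketched numerically. For h_t = W h_{t-1}, the Jacobian of h_T with respect to h_0 is W^T, so the influence of early inputs shrinks (or blows up) geometrically with the spectral radius of W. A small NumPy illustration with an arbitrary symmetric matrix scaled to spectral radius 0.9:

```python
import numpy as np

# Arbitrary symmetric recurrence matrix, rescaled to spectral radius 0.9.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
W = (A + A.T) / 2
W *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(W)))

# Jacobian of h_T w.r.t. h_0 in the linear recurrence h_t = W h_{t-1} is W**T.
grad_norm_short = np.linalg.norm(np.linalg.matrix_power(W, 5))   # after 5 steps
grad_norm_long = np.linalg.norm(np.linalg.matrix_power(W, 50))   # after 50 steps
```

After 50 steps the Jacobian norm has decayed roughly like 0.9**50, so gradient signals from 50 steps back are essentially gone; this vanishing is exactly what gated architectures such as the LSTM were designed to counteract.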
Biography
2017 - present : Director, Explainable Artificial Intelligence Research Center (Ministry of Science and ICT / UNIST)
2013 - present : Associate Professor, School of Electrical and Computer Engineering, UNIST
2013 : Postdoctoral Fellow, Lawrence Berkeley National Laboratory
2012 : Ph.D. in Computer Science, University of Illinois / Postdoctoral Researcher
▶ Structured Probabilistic Models for DL

Lecturer
Prof. Eunho Yang (KAIST)

Abstract
Structured probabilistic models are one of the key ingredients of recent deep learning research, including natural language processing. To better understand modern deep learning algorithms such as deep generative models, a solid grasp of the fundamentals of structured probabilistic models is essential. This lecture examines probabilistic graphical models, which use a graph to model the interactions among the random variables of a probability distribution, and discusses recent advances in graphical models that incorporate deep learning.
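To make the graph-based factorization concrete, here is a minimal NumPy sketch of a three-variable directed graphical model A → B → C. The graph structure lets the joint distribution factorize as P(A) P(B|A) P(C|B); the probability tables below are invented purely for illustration.

```python
import numpy as np

# Conditional probability tables for the chain A -> B -> C (made-up numbers).
P_A = np.array([0.6, 0.4])                 # P(A)
P_B_given_A = np.array([[0.7, 0.3],        # row = value of A
                        [0.2, 0.8]])       # P(B | A)
P_C_given_B = np.array([[0.9, 0.1],        # row = value of B
                        [0.5, 0.5]])       # P(C | B)

# The graph implies the factorization P(A, B, C) = P(A) P(B|A) P(C|B):
joint = (P_A[:, None, None]
         * P_B_given_A[:, :, None]
         * P_C_given_B[None, :, :])

# Inference by marginalization: sum out A and B to get P(C).
marg_C = joint.sum(axis=(0, 1))
```

The factorized joint needs only 2 + 4 + 4 table entries instead of 2**3 unconstrained ones; that exponential saving is precisely what the graph structure buys in larger models.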
Biography
2016 - present : Assistant Professor, School of Computing, KAIST
2014 - 2016 : Research Staff Member, IBM T.J. Watson Research Center
2014 : Ph.D. in Computer Science, University of Texas at Austin
▶ Autoencoder and Representation Learning

Lecturer
Prof. In-Jung Kim (Handong Global University)

Abstract
This lecture covers autoencoders and representation learning. Beyond the basic function of an autoencoder, reproducing its input pattern, it introduces the many autoencoder variants that perform tasks such as feature extraction and noise removal. It also explains the concept and importance of representation learning and how representation learning takes place in deep learning.
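The basic input-reproducing autoencoder can be sketched in a few lines of NumPy: an encoder compresses each input through a narrow bottleneck and a decoder reconstructs it, with both trained to minimize reconstruction error. The data, sizes, and learning rate below are toy assumptions, and the layers are linear for simplicity.

```python
import numpy as np

# Toy 6-D data that actually lies on a 2-D subspace, so a 2-unit
# bottleneck can in principle reconstruct it well.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 6))

W_enc = rng.normal(scale=0.1, size=(6, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 6))   # decoder weights

lr = 0.005
losses = []
for _ in range(500):
    code = X @ W_enc                 # compressed representation
    recon = code @ W_dec             # reconstruction of the input
    err = recon - X
    losses.append(float(np.mean(err ** 2)))
    # gradients of the reconstruction loss via the chain rule
    g_recon = 2 * err / len(X)
    g_W_dec = code.T @ g_recon
    g_code = g_recon @ W_dec.T
    g_W_enc = X.T @ g_code
    W_enc -= lr * g_W_enc
    W_dec -= lr * g_W_dec
```

The reconstruction error falls as training proceeds, and the 2-D `code` is the learned representation; denoising or sparse variants differ mainly in what corruption or penalty is added to this same setup.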
Biography
2016 - present : Director, Machine Learning-Smart Car Center
2006 - present : Professor / Dean, School of Computer Science and Electrical Engineering, Handong Global University
2012 : Visiting Professor, U.C. Irvine
2001 - 2006 : Principal Researcher, Inzisoft Co., Ltd.
1990 - 2001 : Ph.D., Department of Computer Science, KAIST
▶ Deep Generative Models

Lecturer
Prof. Seungjin Choi (POSTECH)

Abstract
Deep learning has enjoyed great success over the last decade.
Most of the emphasis has been on discriminative models,
with their applications to recognition problems.
On the other hand, generative models have played
a critical role in machine learning as well.
In this tutorial, I introduce recent advances in deep generative models,
such as variational autoencoders (VAEs)
and generative adversarial networks (GANs).
I will describe these models from the perspective of prescribed versus
implicit models and show some successful applications.
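One mechanism worth previewing is the reparameterization trick at the heart of the VAE: rather than sampling z ~ N(mu, sigma²) directly, one writes z = mu + sigma·eps with eps ~ N(0, I), so the sample is a differentiable function of the encoder outputs. A small NumPy sketch with made-up parameter values:

```python
import numpy as np

# Encoder outputs for one data point (illustrative values).
mu = np.array([1.0, -2.0])
log_var = np.array([0.0, 0.5])

# Reparameterization: the randomness lives in eps, which does not depend
# on mu or log_var, so gradients can flow through mu and log_var.
rng = np.random.default_rng(0)
eps = rng.normal(size=(100_000, 2))
z = mu + np.exp(0.5 * log_var) * eps

sample_mean = z.mean(axis=0)   # should approach mu
sample_var = z.var(axis=0)     # should approach exp(log_var)
```

The empirical mean and variance of the samples match the encoder's mu and exp(log_var), confirming that the trick produces the intended distribution while keeping the sampling step differentiable.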
Biography
2001 - present : Professor, Department of Computer Science and Engineering, POSTECH
1997 - 2000 : Assistant Professor, School of Electrical and Electronics Engineering, Chungbuk National University
1997 : Frontier Researcher, RIKEN, Japan
1996 : Ph.D., Electrical Engineering, University of Notre Dame
1989 : M.S., Department of Electrical Engineering, Seoul National University
1987 : B.S., Department of Electrical Engineering, Seoul National University
▶ Applications in Computer Vision

Lecturer
Prof. Gunhee Kim (Seoul National University)

Abstract
This lecture covers few-shot learning with deep neural network models, an active research topic in computer vision. Deep neural networks are limited by their need for very large amounts of high-quality training data; few-shot learning, which can learn new classes from only a handful of examples, is being actively studied as an alternative. The lecture introduces the basic concepts and recent research.
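A common baseline in few-shot learning, in the spirit of prototypical networks, is to represent each new class by the mean embedding of its few support examples and label queries by the nearest prototype. A toy NumPy sketch using 2-D points in place of learned embeddings (all data here is invented):

```python
import numpy as np

# Three "novel" classes, 5 support examples each (toy 2-D embeddings).
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
support = centers[:, None, :] + rng.normal(scale=0.5, size=(3, 5, 2))

# One prototype per class: the mean of its support embeddings.
prototypes = support.mean(axis=1)

def classify(query):
    """Assign a query to the class of its nearest prototype."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))

# One query drawn from each class.
queries = centers + rng.normal(scale=0.5, size=(3, 2))
preds = [classify(q) for q in queries]
```

With only five labeled examples per class the nearest-prototype rule already separates the classes; in the actual few-shot methods covered in the lecture, the embedding itself is learned so that this simple rule works on real images.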
Biography
2015 - present : Assistant Professor, Department of Computer Science and Engineering, College of Engineering, Seoul National University
2013 - 2015 : Postdoctoral Researcher, Disney Research
2009 - 2013 : Ph.D. in Computer Science, Carnegie Mellon University
▶ Applications in Sequence Data Modeling

Lecturer
Prof. Sungho Jo (KAIST)

Abstract
Among deep learning models for time-series data, this lecture focuses on the recurrent neural network (RNN) family, comparing long short-term memory (LSTM) and the gated recurrent unit (GRU) to examine why they perform so well. It also covers techniques that extend these models, such as the attention model, and surveys how sequential deep learning models are being used in research areas such as speech recognition, image captioning, and action recognition.
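The attention mechanism that extends these recurrent models can be sketched compactly. Below is scaled dot-product attention in NumPy: each query forms a probability distribution over the keys and returns the corresponding weighted sum of values. Shapes and inputs are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weights @ V, one row per query."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key similarities
    weights = softmax(scores, axis=-1)        # distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # e.g. 2 decoder states as queries
K = rng.normal(size=(5, 4))   # e.g. 5 encoder states as keys
V = rng.normal(size=(5, 3))   # values attached to the encoder states
out, weights = attention(Q, K, V)
```

Each row of `weights` sums to one, so the output for each query is a convex combination of the encoder values; this is what lets a decoder focus on different input positions at different steps.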
Biography
2008 - present : Assistant Professor, Associate Professor, School of Computing, KAIST
2006 - 2007 : Postdoctoral Researcher, MIT Media Lab
2006 : Ph.D. in Electrical Engineering and Computer Science, MIT
▶ Applications in Healthcare

Lecturer
Prof. Sung Ju Hwang (KAIST)

Abstract
Artificial intelligence based on modern machine learning, deep learning in particular, has recently surpassed human performance on a number of problems, and its application to healthcare is being attempted on many fronts. However, because of the special characteristics of the medical domain, existing machine learning methodology cannot be applied as-is to building medical AI. This lecture introduces the challenges that the healthcare domain poses for artificial intelligence and machine learning approaches to addressing them.
Biography
2018 - present : Assistant Professor, School of Computing, KAIST
2014 - 2017 : Assistant Professor, School of Electrical and Computer Engineering, UNIST
2013 - 2014 : Postdoctoral Researcher, Disney Research
2013 : Ph.D. in Computer Science, University of Texas at Austin, USA