Prof. Ming-Hsuan Yang (UC Merced, Google)
Title: Learning to Synthesize Image and Video Content
Abs: In this talk, I will first review our recent work on synthesizing image and video content. The underlying theme is to exploit different priors to synthesize diverse content with robust formulations. I will then present our recent work on image synthesis, video synthesis, and frame interpolation, as well as on learning to synthesize images with limited training data. Time permitting, I will also discuss recent findings on other vision tasks.
Bio: Yang is a professor of Electrical Engineering and Computer Science at the University of California, Merced, and a research scientist at Google. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2000. He served as a program co-chair for the IEEE International Conference on Computer Vision (ICCV) in 2019 and the Asian Conference on Computer Vision (ACCV) in 2014, and as a general co-chair for ACCV in 2016. He served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) from 2007 to 2011, and serves as an associate editor of the International Journal of Computer Vision (IJCV), Computer Vision and Image Understanding (CVIU), Image and Vision Computing (IVC), and the Journal of Artificial Intelligence Research (JAIR). He has received numerous paper awards from CVPR, ACCV, and UIST. Yang received a Google Faculty Research Award in 2009, the Distinguished Early Career Research Award from the UC Merced Senate in 2011, a CAREER Award from the National Science Foundation in 2012, and the Distinguished Research Award from the UC Merced Senate in 2015. He is a Fellow of the IEEE and the ACM.
Prof. Suvrit Sra (MIT)
Title: Two Surprises When Optimization Meets Machine Learning
Abs: It is well known that there are large gaps between optimization theory and machine learning practice. However, two even more surprising gaps have persisted at the fundamental level. The first arises from ignoring the elephant in the room: non-differentiable non-convex optimization, e.g., when training a deep ReLU network. The second surprise is more disturbing: it uncovers a non-convergence phenomenon in the training of deep networks, and as a result it challenges existing convergence theory and training algorithms. Both of these fundamental surprises open new directions of research, and I will talk about some of our theoretical progress on them, as well as potential research questions.
Bio: Suvrit Sra is an Associate Professor in the Department of EECS at MIT, a core faculty member of IDSS and LIDS at MIT, and a member of the MIT-ML and Statistics groups. He obtained his Ph.D. in Computer Science from the University of Texas at Austin. Before moving to MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. His research lies at the intersection of machine learning and mathematics, spanning areas such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization. He founded the Optimization for Machine Learning (OPT) series of workshops in 2008 (at NeurIPS). He is a co-founder and the chief scientist of macro-eyes, a global OR+OPT+ML+healthcare startup.
Nako Sung (Executive Leader, Naver CLOVA)
Title: Hyperscale AI Platforms and Future Prospects
Abs: I will introduce the current state of hyperscale AI technology following scaling laws, and share the insights we have gained while building HyperCLOVA, Korea's first hyperscale Korean-language AI, into a platform called CLOVA Studio.
Bio:
Graduated from the Department of Computer Engineering, Seoul National University
1999–2017: NCSOFT (Department Head), RedDuck (Director), Hexflex (CTO)
2017–present: Executive Leader and Director, Naver CLOVA