Abs: In this talk, I will first review our recent work on synthesizing image and video content. The underlying theme is to exploit different priors to synthesize diverse content with robust formulations. I will then present our recent work on image synthesis, video synthesis, and frame interpolation, as well as on learning to synthesize images with limited training data. Time permitting, I will also discuss recent findings on other vision tasks.
Bio: Yang is a professor in Electrical Engineering and Computer Science at the University of California, Merced, and a research scientist at Google. He received the Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign in 2000. He served as a program co-chair for the IEEE International Conference on Computer Vision (ICCV) in 2019 and the Asian Conference on Computer Vision (ACCV) in 2014, and as a general co-chair for the Asian Conference on Computer Vision in 2016. He served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) from 2007 to 2011, and is an associate editor of the International Journal of Computer Vision (IJCV), Computer Vision and Image Understanding (CVIU), Image and Vision Computing (IVC), and the Journal of Artificial Intelligence Research (JAIR). He has received numerous paper awards from CVPR, ACCV, and UIST. Yang received a Google Faculty Award in 2009, the Distinguished Early Career Research Award from the UC Merced Senate in 2011, the CAREER Award from the National Science Foundation in 2012, and the Distinguished Research Award from the UC Merced Senate in 2015. He is a Fellow of the IEEE and the ACM.
Abs: It is well known that there are large gaps between optimization theory and machine learning practice. However, two even more surprising gaps have persisted at the fundamental level. The first arises from ignoring the elephant in the room: non-differentiable non-convex optimization, e.g., when training a deep ReLU network. The second surprise is more disturbing: it uncovers a non-convergence phenomenon in the training of deep networks, and as a result challenges existing convergence theory and training algorithms. Both of these fundamental surprises open new directions of research, and I will talk about some of our theoretical progress on them, as well as potential research questions.
Bio: Suvrit Sra is an Associate Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT, a core faculty member of IDSS and LIDS at MIT, and a member of the MIT-ML and Statistics groups. He obtained his PhD in Computer Science from the University of Texas at Austin. Before moving to MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. His research lies at the intersection of machine learning and mathematics, spanning areas such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization. He founded the Optimization for Machine Learning (OPT) series of workshops in 2008 (at NeurIPS). He is a co-founder and the chief scientist of macro-eyes, a global OR+OPT+ML+healthcare startup.
I will introduce the current state of hyperscale AI that follows scaling laws, and share insights gained while building HyperCLOVA, Korea's first hyperscale Korean-language AI, into a platform called CLOVA Studio.