Abs: In recent years, deep learning has progressed tremendously in many fields of AI, such as visual perception, speech recognition, language understanding, and robotics. However, many of these methods require large amounts of supervision and do not generalize well to unseen tasks. We still have a long way to go toward developing a general-purpose artificial intelligence agent that can perform many useful tasks with high sample efficiency and strong generalization to previously unseen tasks. In this talk, I'll present my recent work on tackling these challenges. First, I'll present several methods for learning representations and models of the environment that improve the exploration, sample efficiency, and generalization performance of agents. For example, I'll show that learning representations of the controllable aspects of the environment dynamics leads to improved exploration in sparse-reward tasks. Further, I'll present methods for learning latent representations from environment dynamics, which improve sample efficiency and generalization performance in various control tasks. Finally, I'll present a method for solving complex tasks with hierarchical compositional dependencies between sub-tasks. Specifically, we propose and address a novel few-shot reinforcement learning problem, where a task is characterized by a sub-task graph that describes a set of sub-tasks and their dependencies, which are unknown to the agent. Instead of directly learning a black-box meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. Our experimental results on grid-world domains and game environments with compositional tasks show that the proposed method can accurately infer the latent task structures and adapt more efficiently than prior methods.
We'll further present a method for transferring prior task structures learned at training time to unseen novel tasks at test time, which leads to an order-of-magnitude gain in sample efficiency.
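To make the sub-task-graph setting concrete, here is a toy sketch of precondition inference over three hypothetical subtasks A, B, and C. It uses a naive logical induction (intersecting the completion sets under which each subtask was observed to be eligible), not MSGI's actual inference procedure or environments:

```python
import itertools

# Hypothetical ground-truth subtask graph: each subtask becomes eligible
# once all of its precondition subtasks are complete.
TRUE_PRECOND = {"A": set(), "B": {"A"}, "C": {"A", "B"}}

def eligible(task, done):
    """The environment reveals eligibility given the completed set `done`."""
    return TRUE_PRECOND[task] <= done

subtasks = sorted(TRUE_PRECOND)
# Start from the loosest hypothesis and shrink it with each observation.
inferred = {t: set(subtasks) for t in subtasks}
for r in range(len(subtasks) + 1):
    for done in map(set, itertools.combinations(subtasks, r)):
        for t in subtasks:
            if eligible(t, done):
                inferred[t] &= done  # keep only always-present prerequisites
```

For AND-type preconditions like these, `inferred` recovers `TRUE_PRECOND` exactly once enough completion sets have been observed.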
Bio: Honglak Lee is currently a Senior Vice President and Chief Scientist of Artificial Intelligence at LG AI Research and an Associate Professor of Computer Science at the University of Michigan, Ann Arbor. Previously, he worked as a Research Scientist at Google Research, Brain Team. He received his Ph.D. from the Computer Science Department at Stanford University in 2010, advised by Prof. Andrew Ng. His research focuses on deep learning and representation learning, spanning unsupervised and semi-supervised learning, supervised learning, transfer learning, reinforcement learning, structured prediction, graphical models, and optimization. His methods have been successfully applied to computer vision and other perception problems. He received best paper awards at ICML and CEAS. He has served as a guest editor of the IEEE TPAMI Special Issue on Learning Deep Architectures, as well as an area chair of ICML, NIPS, CVPR, ICCV, ECCV, AAAI, IJCAI, and ICLR. He received the Google Faculty Research Award (2011) and the NSF CAREER Award (2015), and was selected as one of AI's 10 to Watch by IEEE Intelligent Systems (2013) and as a research fellow by the Alfred P. Sloan Foundation (2016).
Abs: Optimization of many deep learning hyperparameters can be formulated as a bilevel optimization problem. While most black-box and gradient-based approaches require many independent training runs, we aim to adapt hyperparameters online as the network trains. The main challenge is to approximate the response Jacobian, which captures how the minimum of the inner objective changes as the hyperparameters are perturbed. To do this, we introduce the self-tuning network (STN), which fits a hypernetwork to approximate the best response function in the vicinity of the current hyperparameters. Differentiating through the hypernetwork lets us efficiently approximate the gradient of the validation loss with respect to the hyperparameters. We train the hypernetwork and hyperparameters jointly. Empirically, we can find hyperparameter settings competitive with Bayesian Optimization in a single run of training, and in some cases find hyperparameter schedules that outperform any fixed hyperparameter value.
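As a toy illustration of the mechanism, the sketch below solves a hypothetical one-dimensional bilevel problem: the inner best response has a closed form, a local linear fit stands in for the hypernetwork, and the validation loss is differentiated through that fit to update the hyperparameter online. This is a minimal sketch of the idea, not the STN architecture itself:

```python
# Toy bilevel problem (hypothetical, for illustration only):
#   inner:  w*(lam) = argmin_w (w - 3)^2 + lam * w^2  =>  w*(lam) = 3 / (1 + lam)
#   outer:  minimize validation loss L_val(w) = (w - 1)^2 over lam
# The outer optimum is lam = 2, since w*(2) = 1.

def inner_best_response(lam):
    return 3.0 / (1.0 + lam)

def val_loss(w):
    return (w - 1.0) ** 2

lam, lr, eps = 0.1, 0.3, 0.01
for step in range(500):
    # Fit a local linear "hypernetwork" w_hat(lam) = a + b * lam around the
    # current lam, from best responses at nearby perturbed hyperparameters.
    lam1, lam2 = lam - eps, lam + eps
    w1, w2 = inner_best_response(lam1), inner_best_response(lam2)
    b = (w2 - w1) / (lam2 - lam1)  # approximate response Jacobian dw*/dlam
    a = w1 - b * lam1
    w_hat = a + b * lam
    # Differentiate the validation loss through the linear fit.
    grad_lam = 2.0 * (w_hat - 1.0) * b
    lam -= lr * grad_lam
```

Running this drives `lam` toward the outer optimum of 2 in a single "training run", mirroring how STNs adapt hyperparameters while the network trains.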
Bio: Roger Grosse is an Assistant Professor of Computer Science at the University of Toronto and a member of the Vector Institute. Previously, he received his Ph.D. in Computer Science from MIT, studying with Bill Freeman and Joshua Tenenbaum, and was then a Postdoctoral Fellow at the University of Toronto, working with Ruslan Salakhutdinov. He has won multiple paper awards at top machine learning conferences. He holds a Canada Research Chair in Probabilistic Inference and Deep Learning, as well as a Sloan Research Fellowship.
Abs: Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from deep learning, enabling super-human performance in the game of Go and highly accurate prediction of protein structure, e.g., AlphaFold. In this talk, we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We'll explain why this is important and show some simple examples of uncertainty propagation in practice.
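For readers unfamiliar with the building block, here is a minimal sketch of single-layer Gaussian-process regression with an RBF kernel, using the standard posterior equations (this illustrates the uncertainty a deep GP would propagate through its layers, not the deep GP itself). The predictive variance collapses near observed inputs and grows away from the data:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """RBF (squared-exponential) kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Exact GP posterior mean and per-point variance at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
# Predict at one observed input (0.0) and one far from the data (3.0):
mean, var = gp_posterior(x, y, np.array([0.0, 3.0]))
```

Here `var[0]` (at the observed input) is near zero while `var[1]` (far from the data) is close to the prior variance; a deep GP composes such layers so that this uncertainty flows through the whole function.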
Bio: Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge. He has been working on machine learning models for over 20 years. He recently returned to academia after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the physical world. This interest was triggered by deploying machine learning in the African context, where 'end-to-end' solutions are normally required. This has inspired new research directions at the interface of machine learning and systems research; this work is funded by a Senior AI Fellowship from the Alan Turing Institute. Neil is also a visiting professor at the University of Sheffield and the co-host of Talking Machines.
Bio: Kyunghoon Bae is the first president of LG AI Research, which was launched in December last year. He has been leading LG AI Research to establish the medium- and long-term AI strategies of the LG Group, discover AI business models, and create synergy among LG subsidiaries. He received his Ph.D. in computer vision engineering in 2006. He joined the LG Group in 2016 and served as the head of AI research at the LG Economic Research Institute, LG U+, and LG Science Park. He has successfully solved practical problems across the areas of AI the group needs. He is building a top-notch research organization, whose achievements include winning SQuAD, the CVPR Continuous Learning Contest, and multiple paper awards at top AI conferences.
Abs: This talk will address the role and future of artificial intelligence as we anticipate it. Major issues for discussion include:
Bio: Chang Kyung Kim has been a professor in the Dept. of Materials Science and Engineering at Hanyang University since 1997. He is a materials scientist by training. He holds a Ph.D. in Materials Science and Engineering from MIT, and Master's and Bachelor's degrees in Metallurgical Engineering, both from Seoul National University. He just resumed his faculty position after several government service positions, including Vice Minister of the Ministry of Education, Science and Technology from August 2010 to June 2012 and Science Secretary to the President at the Office of the President from Feb. 2008 to Feb. 2009. For his excellent service in these positions, he was awarded the Order of a Great Leap in S&T and the Order of Service Merit (Yellow) in 2010 and 2013, respectively.
Abs: Contrastive self-supervised learning (CSSL) has shown impressive results in learning visual representations from unlabeled images by enforcing invariance to different data augmentations. In this talk, I will introduce two techniques for improving the generalization performance of CSSL on downstream tasks. The first was inspired by the observation that the augmentation invariance induced by CSSL can be harmful to downstream tasks that rely on characteristics altered by the data augmentations, e.g., location- or color-sensitive tasks. To tackle this issue, we suggest optimizing an auxiliary self-supervised loss that encourages preserving augmentation-aware information in the learned representations, which can benefit their transferability. The second was inspired by the observation that the representations learned by CSSL are often biased by spurious scene correlations between different objects (i.e., contextual bias) or between object and background (i.e., background bias). To tackle this issue, we develop a novel object-aware CSSL framework that first (a) localizes objects in a self-supervised manner and then (b) debiases scene correlations via appropriate data augmentations that take the inferred object locations into account. This is joint work with Hankook Lee (KAIST), Sangwoo Mo (KAIST), Hyunwoo Kang (KAIST), Kibok Lee (Amazon), Kimin Lee (UC Berkeley), and Honglak Lee (U. Michigan).
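As background, a generic InfoNCE-style contrastive objective of the kind CSSL methods build on can be sketched as follows. This is a SimCLR-style loss on toy embeddings, not the specific auxiliary or object-aware losses proposed in the talk:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for (N, D) embeddings of two augmented views of N images.

    Row i of z1 and row i of z2 are treated as the positive pair; all other
    rows in z2 serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Two views of the same images (small perturbation) vs. unrelated pairings:
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
random_pairs = info_nce(z, rng.normal(size=z.shape))
```

Minimizing this loss pulls the two augmented views of each image together, which is exactly the augmentation invariance the talk's first technique argues can discard useful augmentation-aware information.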
Bio: Jinwoo Shin is currently an associate professor (jointly affiliated) in the Kim Jaechul Graduate School of AI and the School of Electrical Engineering at KAIST. He is also a KAIST endowed chair professor. He obtained B.S. degrees (in Math and CS) from Seoul National University in 2001, and his Ph.D. degree (in Math) from the Massachusetts Institute of Technology in 2010, receiving the George M. Sprowls Award (for the best MIT CS Ph.D. theses). He was a postdoctoral researcher at the Algorithms & Randomness Center, Georgia Institute of Technology, in 2010-2012 and at the Business Analytics and Mathematical Sciences Department, IBM T. J. Watson Research Center, in 2012-2013. Dr. Shin's early work was mostly in applied probability and theoretical computer science. After joining KAIST in Fall 2013, he began working on the algorithmic foundations of machine learning. He has received numerous awards, including the Rising Star Award in 2015 from the Association for Computing Machinery (ACM) Special Interest Group on computer systems performance evaluation (SIGMETRICS).