Abs:
Continual learning, especially class-incremental learning, often uses an episodic memory of past data to improve performance. Updating a model with the episodic memory resembles (1) updating the model on the past knowledge stored in the memory under a few-shot learning scheme, and (2) learning from an imbalanced distribution of past and present data. We address unrealistic factors in popular continual learning setups and propose a few ideas for conducting continual learning research in realistic scenarios.
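To make the few-shot and class-imbalance view concrete, here is a minimal, hypothetical Python sketch of episodic-memory rehearsal; the `EpisodicMemory` class, the reservoir-sampling policy, and the batch sizes are illustrative assumptions, not the speaker's specific method.

```python
import random

class EpisodicMemory:
    """Fixed-size buffer of past (x, y) examples, filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep each of the n_seen examples with probability capacity / n_seen.
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.buffer[idx] = example

    def sample(self, k):
        # Only a handful of stored examples per past class are replayed,
        # which is why the update resembles few-shot learning on old classes.
        return random.sample(self.buffer, min(k, len(self.buffer)))


def make_rehearsal_batch(current_batch, memory, replay_size):
    """Current-task examples dominate the batch; replayed past examples are few,
    so the effective class distribution is imbalanced toward the present task."""
    return current_batch + memory.sample(replay_size)
```

In a typical rehearsal loop, each incoming mini-batch of present-task data would be combined with a small replayed sample via make_rehearsal_batch before the gradient update, which is the imbalance the abstract refers to.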
Bio:
Starting with AlexNet in 2012, convolutional neural network (CNN)-based deep learning architectures for image recognition, such as VGG, GoogLeNet, and ResNet, have been proposed. These architectures serve as backbone models for solving various computer vision problems such as classification, object detection, and segmentation. Recently, transformer-based models, originally used mainly in natural language processing, have been applied to computer vision. Unlike CNNs, these transformer-based models are built on the attention mechanism and have recently achieved strong performance on a variety of tasks. In this lecture, we will look at the evolution of these deep learning architectures and also cover the recently proposed transformer-based models.
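As a reference for the attention mechanism mentioned above, the following is a minimal NumPy sketch of scaled dot-product self-attention over toy "patch token" embeddings; the function name and dimensions are illustrative assumptions, not taken from any particular architecture covered in the lecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each query attends to all keys,
    and the output is an attention-weighted sum of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)           # (n_q, n_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)            # softmax over keys
    return weights @ V                                         # (n_q, d_v)

# Example: 4 image-patch tokens with 8-dimensional embeddings (toy sizes).
tokens = np.random.randn(4, 8)
out = scaled_dot_product_attention(tokens, tokens, tokens)    # self-attention
print(out.shape)  # (4, 8)
```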
Recently, diffusion-based generative models have shown impressive performance in synthesized image quality for image generation and translation. In particular, text-to-image translation synthesizes high-quality images that reflect the semantic meaning of an input text, and diffusion models play a major role in this significant progress. In this talk, I will present how diffusion models work in detail and discuss future research directions.
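For intuition about how diffusion models work, here is a minimal NumPy sketch of a DDPM-style forward (noising) process with a linear noise schedule; the schedule values, tensor shapes, and the hypothetical denoiser eps_theta mentioned in the comments are illustrative assumptions, not the talk's specific formulation.

```python
import numpy as np

# Linear noise schedule beta_t and its cumulative products (DDPM-style).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_diffuse(x0, t, rng=np.random):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return x_t, noise

# Training (conceptually): a network eps_theta(x_t, t) is trained to predict
# `noise`, e.g. with the loss ||noise - eps_theta(x_t, t)||^2; image synthesis
# then reverses the chain step by step, starting from pure Gaussian noise.
x0 = np.random.randn(3, 32, 32)        # a toy "image"
x_t, eps = forward_diffuse(x0, t=500)  # noised sample halfway through the chain
```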