Abs:
Optimization plays a fundamental role in training neural networks in machine learning. In this tutorial, we will introduce the fundamentals of gradient- and stochastic-gradient-based optimization algorithms. We will discuss the convergence analysis of SGD in the convex and non-convex settings; continuous-time analysis of SGD through ODEs and SDEs; and momentum- and non-momentum-based accelerations.
Bio:
Deep learning has produced significant breakthroughs in several fields such as computer vision, natural language processing, and speech recognition. Such research is based on neural networks designed for Euclidean-structured data such as images, text, or acoustic signals. In this tutorial, I will introduce geometric deep learning (GDL): a framework that generalizes neural networks to non-Euclidean structured data such as graphs and manifolds. I will also show how GDL helps solve drug-discovery problems by providing a framework for handling molecular graphs and protein 3D structures. In particular, I will briefly cover the use of GDL for molecular property prediction, molecular design, retrosynthesis, and protein structure prediction (such as AlphaFold).
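The core operation that generalizes convolution to graphs is neighborhood aggregation. The following single GCN-style layer is a minimal sketch of that idea; the graph, features, and weights are made up for illustration and are not the speaker's code:

```python
import numpy as np

# Minimal sketch of one graph-convolution layer (GCN-style):
# normalize the adjacency, aggregate neighbor features, apply a
# learnable linear map, then a ReLU nonlinearity.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node undirected graph

A_hat = A + np.eye(4)                       # add self-loops
D = np.diag(A_hat.sum(axis=1) ** -0.5)      # D^{-1/2}
A_norm = D @ A_hat @ D                      # symmetric normalization

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                 # node features (e.g., atom descriptors)
W = rng.normal(size=(3, 2))                 # learnable weight matrix

H_next = np.maximum(A_norm @ H @ W, 0)      # aggregate, transform, ReLU
print(H_next.shape)                         # one layer maps (4, 3) -> (4, 2)
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is the mechanism behind the molecular-property-prediction applications mentioned above.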
In image-classification models, the bias problem refers to a model performing classification by relying not on the intrinsic characteristics of a class (e.g., for the bird class, a bird's wings or beak) but on incidental attributes that co-occur with high frequency in the training data (e.g., the blue-sky background a bird flies against, or the tree it perches on). This tutorial introduces a range of methods for addressing such image bias and discusses directions for future research.