

Korean AI Association


Domestic Conference

Tutorial
 
Prof. Yunjae Choi (KAIST)
 
Title: Developing a Medical LLM using synthetic medical record
 
Abstract:
This tutorial walks through the process of training a large language model (LLM) specialized for the medical domain. We first cover the background knowledge and technical concepts needed to understand LLMs, then discuss medical-domain LLMs and why synthetic data is needed. After that, we explain how to train an LLM using synthetic data, and we conclude the tutorial with an introduction to LLM evaluation.
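As a rough illustration of the training step discussed in the tutorial, the sketch below fine-tunes a causal LLM on synthetic, de-identified clinical notes using the Hugging Face transformers library. The base model name, the synthetic-record file, and the hyperparameters are assumptions for illustration, not the speaker's actual pipeline.

```python
# Minimal sketch (not the speaker's pipeline): supervised fine-tuning of a causal LLM
# on synthetic medical records with Hugging Face transformers.
import json
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"          # assumed base model
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each synthetic record is assumed to be one JSON line: {"text": "<de-identified note>"}
records = [json.loads(line) for line in open("synthetic_records.jsonl")]
dataset = Dataset.from_list(records).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="med-llm", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=dataset,
    # mlm=False -> standard next-token (causal) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```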
 
Bio
- Education
 . Ph.D., Aug. 2018: Georgia Tech, Computer Science
 . M.S., Aug. 2009: KAIST, Computer Science
 . B.S., Aug. 2007: Seoul National University, Computer Science and Engineering
- Career
 . Mar. 2020 - present: KAIST Graduate School of AI, Assistant Professor
 . Sep. 2018 - Feb. 2020: Google Brain, Software Engineer
 . Feb. 2017 - Aug. 2017: DeepMind (Google), Research Intern
 . Feb. 2010 - Apr. 2014: Electronics and Telecommunications Research Institute (ETRI), Researcher
 

 
Prof. Jongwon Choi (Chung-Ang University)
 
Title: Recent Trends in Deepfake Image and Video Detection
 
Abstract:
This lecture explains the basic principles of deepfake image and video detection and surveys recent research aimed at keeping up with the latest deepfake generation models. In particular, we focus on methods that remain robust even on deepfake images produced by generation models that were unseen during training or have newly emerged. Through this, we aim to understand how deepfake detection research is conducted and, further, to discuss the limitations of current generative models. We will also examine the severity of crimes committed with deepfake videos, a topic of growing public concern, and consider how preventing deepfake crime in advance can further accelerate the development of generative models.
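To give a concrete sense of the cross-generator robustness problem the lecture highlights, the sketch below trains a simple real-vs-fake classifier on images from known generators and then evaluates it on deepfakes from a generator never seen during training. The dataset folder names and backbone choice are assumptions for illustration, not a method presented in the lecture.

```python
# Minimal sketch: binary real-vs-fake classification with a frozen-ish ImageNet backbone,
# evaluated on deepfakes from an unseen generator to probe generalization.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("train_known_generators", tfm)   # subfolders: real/, fake/
unseen_set = datasets.ImageFolder("test_unseen_generator", tfm)   # assumed folder layout

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)               # real-vs-fake head

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)         # update only the head
loss_fn = nn.CrossEntropyLoss()
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

backbone.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    opt.step()

# Cross-generator evaluation: accuracy on fakes from the generator unseen during training.
backbone.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in torch.utils.data.DataLoader(unseen_set, batch_size=32):
        correct += (backbone(images).argmax(1) == labels).sum().item()
        total += labels.numel()
print(f"unseen-generator accuracy: {correct / total:.3f}")
```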
 
Bio
B.S./M.S.: KAIST, School of Electrical Engineering
Ph.D.: Seoul National University, Department of Electrical and Computer Engineering
(Former) Visiting Researcher, Imperial College London, UK (2016, 2017)
(Former) Principal Researcher, Samsung SDS AI Research Center (2018-2020)
(Former) Advisory Member on Fake News, National Election Commission (2024)
(Current) Associate Professor, Graduate School of Advanced Imaging Science, Chung-Ang University (2020-present)
(Current) Head of the Engineering Major, Graduate School of Advanced Imaging Science, Chung-Ang University (2023-present)


 
Prof. Youngjae Yu (Yonsei University)
 
Title: Multimodal Data Curation for Commonsense Reasoning: From Web, Simulation to Real-World Applications
 
Abstract:

Human learning is inherently multimodal, encompassing observation, listening, reading, and communication to understand and learn from our environment. Significant advancements in machine learning fields relevant to these multimodal interactions, such as Speech Recognition and Computer Vision, have enabled the computational modeling of this innate learning process. Multimodal commonsense reasoning on massive web videos closely mirrors this approach. In this presentation, I will discuss my recent work on curating multimodal datasets and developing Multimodal LLMs. Specifically, I will focus on foundational models that integrate the training of various tasks in vision and language understanding. Additionally, I will extend this work to multimodal commonsense reasoning, which not only involves perception but also provides explanations and facilitates communication based on video understanding. To this end, I will explore multimodal foundation models incorporating self-judgment to improve video understanding and commonsense reasoning.
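As a small, hedged example of what multimodal data curation can look like in practice, the sketch below filters web video frame-caption pairs by CLIP image-text similarity before they would be used to train a multimodal model. The file names and the similarity threshold are placeholder assumptions, not the speaker's actual pipeline.

```python
# Minimal sketch: CLIP-score filtering of web-scraped frame/caption pairs as a curation step.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Each candidate pair: a representative video frame and its scraped caption (placeholders).
pairs = [("frame_000.jpg", "a man repairs a bicycle tire in his garage"),
         ("frame_001.jpg", "subscribe and hit the bell icon")]

kept = []
for frame_path, caption in pairs:
    inputs = processor(text=[caption], images=Image.open(frame_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Cosine similarity between the projected image and text embeddings.
    sim = torch.cosine_similarity(out.image_embeds, out.text_embeds).item()
    if sim > 0.25:              # assumed threshold; tune on held-out data
        kept.append((frame_path, caption, sim))

print(f"kept {len(kept)} of {len(pairs)} pairs")
```

Pairs whose caption has little to do with the visual content (e.g. channel boilerplate) score low and are dropped, which is one simple way noisy web data gets pruned before multimodal training.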

 

Bio

Youngjae Yu is an Assistant Professor of Artificial Intelligence at Yonsei University, focusing on computer vision, natural language processing, and multimodal learning. Before joining Yonsei, he was a researcher at the Allen Institute for AI (AI2). He received his Ph.D. and B.S. in Computer Science and Engineering from Seoul National University. His research interests include video understanding and large language models, with a particular focus on large-scale video dataset curation for multimodal foundation models. His work has been recognized with the Best Paper Award at NAACL 2022 and two Outstanding Paper Awards at EMNLP 2023 and ACL 2024.




 
Prof. Jong Chul Ye (KAIST)
 
Title: Generative AI for Healthcare
 
Abstract:
Medical AI has grown explosively, from early rule-based systems through the advances of deep learning. In particular, the era of generative AI, exemplified by ChatGPT, is opening new directions for medical AI and holds new possibilities. Future medical AI is likely to evolve away from single models toward multiple expert models that interact with one another, with an integrating LLM delivering the final diagnostic result to clinicians. This development offers important implications for building systems that effectively fuse diverse data and provide practical help in clinical practice.
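As a minimal sketch of the multi-expert direction described above (not a clinical system), the example below collects the outputs of several hypothetical specialist models and asks an instruction-tuned LLM to integrate them into a cautious summary for clinicians. The expert findings and the model name are assumptions for illustration only.

```python
# Minimal sketch: an LLM aggregating outputs from multiple (hypothetical) expert models.
from transformers import pipeline

expert_findings = {                       # placeholder outputs from specialist models
    "chest_xray_model": "cardiomegaly suspected (probability 0.71)",
    "ecg_model": "atrial fibrillation detected",
    "lab_model": "elevated BNP (850 pg/mL)",
}

prompt = (
    "You are assisting a physician. Integrate the following findings from "
    "specialist AI models into a short, cautious diagnostic summary, and list "
    "what should be confirmed by the clinician:\n"
    + "\n".join(f"- {name}: {finding}" for name, finding in expert_findings.items())
)

# Assumed open instruction-tuned model; any comparable LLM endpoint could be used instead.
llm = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")
summary = llm(prompt, max_new_tokens=200)[0]["generated_text"]
print(summary)
```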
 
Bio
Jong Chul Ye is a Professor at the Kim Jaechul Graduate School of Artificial Intelligence (AI) at the Korea Advanced Institute of Science and Technology (KAIST), Korea. He received his B.Sc. and M.Sc. degrees from Seoul National University, Korea, and his Ph.D. from Purdue University. Before joining KAIST, he worked at Philips Research and GE Global Research in New York. He has served as an associate editor of IEEE Trans. on Image Processing, IEEE Trans. on Computational Imaging, and IEEE Trans. on Medical Imaging, as a Senior Editor of IEEE Signal Processing, and as an editorial board member of Magnetic Resonance in Medicine. He is an IEEE Fellow, was the Chair of the IEEE SPS Computational Imaging TC, and was an IEEE EMBS Distinguished Lecturer. He is a Fellow of the Korean Academy of Science and Technology and the President of the Korean Society for Artificial Intelligence in Medicine. He holds the Chung Moonsoul Mirae Chair and is a KAIST Endowed Chair Professor. He has received various awards, including the two most prestigious awards for mathematicians in Korea (the Choi Suk-Jung Award and the Kum-Kok Award) and the Career Achievement Award from the Korean Society for Magnetic Resonance in Medicine. His research interests are in machine learning for biomedical imaging and computer vision.