Korean AI Association
2026 Artificial Intelligence Winter Short Course
Speakers and Abstracts (25th)
Prof. Young Jae Jang (KAIST)
Title:
Manufacturing Physical AI & the Software-Defined Factory (SDF)
Abstract:
This seminar introduces the concepts of manufacturing Physical AI and the Software-Defined Factory (SDF), together with real-world cases.
Physical AI refers to AI systems that account for AI's physical elements; autonomous vehicles and robots are its representative examples.
Manufacturing Physical AI, however, deals with very large systems and a wide variety of complex equipment, so it must be structured and operated differently.
The core of that difference is the SDF: its essence is to design the factory as one large AI system through software-defined design, and to enable autonomous operation on that foundation.
The talk examines how these concepts and autonomous factories have actually been built, drawing on real cases from Korea.
Bio:
Young Jae Jang is a professor in the Department of Industrial and Systems Engineering, where he conducts research on smart factories and on intelligent logistics and supply-chain systems. He also serves as director of the KAIST Manufacturing Physical AI Research Institute, established in December 2025.
Before joining KAIST, he worked for four years at Micron Technology, the U.S. semiconductor memory manufacturer, on factory automation and operations in the field. He received his Ph.D. in Mechanical Engineering from MIT. In 2020, he co-founded DAIM Research with four Ph.D. graduates of his lab.
He currently serves as an Associate Editor of the international journal Computers and Industrial Engineering (SCIE, impact factor 2.62), has served as editor or associate editor of special issues of international journals including the International Journal of Production Research and IEEE Power Electronics, and has chaired the International Symposium on Semiconductor Intelligence, an international conference of semiconductor-operations experts, as well as the MASM track of the Winter Simulation Conference.
Prof. Keon Myung Lee (Chungbuk National University)
Title:
Overview of VLAs in Physical AI
Abstract:
With demos of humanoids such as Atlas, Optimus V3, and Unitree G1 at CES 2026, along with the unveiling of concrete humanoid commercialization roadmaps, interest in Physical AI is rapidly intensifying. This lecture provides an overview of Physical AI and examines the key building blocks of Vision–Language–Action (VLA) models, the most representative form of foundation models for Physical AI.
The talk also introduces major VLA models, including open-source systems such as OpenVLA, Octo, GR00T N1, Pi_0, Pi_0.5, Pi_0.6, SmolVLA, and X-VLA, as well as notable closed models such as Google Robotics and Figure AI's Helix. Finally, it covers the open-source development ecosystem LeRobot and discusses the roles of world models and vision foundation models in enabling robust, scalable robot learning and control.
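For readers new to the VLA structure this lecture surveys (a vision encoder, a language encoder, and an action decoder), here is a minimal, purely illustrative Python sketch of how the three components compose. Every function, shape, and formula below is a made-up placeholder, not code from any of the models listed above:

```python
# Toy sketch of the Vision-Language-Action (VLA) composition: an image and a
# language instruction are encoded separately, then fused into an action.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    image: List[float]   # stand-in for camera pixels / visual features
    instruction: str     # natural-language task command

def encode_vision(pixels: List[float]) -> List[float]:
    # Real VLAs use a pretrained vision backbone (e.g. a ViT); here we
    # just normalize the values as a placeholder embedding.
    total = sum(pixels) or 1.0
    return [p / total for p in pixels]

def encode_language(text: str) -> List[float]:
    # Real VLAs use an LLM tokenizer and transformer; here a tiny
    # bag-of-characters vector serves as the language embedding.
    return [text.count(c) / max(len(text), 1) for c in "abcdefgh"]

def action_head(vis: List[float], lang: List[float]) -> List[float]:
    # Real VLAs decode continuous actions (e.g. via diffusion or flow
    # matching); here a fixed elementwise mix of the two embeddings.
    n = min(len(vis), len(lang))
    return [0.5 * vis[i] + 0.5 * lang[i] for i in range(n)]

def vla_policy(obs: Observation) -> List[float]:
    return action_head(encode_vision(obs.image), encode_language(obs.instruction))

obs = Observation(image=[0.2, 0.4, 0.4, 0.0], instruction="grab the red block")
action = vla_policy(obs)
print(len(action))  # → 4
```

The open-source models named in the abstract differ mainly in which backbones fill these three slots and how the action head is trained.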
Bio:
Keon Myung Lee is a professor in the School of Computer Science at Chungbuk National University. He earned his B.S., M.S., and Ph.D. in Computer Science from KAIST. He conducted postdoctoral research at INSA de Lyon in France and worked as a researcher at Park Scientific Instruments in the United States. He has also held visiting appointments as a visiting professor at the University of Colorado Denver and as a visiting scholar at Indiana University. He currently serves as President of the Korean Institute of Intelligent Systems.
Prof. Hanbyul Joo (Seoul National University)
Title:
Towards Capturing Everyday Movements to Scale Up and Enrich Human Motion Data
Abstract:
Equipping AI and robotic systems with the ability to understand human behavior in everyday life is essential for enabling them to better assist people across a wide range of applications. However, the availability of high-quality human motion data for learning such knowledge remains extremely limited.
In this talk, I will present our lab’s efforts to scale and enrich 3D human motion datasets by capturing everyday human movements and natural human-object interactions. I will first introduce ParaHome, our new multi-camera system designed to capture human-object interactions in natural home environments. Next, I will present MocapEvery, a lightweight and cost-effective motion capture solution that uses two smartwatches and a head-mounted camera to enable full-body 3D motion capture across diverse settings. Finally, I will discuss our recent work that enables machines to model comprehensive affordances for 3D objects by leveraging pre-trained 2D diffusion models, allowing for unbounded object interaction capabilities.
Bio:
Hanbyul Joo is an assistant professor in the Department of Computer Science and Engineering at Seoul National University (SNU). Before joining SNU, he was a Research Scientist at Facebook AI Research (FAIR), Menlo Park. He received his Ph.D. from the Robotics Institute at Carnegie Mellon University. He is a recipient of the Samsung Scholarship and the Best Student Paper Award at CVPR 2018.
Prof. Yusung Kim (Sungkyunkwan University)
Title:
Beyond Imitation: Self-Improving Vision–Language–Action Models
Abstract:
Vision–Language–Action (VLA) models have rapidly advanced by learning robot behaviors from large-scale demonstration data.
However, in real-world physical environments, contact, uncertainty, and environmental variations are inevitable, revealing the limitations of imitation learning alone.
In this talk, we focus on how VLA models pretrained on demonstrations can move beyond imitation through experience gathered during execution.
We introduce reinforcement learning–based self-improvement approaches that leverage online interaction and human-in-the-loop feedback as learning signals.
Through fine-grained manipulation tasks, we discuss why self-improvement is a core capability of Physical AI.
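The interaction-feedback-update loop described above can be made concrete with a toy, self-contained sketch. This is my own illustration of the general idea, not code from the talk: a stochastic policy over two discrete actions improves its preferences from scalar human ratings collected during execution, using a gradient-bandit update with a reward baseline.

```python
# Toy human-in-the-loop self-improvement loop. Purely illustrative; real
# VLA fine-tuning applies RL over high-dimensional continuous actions.
import math
import random

random.seed(0)

ACTIONS = ["grasp_gently", "grasp_firmly"]

def human_feedback(action: str) -> float:
    """Stand-in for a human rater: gentle grasps are preferred."""
    return 1.0 if action == "grasp_gently" else 0.2

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def self_improve(steps: int = 500, lr: float = 0.1):
    prefs = [0.0, 0.0]   # one preference (logit) per action
    baseline = 0.0       # running mean reward; reduces update variance
    for t in range(1, steps + 1):
        probs = softmax(prefs)
        i = 0 if random.random() < probs[0] else 1   # sample an action
        r = human_feedback(ACTIONS[i])               # execute, get feedback
        baseline += (r - baseline) / t
        # Policy-gradient-style update: reinforce the chosen action in
        # proportion to how much better than average its feedback was.
        for j in range(len(prefs)):
            grad = (1.0 - probs[j]) if j == i else -probs[j]
            prefs[j] += lr * (r - baseline) * grad
    return softmax(prefs)

probs = self_improve()
# After training, the policy strongly prefers the well-rated action.
```

The baseline term is what lets execution-time experience, rather than demonstrations, drive the update: actions that do better than the policy's running average are reinforced, and worse-than-average actions are suppressed.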
Bio:
Yusung Kim is an Associate Professor at Sungkyunkwan University. He received his Ph.D. in Computer Science from KAIST. He was a Visiting Researcher at North Carolina State University and worked as a Senior Researcher at Samsung Electronics. His research focuses on reinforcement learning, imitation learning, and vision-language-action (VLA) models for embodied and physical AI systems.