Korean AI Association
2026 AI Winter Short Course
Speakers and Abstracts (27th)
Prof. Jisoo Mok (DGIST)
Title:
From Chatbots to Companions: Building Smarter, More Reliable AI Systems
Abstract:
Large Language Models (LLMs) and their multimodal counterparts (MLLMs) are rapidly evolving from conversational systems into increasingly autonomous agents. As these AI models take on more human-like roles and interact more deeply with individuals, questions of trustworthiness, reliability, and alignment with human values naturally arise as central research challenges. This seminar first examines emerging research directions toward building human-centric AI agents that can be safely and seamlessly integrated into everyday human life. Looking ahead, it outlines a longer-term vision of adaptive lifetime AI companions that evolve alongside their users.
Bio:
Jisoo Mok is an Assistant Professor in the Department of Electrical Engineering & Computer Science at DGIST, where she leads the Language-driven Multimodal Intelligence (LaMI) Laboratory. Prior to joining DGIST, she received her Ph.D. from Seoul National University under the supervision of Prof. Sungroh Yoon and her B.S. from Caltech. During her Ph.D. studies, she collaborated extensively with global AI labs through internships at Google Research, Amazon Alexa AI, and NAVER AI Lab. Her current research focuses on building more reliable, trustworthy, and human-like AI systems.
Prof. Junyeong Kim (Chung-Ang University)
Title:
Towards Understanding Hour-Long Videos
Abstract:
Understanding hour-long videos presents a unique frontier in video-language research, where temporal scale, event sparsity, and semantic hierarchy intersect to pose challenges beyond short-form video analysis. This talk explores emerging methodologies for video moment retrieval, progressing from short videos to hour-long videos, as well as video summarization evaluation. Finally, we will discuss ongoing efforts in streaming video understanding.
Bio:
Junyeong Kim is an assistant professor in the Department of Artificial Intelligence at Chung-Ang University. He received his B.S., M.S., and Ph.D. from KAIST under the supervision of Prof. Chang D. Yoo. His research interests center on developing multimodal reasoning frameworks that merge computer vision, natural language processing, and audio.
Prof. Hyounghun Kim (POSTECH)
Title:
Expanding the Boundaries of Dialogue Systems
Abstract:
Conversational agents play a crucial role in enabling natural and effective interactions between humans and machines, serving as a core component in a wide range of applications—from virtual assistants to open-domain dialogue systems that aim to emulate human-like conversational abilities. In this talk, I will present the evolution of conversational models across multiple dimensions, highlighting advancements ranging from dialogue continuity across sessions to the integration of multimodal inputs into conversational systems. The talk will include an overview of the methods and technologies that support more dynamic and context-aware interactions, as well as promising directions for future research aimed at achieving more realistic human–machine communication.
Bio:
Hyounghun Kim is an assistant professor in the Graduate School of Artificial Intelligence and the Department of Computer Science and Engineering at POSTECH. He earned his Ph.D. from the Department of Computer Science at UNC-Chapel Hill, advised by Prof. Mohit Bansal. His research spans natural language processing and multimodal learning, with a focus on large language models, conversational agents, commonsense reasoning, image/video QA, and embodied AI. He has served as a senior action editor, action editor/area chair, and reviewer for ACL Rolling Review and top AI conferences. His professional experience includes internships at Adobe Research and Amazon Alexa AI, and a position as a software engineer at Samsung Electronics.