
Academic Events

Korean AI Association


Domestic Conference

Speakers and Abstracts (May 29)
 
Prof. Honguk Woo (Sungkyunkwan University)
 
Title: Agentic AI Overview
 
Abstract
Agentic AI, which has emerged as one of the recent trends in AI technology, is characterized by autonomy: it sets goals, makes plans, and solves problems proactively. This lecture introduces the design patterns at the core of implementing such Agentic AI, including reflection, tool use, planning, memory, and collaboration. It also surveys applications of Agentic AI across a range of industries and discusses how these techniques relate to the evolution toward artificial general intelligence (AGI).
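As a rough illustration of two of the design patterns named above (tool use and reflection), the following minimal Python sketch shows an agent loop that can call a tool and then critique and revise its own answer. The llm() placeholder, the toy calculator tool, and the prompt wording are illustrative assumptions, not material from the talk.

def llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completion model (API or local)."""
    raise NotImplementedError("plug in a real model call here")

# Tool use: tools the agent may delegate sub-tasks to (toy example only).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def solve(task: str, max_rounds: int = 3) -> str:
    answer = llm(f"Task: {task}\nAnswer concisely. If arithmetic is needed, "
                 f"write a line 'CALL calculator: <expression>'.")
    for _ in range(max_rounds):
        # Tool use: detect a tool request and feed the result back to the model.
        if "CALL calculator:" in answer:
            expr = answer.split("CALL calculator:", 1)[1].strip()
            result = TOOLS["calculator"](expr)
            answer = llm(f"Task: {task}\nTool result: {result}\nGive the final answer.")
        # Reflection: ask the model to critique its own draft and revise if needed.
        critique = llm(f"Task: {task}\nDraft answer: {answer}\n"
                       f"List any mistakes, or reply OK if none.")
        if critique.strip() == "OK":
            break
        answer = llm(f"Task: {task}\nDraft: {answer}\nCritique: {critique}\nRevise.")
    return answer

The planning, memory, and collaboration patterns mentioned in the abstract would layer additional state (task decompositions, stored context, multiple cooperating agents) on top of this same loop.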
 
Bio
. Ph.D. in Computer Science, University of Texas at Austin
. Associate Professor, Department of Software, Department of Intelligent Software, and Department of Artificial Intelligence, Sungkyunkwan University
. (Former) Vice President, Samsung Research, Samsung Electronics

 
Prof. Jongwuk Lee (Sungkyunkwan University)
 
Title: Search and Reasoning for Agentic AI
 
Abstract
This lecture covers how Retrieval-Augmented Generation (RAG), a core technique for building the proactive form of artificial intelligence now attracting attention as Agentic AI, is used in practice and where it can be applied. In particular, it introduces the structural characteristics and implementation examples of RAG frameworks, which enable more precise response generation by connecting to external knowledge bases, and then examines various reasoning strategies that build on RAG to help agents solve problems more proactively. Specifically, the lecture centers on recent research trends in improving reasoning through self-reflection mechanisms and carrying out complex tasks through tool use. Through this, it assesses the feasibility and remaining challenges of Agentic AI from a technical perspective and suggests directions for future research.
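To make the RAG pattern described above concrete, here is a minimal Python sketch that retrieves passages from a small in-memory knowledge base and prepends them to the model prompt. The llm() placeholder, the sample passages, and the word-overlap scoring are simplifying assumptions; real systems typically use dense embeddings and a vector index.

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("plug in a real model call here")

KNOWLEDGE_BASE = [
    "Agentic AI systems set goals, plan, and act autonomously.",
    "Retrieval-Augmented Generation grounds answers in external documents.",
    "Self-reflection lets a model critique and revise its own outputs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for embedding search)."""
    q = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(query: str) -> str:
    # Augment the prompt with retrieved context so the model can ground its answer.
    context = "\n".join(f"- {p}" for p in retrieve(query))
    prompt = (f"Use only the context below to answer.\n"
              f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
    return llm(prompt)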
 
Bio
- 2016-present: Associate Professor, Department of Software, Sungkyunkwan University
- 2014-2016: Assistant Professor, Computer Engineering, Hankuk University of Foreign Studies
- 2012-2014: Postdoctoral Researcher, Pennsylvania State University
- 2006-2012: M.S./Ph.D. in Computer Science and Engineering, POSTECH
- 1999-2006: B.S. in Computer Engineering, Sungkyunkwan University

 
Prof. Kimin Lee (KAIST)
 
Title: Safe Agentic AI
 
Abstract
As foundation models evolve, they are being equipped with an increasing range of modalities and sophisticated tools. This evolution is leading to more autonomous systems built on advanced agent architectures that incorporate elements of planning and memory. Making these systems more agentic could unlock a wider range of beneficial use cases, but it also introduces new challenges in ensuring that such systems are trustworthy. This talk will explore the dual aspects of opportunity and risk presented by agentic systems, and discuss the necessity of proactive strategies for assessing and mitigating the risks associated with these technologies.
 
Bio
Kimin Lee is an assistant professor at the Graduate School of AI at KAIST. He is interested in building safe and capable AI agents. His recent research directions are (1) reinforcement learning from human feedback, (2) decision-making agents, (3) AI safety and (4) world models. Before joining KAIST, Kimin Lee was a research scientist at Google Research in Mountain View. He completed his postdoctoral training at UC Berkeley (advised by Prof. Pieter Abbeel) and received his Ph.D. from KAIST (advised by Prof. Jinwoo Shin). During his Ph.D., he also interned and collaborated closely with Honglak Lee at the University of Michigan.