Large language models (LLMs) have demonstrated remarkable performance across diverse domains. For instance, in the legal domain, GPT-4 successfully passed the Uniform Bar Examination. However, the same model fails the Chinese lawyer qualification test and shows suboptimal performance on various other legal AI tasks. Why do LLMs display such divergent performance?
This tutorial lecture delves into the technology underlying LLMs. Starting with a concise introduction to natural language processing, we will briefly review the basics of the Transformer neural architecture and study what happens when the model and data are scaled up. Afterward, we will examine the algorithms that enable LLMs to understand and follow instructions. Finally, we will wrap up the lecture by covering recent trends in LLMs.