Stanford University's free LLM course
cme295.stanford.edu · Dec 27, 2025
Proposes the Transformer, a neural machine translation model that uses only attention, without RNNs or CNNs.

Topics covered:
- Transformers: self-attention, architecture, variants, optimization techniques (sparse attention, low-rank attention, flash attention)
- LLMs: prompting, finetuning, preference tuning, optimization techniques
- Applications: LLM-as-a-judge, RAG, agents, reasoning models
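The self-attention listed first among the course topics can be sketched in a few lines. This is a minimal single-head NumPy illustration (not code from the course; the weight matrices here are random placeholders):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the input tokens into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, model dimension 8
Wq = rng.normal(size=(8, 8))  # hypothetical learned projections
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

Each output row is a weighted mix of all value vectors, so every token can attend to every other token in one step; this is what lets the Transformer drop the recurrence of RNNs entirely.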