Behind the Secrets of Large Language Models (WiSe 25/26)
TU Dresden | Winter semester 2025 / 2026
This course provides a practical, in-depth understanding of the large language models that power modern natural language processing systems. Students will explore the architecture, training methodologies, capabilities, and ethical implications of LLMs. The course combines theoretical knowledge with hands-on experience to equip students with the skills to develop, analyze, and apply LLMs in various contexts.
Lecture by Michael Färber and Simon Razniewski, winter term 2025/26.
- Lecture: Mondays, 11:10-12:40, FOE/244
- Lab: Mondays, 16:40-18:10, FOE/244
The course is divided into three parts:
- Foundations (lectures 1-4)
- Building and training LLMs (lectures 5-9)
- Using and extending LLMs (lectures 10-15)
Tentative schedule:
| # | Date | Lecture |
|---|------|---------|
| 1 | 13.10. | Intro (Razniewski) |
| 2 | 20.10. | Word representation (Razniewski) |
| 3 | 27.10. | Neural networks (Färber) |
| 4 | 03.11. | Deep Learning+Attention (Färber) |
| 5 | 10.11. | Training data (Razniewski) |
| 6 | 17.11. | Architectures (Färber) |
| 7 | 24.11. | Training (Razniewski) |
| 8 | 01.12. | Transfer learning (Färber) |
| 9 | 08.12. | Evaluation (Razniewski) |
| 10 | 15.12. | Applications (Färber) |
| 11 | 05.01. | KGs/RAG (Razniewski) |
| 12 | 12.01. | Vision LMs (Haase) |
| 13 | 19.01. | Agents I (Haase) |
| 14 | 26.01. | Agents II (TBD) |
| 15 | 02.02. | Ethics and safety (Razniewski) |