1. How Transformer LLMs Work

2. Fine-Tuning LLMs

3. Post-Training of LLMs (Banghua Zhu)

4. Fine-tuning & RL for LLMs (Sharon Zhou)

5. nanochat

6. Attention in Transformers: Concepts and Code in PyTorch

7. RL & RLHF

8. CUDA

9. Mechanistic Interpretability

10. Papers

11. GitHub