Advanced

Fine-Tuning LLMs

Fine-tune Llama, Mistral, and GPT models for specialized tasks. LoRA, QLoRA, PEFT, dataset preparation, evaluation, and deployment on real hardware.

18 hours · 10 modules · 1 project
$22/mo (regularly $49/mo)

or $230 lifetime (regularly $799)

  • Access to all 12 courses
  • All future updates
  • Certificate of completion
  • 30-day money-back guarantee

About This Course

Fine-tuning is how you make a general-purpose LLM a specialist. This course covers the complete fine-tuning workflow: dataset curation and formatting, training with LoRA and QLoRA for efficiency, evaluating trained models, and deploying them to production. You'll work with Llama 3, Mistral 7B, and the OpenAI fine-tuning API on real tasks like domain-specific question answering, style transfer, and function calling.
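The dataset-formatting step mentioned above can be sketched in a few lines. The `{"messages": [...]}` schema below matches the OpenAI fine-tuning API's chat format and is also accepted by TRL's `SFTTrainer`; the domain, system prompt, and example content are illustrative assumptions, not the course's exact dataset.

```python
import json

# Minimal sketch of formatting a QA pair for chat-style fine-tuning.
# One JSON object per line (JSONL) is the usual on-disk format.

def to_chat_record(question, answer,
                   system="You are a contracts-law assistant."):  # hypothetical domain
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

pairs = [
    ("What is an indemnity clause?",
     "A contractual promise by one party to cover certain losses of the other."),
]

jsonl = "\n".join(json.dumps(to_chat_record(q, a)) for q, a in pairs)
print(jsonl)
```

Each record carries the full conversation, so the trainer can mask the loss to the assistant turns during supervised fine-tuning.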

What You'll Learn

  • Decide when fine-tuning is worth it vs. prompting or RAG
  • Curate and format high-quality training datasets for any task
  • Fine-tune Llama 3 and Mistral with LoRA for efficient training
  • Apply QLoRA to fine-tune large models on a single GPU with limited memory
  • Use the Hugging Face ecosystem fluently: Transformers, PEFT, TRL
  • Track experiments with Weights & Biases
  • Evaluate fine-tuned models rigorously with held-out test sets
  • Deploy fine-tuned models with vLLM for production serving
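The efficiency claim behind LoRA in the list above comes down to simple arithmetic: instead of training a full weight update ΔW of shape (d_out × d_in), LoRA trains two low-rank factors B (d_out × r) and A (r × d_in) with r much smaller than the hidden size. A minimal sketch of the parameter counts (the hidden size and rank below are common illustrative values, not figures from the course):

```python
# Why LoRA is parameter-efficient: compare trainable parameter counts
# for a full weight update vs. its rank-r factorization B @ A.

def full_update_params(d_in, d_out):
    # Full update dW has one parameter per weight.
    return d_in * d_out

def lora_params(d_in, d_out, r):
    # B is (d_out x r), A is (r x d_in).
    return r * (d_out + d_in)

# Example: one square attention projection at hidden size 4096
# (Llama-3-8B scale); rank 16 is a commonly used choice.
d, r = 4096, 16
full = full_update_params(d, d)   # 16,777,216
lora = lora_params(d, d, r)       # 131,072
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

Here LoRA trains 128× fewer parameters for this layer; QLoRA pushes memory down further by keeping the frozen base weights in 4-bit precision while training the same LoRA factors.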

Who Is This For?

ML Engineers

Ready to move beyond prompting APIs to customize model behavior for specific domains

AI Researchers

Need practical fine-tuning skills to test hypotheses and publish results

Domain Specialists

Building specialized AI tools in legal, medical, scientific, or other technical fields

Prerequisites

  • Understanding LLMs
  • Python for AI
  • Basic familiarity with PyTorch helpful

Tools & Technologies

Python · Hugging Face Transformers · PEFT/LoRA · PyTorch · Weights & Biases · Modal/RunPod