In this course, you’ll explore PyTorch and Hugging Face and how the two differ. You’ll also learn how to use pre-trained transformers for language tasks and fine-tune them for specific tasks. From there, you’ll fine-tune generative AI models using PyTorch and Hugging Face. Finally, you’ll learn about parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), model quantization, and prompting in transformers.
By the end of the course, you will be able to: