In this course, you’ll learn to generate responses using retrieval-augmented generation (RAG), in which prompts are encoded into vectors and used to retrieve related information ranked by similarity. You’ll also explore encoders and Facebook AI Similarity Search (FAISS). Further, you’ll explore prompt engineering and in-context learning, where the task is provided to the model as part of the prompt. Among advanced prompt engineering methods, you’ll learn zero-shot prompting, few-shot prompting, chain-of-thought (CoT) prompting, and self-consistency. Finally, you’ll explore LangChain and its components, such as documents, chains, and agents.
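To make in-context learning concrete, here is a minimal sketch of a few-shot prompt: the task is taught entirely inside the prompt through labeled examples, with no update to the model’s weights. The sentiment-classification task and the review texts are illustrative assumptions, not course material.

```python
# Few-shot in-context learning: the task is conveyed through examples
# placed directly in the prompt, with no fine-tuning of the model.
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret wasting two hours on this film.", "negative"),
]
query = "The plot dragged, but the acting was superb."

prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"  # the model completes this line

print(prompt)  # this string would be sent to an LLM for completion
```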
By the end of the course, you will be able to:
Describe retrieval-augmented generation (RAG), encoders, and FAISS.
Apply fundamentals of in-context learning and advanced methods of prompt engineering to enhance prompt design.
Explain LangChain concepts, including its tools, components, chat models, chains, and agents.
Apply RAG, PyTorch, Hugging Face, LLM, and LangChain technologies to different applications to acquire job-ready skills.
Explains the RAG process.
Covers the Dense Passage Retrieval (DPR) context encoder and question encoder, and introduces the tokenizers associated with these encoders.
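As a concrete illustration of the DPR encoders and their tokenizers, here is a minimal sketch using the Hugging Face transformers library with the publicly available facebook/dpr-*-single-nq-base checkpoints; the passage and question texts are made-up examples.

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# Load the DPR context (passage) encoder and its tokenizer.
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")

# Load the DPR question encoder and its tokenizer.
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")

passage = "RAG combines a retriever with a generator to ground responses."
question = "How does RAG ground its responses?"

with torch.no_grad():
    # Each encoder maps its input to a 768-dimensional dense vector.
    ctx_emb = ctx_encoder(**ctx_tokenizer(passage, return_tensors="pt")).pooler_output
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output

# DPR scores relevance by the dot product between question and passage vectors.
score = torch.matmul(q_emb, ctx_emb.T)
print(score.item())
```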
Introduces the Facebook AI Similarity Search (FAISS) library from Facebook AI Research for efficient vector search.
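Below is a minimal sketch of FAISS vector search, assuming the faiss-cpu package is installed; the random vectors stand in for real passage embeddings such as those produced by the DPR context encoder above.

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 768            # embedding dimension (matches DPR's output size)
num_passages = 1000

# Stand-in passage embeddings; in practice these come from an encoder.
rng = np.random.default_rng(0)
passage_vectors = rng.random((num_passages, d), dtype=np.float32)

# Build an index that ranks passages by inner product (DPR's similarity measure).
index = faiss.IndexFlatIP(d)
index.add(passage_vectors)

# Search with a stand-in query embedding; returns top-k scores and row indices.
query_vector = rng.random((1, d), dtype=np.float32)
scores, indices = index.search(query_vector, 5)
print(indices[0], scores[0])
```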