[RCAC Workshop]
Date: February 27th | Time: 1:00 PM | Location: VIRTUAL | Instructor: Mannadeep
Description - Advanced RAG & Vector Databases
As AI systems become more widely used in research and enterprise settings, one of the biggest challenges is helping large language models work effectively with an organization's own data. Retrieval-Augmented Generation (RAG) provides a practical solution by combining LLMs with fast, accurate retrieval from internal knowledge sources. This is supported by vector databases, which store and search information in a way that aligns with modern embedding models. This session introduces the core concepts behind RAG workflows and explains how vector databases support reliable, relevant, and grounded AI outputs. We will discuss practical design decisions: how to structure data, choose embeddings, build retrieval pipelines, and evaluate whether the system is working as expected.
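To make the idea concrete before the session, here is a minimal, hedged sketch of the retrieval step: chunks are embedded into vectors, the query is embedded the same way, and the closest chunks are handed to an LLM as grounding context. The embed() function below is a toy hashed bag-of-words stand-in for a real embedding model, generate() is a placeholder for an actual LLM call, and the sample chunks are illustrative only; none of these names come from the workshop materials.

```python
# Minimal RAG-style retrieval sketch (illustrative only).
# embed() is a toy hashed bag-of-words stand-in for a real embedding model;
# generate() is a placeholder for an actual LLM call.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hashed bag-of-words, L2-normalized."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, chunks: list[str], index: np.ndarray, k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the query and return the top k."""
    scores = index @ embed(query)          # dot product of normalized vectors = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def generate(query: str, context: list[str]) -> str:
    """Placeholder: in a real pipeline this prompt is sent to an LLM."""
    return f"Answer '{query}' using:\n" + "\n".join(f"- {c}" for c in context)

chunks = [
    "Chunking splits documents into passages before indexing.",
    "Vector databases store embeddings and support fast similarity search.",
    "RAG grounds LLM answers in retrieved internal documents.",
]
index = np.stack([embed(c) for c in chunks])   # the "vector database" here is just a matrix

query = "How does RAG keep answers grounded in internal data?"
print(generate(query, retrieve(query, chunks, index)))
```

In a production system, the embedding model, vector index, and LLM call are each replaced by dedicated tooling; the workshop focuses on those design choices.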
Who Should Attend - AI/ML engineers, data scientists, research software engineers, platform engineers, and technical leads who want a practical, up-to-date understanding of how RAG systems work in real environments. This session is ideal for practitioners responsible for building LLM-powered applications, selecting retrieval or vector database tooling, or supporting researchers who need trustworthy, data-grounded AI systems.
Topics -
- What RAG does and why it matters for modern AI applications
- How embeddings represent text and why vector databases are used to store them
- The basic building blocks of a RAG pipeline: chunking, indexing, retrieval, reranking, and generation
- Architectural patterns for building effective retrieval systems
- What impacts retrieval quality (better chunking, metadata, hybrid search)
- How to evaluate a RAG system (see the sketch after this list)
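As a taste of the evaluation topic, here is a small, hedged sketch of one common retrieval-quality check, hit rate at k: for each test question with a known relevant chunk, check whether that chunk appears among the top-k retrieved results. The eval_set format and the retrieve_fn interface are assumptions for illustration, not the workshop's prescribed method.

```python
# Hedged sketch of a simple retrieval-quality metric: hit rate at k.
# The eval_set format and retrieve_fn interface are illustrative assumptions.

def hit_rate_at_k(eval_set, retrieve_fn, k: int = 3) -> float:
    """eval_set: list of (question, relevant_chunk) pairs.
    retrieve_fn(question, k) should return the top-k retrieved chunks."""
    hits = sum(1 for question, relevant in eval_set
               if relevant in retrieve_fn(question, k))
    return hits / len(eval_set)

# Example usage with any retriever exposing retrieve_fn(question, k):
# eval_set = [("Where are embeddings stored?",
#              "Vector databases store embeddings and support fast similarity search.")]
# print(f"hit@3 = {hit_rate_at_k(eval_set, my_retriever, k=3):.2f}")
```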
Level - Intermediate. Attendees should understand the basics of LLMs, but no prior experience with vector databases or RAG systems is required.
Register now: LINK