
[RCAC Workshop]

📅 Date: April 3rd, 2026
⏰ Time: 1:00 PM–2:00 PM
💻 Location: Virtual
🏫 Instructor: Christina Joslin

Who Should Attend
Researchers and students who want to learn more about Large Language Model (LLM) fine-tuning and compression techniques such as Low-Rank Adaptation (LoRA) and QLoRA, and who want a step-by-step process for creating production-ready fine-tuned LLMs.

What You'll Learn
- The motivation behind LLM fine-tuning and the differences between full fine-tuning and parameter-efficient fine-tuning (PEFT)
- How Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), and other popular PEFT methods work
- Key LoRA hyperparameters, their roles, and recommended starting values
- The fundamentals of LLM compression and quantization
- How to create GGUF files for production-ready deployment
- A step-by-step walkthrough of a typical LoRA fine-tuning process using Mistral 7B with Unsloth
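To give a flavor of the LoRA idea covered in the session: instead of updating a full weight matrix W, LoRA trains two small matrices A and B whose product forms a low-rank update. A minimal numerical sketch (sizes, rank, and alpha here are illustrative values, not the workshop's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 16, 4, 8                 # hypothetical: hidden size, LoRA rank, scaling alpha
W = rng.normal(size=(d, d))            # frozen pretrained weight (d x d)

# LoRA trains two small matrices instead of W itself.
A = rng.normal(size=(r, d))            # down-projection, random init
B = np.zeros((d, r))                   # up-projection, zero init -> update starts at 0

# After (simulated) training, B is no longer zero.
B_trained = rng.normal(size=(d, r))
delta = (alpha / r) * (B_trained @ A)  # scaled low-rank update
W_adapted = W + delta

# The update touches every entry of W but has rank at most r,
# so only 2*d*r parameters were trained instead of d*d.
assert np.linalg.matrix_rank(delta) <= r
print(2 * d * r, "trainable params vs", d * d, "for full fine-tuning")
```

The savings grow with model size: for a 4096-dimensional layer at rank 16, the trained parameters shrink from ~16.8M to ~131K per matrix.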

By the End of the Session, You'll
- Learn how to fine-tune your own LLM using LoRA and run it locally with Ollama
- Understand the architectural differences and tradeoffs between LoRA and other PEFT methods in terms of accuracy and efficiency
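Running a fine-tuned model locally with Ollama typically means exporting it to GGUF and pointing a Modelfile at it. A minimal sketch (the file path and model name below are hypothetical placeholders):

```
# Modelfile — tells Ollama to load a locally exported GGUF
FROM ./mistral-7b-finetuned.Q4_K_M.gguf
```

You would then register and run it with `ollama create my-finetune -f Modelfile` followed by `ollama run my-finetune`.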

Level
Intermediate. Familiarity with basic machine learning concepts such as model training and evaluation, neural networks, and Python-based ML workflows is recommended. Prior exposure to LLMs and Ollama is helpful but not required.

🔗 Register now: LINK

Originally posted: