Description
The rapid proliferation of large language models (LLMs) has opened new horizons for personalized, AI‑augmented learning. However, many institutions remain hesitant to adopt cloud‑based services due to privacy, bandwidth, and cost concerns. This hands‑on workshop demonstrates how to deploy an Ollama server for hosting locally run LLMs on an institution's own infrastructure, and how to integrate it with Moodle. Participants will learn: (1) how to install Ollama and pull models; (2) how to expose the Ollama API and connect it to Moodle using either the Ollama or OpenAI API format; (3) how to configure Moodle's AI Placements so the models are available to learners; and (4) how to customize models to create unique learning interactions for educational use. The workshop is aimed primarily at Moodle administrators, teachers, and instructional designers who wish to deploy AI locally without sacrificing control over student data.
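As a rough sketch of the workflow covered in points (1), (2), and (4), the commands below install Ollama on Linux, pull a base model, expose the API beyond localhost, and build a customized model from a Modelfile. The model name `llama3.2`, the custom model name `course-tutor`, and the system prompt are illustrative choices, not part of the workshop materials; port 11434 is Ollama's default.

```shell
# (1) Install Ollama (Linux install script) and pull a base model
#     (model name is illustrative)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2

# (2) Expose the API beyond localhost so the Moodle host can reach it;
#     Ollama listens on port 11434 by default.
OLLAMA_HOST=0.0.0.0 ollama serve &

# Moodle's OpenAI-compatible provider can then be pointed at:
#   http://<ollama-host>:11434/v1

# (4) Customize a model with a Modelfile (hypothetical tutoring example)
cat > Modelfile <<'EOF'
FROM llama3.2
SYSTEM "You are a patient tutor. Guide students with hints rather than giving full answers."
PARAMETER temperature 0.7
EOF
ollama create course-tutor -f Modelfile
```

A quick smoke test after setup is `curl http://localhost:11434/api/tags`, which returns the list of locally installed models as JSON.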