Speaker
Description
Moodle is widely used for quiz-based assessment and enforces strict import formats for creating and managing question banks. As large language models (LLMs) are increasingly used to support assessment authoring, their practical usefulness in Moodle depends on whether they can generate quiz files that conform to these format requirements and can be imported with minimal instructor intervention.
This study proposes a comparative evaluation of how seamlessly different LLMs can generate Moodle-compatible quizzes when provided with explicit instructor instructions. Specifically, it examines three commonly supported quiz import formats: Aiken, GIFT, and Moodle XML. In parallel, it compares outputs produced by three LLMs: ChatGPT, Gemini, and DeepSeek. Using a controlled prompting design, each model will be tasked with generating quizzes in each format under identical instructional conditions.
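For illustration, a minimal sketch of the same single-answer multiple-choice item in each target format is shown below; the question content is arbitrary, and the Moodle XML example is simplified, since that format permits many additional elements.

Aiken:

```text
What is the capital of France?
A. London
B. Berlin
C. Paris
D. Madrid
ANSWER: C
```

GIFT:

```text
::Capital:: What is the capital of France? {=Paris ~London ~Berlin ~Madrid}
```

Moodle XML (simplified):

```xml
<quiz>
  <question type="multichoice">
    <name><text>Capital</text></name>
    <questiontext format="html">
      <text>What is the capital of France?</text>
    </questiontext>
    <answer fraction="100"><text>Paris</text></answer>
    <answer fraction="0"><text>London</text></answer>
    <answer fraction="0"><text>Berlin</text></answer>
    <answer fraction="0"><text>Madrid</text></answer>
  </question>
</quiz>
```

Aiken accepts only single-answer multiple-choice items, while GIFT and Moodle XML cover a wider range of question types, which is one reason format-specific error patterns can be expected to differ.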
The results section is expected to present descriptive comparisons along both dimensions (format and model), including Moodle import outcomes, format-specific error patterns, correction requirements, and measures of instructor effort. Results will be organized to contrast each quiz format's behavior across LLMs and each LLM's behavior within a given quiz format, with an emphasis on workflow characteristics rather than content quality or learning outcomes.
By outlining a structured framework for comparing quiz formats and LLMs within a Moodle-based assessment workflow, this study aims to provide a practical reference for instructors and institutions exploring AI-assisted quiz generation while maintaining alignment with established LMS constraints.
| 発表日の希望 / Preferred Day | 2月28日(土) / February 28 (Saturday) |
|---|---|
| MAJ R&D Grant | いいえ / No |