Daffodil International University


Title: LLMs and the Reinvention of Quest and Narrative Scripting
Post by: S. M. Monowar Kayser on April 15, 2026, 12:59:14 AM
Game scripting has traditionally been the domain of tightly authored quest graphs, dialogue trees, trigger systems, and domain-specific scripting languages, all designed to preserve narrative coherence and gameplay safety at the cost of authoring scale. The AI revolution has challenged this trade-off by enabling large language models to generate branching narratives, quest descriptions, and systemic rule structures from high-level prompts or sparse constraints. Vartinen et al. (2024) showed that GPT-based models can generate role-playing game quests of uneven but nontrivial quality, while Leandro et al. (2024) introduced GENEVA, a tool that uses GPT-4 to create and visualize branching narrative graphs under structural constraints such as numbers of starts, endings, and storylines.

These approaches contrast sharply with classical scripting workflows, where each branch is manually authored and therefore expensive but interpretable. AI scripting systems reduce that marginal cost and can rapidly explore narrative alternatives, yet they remain vulnerable to repetition, causally weak transitions, lore inconsistency, and dramatic structures that look plausible in isolation but collapse across longer play sessions.

The core research gap is that narrative quality in games is multidimensional: coherence, emotional pacing, player agency, replay value, and implementation feasibility rarely align, and current LLM-based evaluations capture only fragments of that space. A further limitation is that generated scripts often exist as text artifacts rather than engine-ready assets with explicit state conditions, failure handling, and localization pipelines. Future work should therefore move toward constrained narrative generation in which story graphs, world state models, and design rules are co-generated and mutually validated, allowing AI to expand the narrative search space without dissolving the structural rigor that playable scripting requires.
In that model, LLMs would function less as autonomous scriptwriters and more as high-bandwidth collaborators inside a formally grounded narrative toolchain (Vartinen et al., 2024; Leandro et al., 2024; Hu et al., 2024).

References
1. Vartinen, S., Hamalainen, P., & Guckelsberger, C. (2024). Generating role-playing game quests with GPT language models. IEEE Transactions on Games, 16, 127-139.
2. Leandro, J., Rao, S., Xu, M., Xu, W., Jojic, N., Brockett, C., & Dolan, B. (2024). GENEVA: GENErating and visualizing branching narratives using LLMs. In Proceedings of the 2024 IEEE Conference on Games (CoG 2024).
3. Hu, C., Zhao, Y., & Liu, J. (2024). Game generation via large language models. In Proceedings of the 2024 IEEE Conference on Games (CoG 2024).


S. M. Monowar Kayser
Lecturer, Department of Multimedia & Creative Technology (MCT)
Faculty of Science & Information Technology
Daffodil International University (DIU)
Daffodil Smart City, Savar, Dhaka, Bangladesh
Visit: https://monowarkayser.com/