TEACHING CLINICAL REASONING TO MEDICAL STUDENTS USING AI-GENERATED CLINICAL SCENARIO TESTS: A MIXED-METHODS FORMATIVE EVALUATION

dc.contributor.author: Mirzalieva Anora Arginbaevna
dc.contributor.author: Kamalov Ruslan Kuralbaevich
dc.contributor.author: Turgunboev Samandar Sanjar ugli
dc.contributor.author: Norkulova Mubina Turgun kizi
dc.contributor.author: Murodova Nigina Ilyos kizi
dc.date.accessioned: 2026-03-10T20:32:12Z
dc.date.issued: 2026-03-10
dc.description.abstract:
The integration of artificial intelligence (AI) into medical education is rapidly evolving, offering new tools to enhance the teaching and assessment of clinical reasoning. One such tool is the Script Concordance Test (SCT), designed to evaluate clinical reasoning under conditions of uncertainty. Traditionally, developing SCT scoring systems and providing feedback has required expert panels, a process that is time-consuming and resource-intensive. Recent advances in generative AI and large language models (LLMs) offer the potential to simulate expert judgment, yet this capability remains underexplored. This study investigated the feasibility of using LLMs to emulate expert clinical judgment in the development, scoring, and feedback of SCTs in cardiology and pulmonology. Fifteen third-year medical students completed a 32-item test generated by ChatGPT-4o. Six LLMs served as simulated experts: three trained on course materials and three untrained. Students answered test items, rated perceived difficulty, and selected the most helpful feedback explanations. The average score was 22.8 out of 32. Trained models showed higher concordance with student responses (ρ = 0.64) than untrained models (ρ = 0.41). AI-generated feedback was rated most useful in 62.5% of cases, particularly when trained models were used. These results indicate that trained generative AI models can reliably emulate expert clinical reasoning within the SCT framework. Such technology could simplify SCT development while preserving the educational value of feedback. Further research is needed to assess the long-term impact of these tools on the development of clinical reasoning and to determine the optimal balance between expert involvement and AI systems in medical education.
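For readers unfamiliar with the SCT mechanics the abstract refers to: the record does not include the authors' scoring code, so the Python sketch below assumes the conventional aggregate SCT scoring rule (credit for an answer proportional to the number of panelists who chose it, normalized by the panel's modal answer) and uses invented data; the spearmanr call only illustrates how a concordance coefficient like the reported ρ could be computed.

from collections import Counter
from scipy.stats import spearmanr

def sct_item_score(student_answer, panel_answers):
    """Partial credit in [0, 1]: votes for the student's answer divided by
    votes for the panel's modal answer (aggregate SCT scoring)."""
    counts = Counter(panel_answers)
    return counts.get(student_answer, 0) / max(counts.values())

# Hypothetical item: six simulated experts rate on a -2..+2 Likert scale.
panel = [1, 1, 2, 1, 0, 1]           # modal answer +1, chosen by 4 of 6
print(sct_item_score(1, panel))      # 1.0  -> full credit
print(sct_item_score(2, panel))      # 0.25 -> partial credit (1 of 4)
print(sct_item_score(-2, panel))     # 0.0  -> no credit

# Concordance between model-derived and student item ratings can then be
# summarized with Spearman's rho (the study reports rho = 0.64 for trained
# and 0.41 for untrained models); the vectors below are illustrative only.
model_ratings = [1.2, 0.5, -0.8, 1.9, 0.1, -1.4]
student_ratings = [1.0, 0.7, -0.5, 1.6, 0.3, -1.1]
rho, p_value = spearmanr(model_ratings, student_ratings)
print(f"rho = {rho:.2f}")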
dc.format: application/pdf
dc.identifier.uri: https://webofjournals.com/index.php/5/article/view/6091
dc.identifier.uri: https://asianeducationindex.com/handle/123456789/118810
dc.language.iso: eng
dc.publisher: Web of Journals Publishing
dc.relation: https://webofjournals.com/index.php/5/article/view/6091/6122
dc.rights: https://creativecommons.org/licenses/by-nc-nd/4.0
dc.source: Web of Medicine: Journal of Medicine, Practice and Nursing; Vol. 4 No. 3 (2026): WOM; 34-47
dc.source: 2938-3765
dc.subject: Clinical reasoning; artificial intelligence; medical education; generative AI; simulation of expert judgment.
dc.title: TEACHING CLINICAL REASONING TO MEDICAL STUDENTS USING AI-GENERATED CLINICAL SCENARIO TESTS: A MIXED-METHODS FORMATIVE EVALUATION
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: Peer-reviewed Article

Files (original bundle):
arginbaevna_2026_teaching_clinical_reasoning_to_medical_s.pdf (464.17 KB, Adobe Portable Document Format)