AUTOMATIC TEST GENERATION USING LARGE LANGUAGE MODELS

dc.contributor.author: Damir Rakhmaev
dc.date.accessioned: 2025-12-28T10:50:28Z
dc.date.issued: 2025-10-31
dc.description.abstract: This article explores approaches to using LLMs to generate various test types, including unit tests, integration scenarios, and specification-based tests. It provides a comparative analysis of the advantages and limitations of this approach in the context of test automation. Particular attention is paid to assessing the quality of generated tests, the interpretability of LLM decisions, and identifying promising directions for further research and the practical adoption of these technologies.
dc.format: application/pdf
dc.identifier.uri: https://usajournals.org/index.php/2/article/view/1736
dc.identifier.uri: https://asianeducationindex.com/handle/123456789/4391
dc.language.iso: eng
dc.publisher: Modern American Journals
dc.relation: https://usajournals.org/index.php/2/article/view/1736/1816
dc.rights: https://creativecommons.org/licenses/by/4.0
dc.source: Modern American Journal of Engineering, Technology, and Innovation; Vol. 1 No. 7 (2025); 108-116
dc.source: 3067-7939
dc.subject: large language models, automated testing, code generation, unit tests, software engineering
dc.title: AUTOMATIC TEST GENERATION USING LARGE LANGUAGE MODELS
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: Peer-reviewed Article

File: rakhmaev_2025_automatic_test_generation_using_large_la.pdf (321.11 KB, Adobe Portable Document Format)