Development Of Deep Learning Models And Algorithms For Language Processing In Uzbek



Publisher: Zien Journals

Abstract

This article focuses on the development of deep learning models and algorithms designed specifically for Uzbek-language processing. A comprehensive approach involving data collection, preprocessing, model selection, and evaluation was employed. Experiments were conducted with RNNs, LSTMs, and transformer-based models such as BERT and GPT, with the transformer models yielding superior results. Key challenges included limited datasets and the complex morphological structure of Uzbek. The findings suggest that fine-tuned transformer models, especially when combined with language-specific preprocessing, can significantly improve performance on language-understanding tasks for low-resource languages.
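The abstract's "language-specific preprocessing" could take many forms; the article does not give details. As one illustration only, a minimal sketch of a text-normalization step one might apply to Uzbek Latin-script text before tokenization: Uzbek spelling uses apostrophe-like characters in letters such as oʻ and gʻ, and real-world text mixes several Unicode variants of that mark. The function name and the exact normalization choices below are assumptions, not the authors' pipeline.

```python
import re

# Apostrophe-like characters commonly seen in Uzbek text in the wild:
# straight apostrophe, left/right single quotes, modifier letters, backtick.
# (Assumed set for illustration; a real pipeline would be corpus-driven.)
APOSTROPHE_VARIANTS = "'\u2018\u2019\u02bb\u02bc`"

def normalize_uzbek(text: str) -> str:
    """Hypothetical normalization sketch: lowercase, unify apostrophe
    variants to a single canonical mark, and collapse whitespace."""
    text = text.lower()
    for ch in APOSTROPHE_VARIANTS:
        text = text.replace(ch, "'")
    return re.sub(r"\s+", " ", text).strip()
```

For example, `normalize_uzbek("O\u02bbzbekiston  tili")` maps the modifier-letter variant of oʻ to a plain apostrophe, yielding `"o'zbekiston tili"`, so that surface variants of the same word share one token form.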
