Development Of Deep Learning Models And Algorithms For Language Processing In Uzbek
Publisher: Zien Journals

Abstract
This article focuses on the development of deep learning models and algorithms designed specifically for Uzbek-language processing in the IT field. A comprehensive approach involving data collection, preprocessing, model selection, and evaluation was employed. Experiments were conducted with RNNs, LSTMs, and transformer-based models such as BERT and GPT, with the transformer models yielding superior results. Key challenges included the scarcity of Uzbek datasets and the language's complex morphological structure. The findings suggest that fine-tuned transformer models, especially when combined with language-specific preprocessing, can significantly improve performance on language-understanding tasks for low-resource languages.
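The abstract highlights language-specific preprocessing as a key ingredient for low-resource Uzbek NLP. One common normalization step for Latin-script Uzbek, sketched below as a hypothetical illustration (the article does not specify its preprocessing pipeline), is unifying the many apostrophe-like characters that writers use in the oʻ and gʻ digraphs into the canonical modifier letter turned comma (U+02BB), so that a tokenizer sees one consistent form:

```python
import unicodedata

# Apostrophe-like characters commonly typed in place of U+02BB
# (straight apostrophe, right single quote, backtick, modifier apostrophe).
APOSTROPHE_VARIANTS = {"'", "\u2019", "`", "\u02bc"}


def normalize_uzbek(text: str) -> str:
    """Illustrative normalizer for Latin-script Uzbek text.

    NFC-normalizes the string, then replaces apostrophe variants that
    follow 'o' or 'g' (the oʻ/gʻ digraphs) with the canonical U+02BB.
    This is a sketch of one plausible preprocessing step, not the
    article's actual pipeline.
    """
    text = unicodedata.normalize("NFC", text)
    out = []
    for i, ch in enumerate(text):
        if ch in APOSTROPHE_VARIANTS and i > 0 and text[i - 1].lower() in ("o", "g"):
            out.append("\u02bb")  # canonical Uzbek digraph mark
        else:
            out.append(ch)
    return "".join(out)


# Example: mixed apostrophe styles collapse to one canonical form.
print(normalize_uzbek("o'zbek tili"))   # straight apostrophe -> oʻzbek tili
print(normalize_uzbek("g\u2019oya"))    # curly quote -> gʻoya
```

Normalizing such variants before tokenization shrinks the effective vocabulary and avoids splitting the same word into different subword sequences, which matters most when training data is limited.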