EVALUATION SUITE FOR UZBEK SPEECH AND BILINGUAL TRANSLATION AT SCALE
Publisher: American Journals Publishing
Abstract
In this article we present a reproducible evaluation suite for Uzbek speech and bilingual translation at scale. It unifies evaluation across ASR, TTS, machine translation, and speech-to-text translation, covering both the Latin and Cyrillic scripts, dialectal variation, and code-switching. We provide task-specific metrics, error taxonomies, and human evaluation protocols, and we report baseline scores and reliability estimates to support fair, longitudinal benchmarking.
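The abstract does not specify how the suite handles the Latin/Cyrillic split when scoring, so the following is only an illustrative sketch of one natural approach: transliterating both reference and hypothesis to a single script before computing word error rate, so an ASR system is not penalized for emitting the "wrong" script of the same Uzbek words. The transliteration table below is a hypothetical subset for this example, not the suite's actual mapping (the full Uzbek mapping has context-dependent cases not handled here).

```python
# Illustrative subset of a Cyrillic-to-Latin mapping for Uzbek (hypothetical;
# not the evaluation suite's actual table).
CYR_TO_LAT = {
    "а": "a", "д": "d", "л": "l", "м": "m", "н": "n",
    "о": "o", "с": "s", "у": "u", "ё": "yo",
}

def to_latin(text: str) -> str:
    """Transliterate covered Cyrillic characters to Latin; pass others through."""
    return "".join(CYR_TO_LAT.get(ch, ch) for ch in text.lower())

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over whitespace tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

# A Cyrillic reference and a Latin hypothesis of the same words score 0.0
# after normalization, rather than 100% error.
print(wer(to_latin("салом дунё"), "salom dunyo"))  # → 0.0
```

The same normalize-then-score pattern extends to the other text-based metrics the suite reports; for speech outputs (TTS), scoring would instead go through transcription or perceptual measures.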