BUILDING TRUST IN AI SYSTEMS: STRATEGIES FOR TRANSPARENCY AND TRUSTWORTHINESS IN CRITICAL SECTORS
Web of Journals Publishing
Abstract
As Artificial Intelligence (AI) systems increasingly mediate decisions in high-stakes sectors such as finance and law, public trust has emerged as a crucial determinant of their success and ethical viability. This study examines the sociotechnical foundations of trust in AI and presents a comprehensive framework for enhancing transparency, interpretability, and accountability. Synthesizing insights from interdisciplinary literature and real-world applications, the paper identifies practical strategies—explainability, algorithmic auditing, participatory design, and regulatory alignment—that can guide the responsible deployment of AI in sensitive domains. The findings underscore that fostering trust requires not only technical rigor but also ethical foresight and institutional transparency.