Challenges and limitations of AI systems in medical decision-making: Implications for trust, reliability, and clinical practice


Publisher: Genius Journals

Abstract

Artificial Intelligence (AI) systems are increasingly used to support medical decision-making; however, their reliability in real-world clinical settings remains constrained by several fundamental limitations. While AI models often perform well in controlled research environments, their application to high-stakes clinical decisions raises concerns about data quality, model confidence, and transparency. Clinical datasets are frequently biased, incomplete, or context-specific, which limits the generalizability of AI-driven recommendations across diverse patient populations. In addition, many AI systems present predictions with high confidence while failing to communicate uncertainty, increasing the risk of automation bias and overreliance in clinical practice. The lack of explainability in advanced AI models further complicates trust, accountability, and effective human–AI collaboration. This paper examines the key challenges and limitations of AI systems in medical decision-making, focusing on data-related constraints, model overconfidence, and explainability. It argues that the safe integration of AI into clinical practice requires trustworthy, uncertainty-aware, and human-centered decision-support systems rather than purely accuracy-driven models.
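To make the abstract's distinction between accuracy-driven and uncertainty-aware decision support concrete, the sketch below shows one common pattern: a prediction gate that abstains and defers a case to a clinician when the model's predictive entropy is high. This is a minimal illustrative example, not a method described in the paper; the function name `predict_with_deferral`, the entropy threshold, and the upstream probability source are all hypothetical assumptions.

```python
import numpy as np

def predict_with_deferral(probs, entropy_threshold=0.5):
    """Return a class label only when predictive uncertainty is low.

    probs: class probabilities from some upstream model (hypothetical;
           the paper does not specify a particular model).
    Returns the predicted class index, or None to signal that the case
    should be routed to a clinician instead of decided automatically.
    """
    probs = np.asarray(probs, dtype=float)
    # Normalized Shannon entropy in [0, 1]:
    # 0 = fully confident prediction, 1 = uniform (maximally uncertain).
    entropy = -np.sum(probs * np.log(probs + 1e-12)) / np.log(len(probs))
    if entropy > entropy_threshold:
        return None  # abstain: defer the case for human review
    return int(np.argmax(probs))

# A confident prediction yields a label; an ambiguous one is deferred.
print(predict_with_deferral([0.95, 0.03, 0.02]))  # -> 0
print(predict_with_deferral([0.40, 0.35, 0.25]))  # -> None (defer)
```

The design choice illustrated here is that the system communicates its uncertainty through an explicit abstain action rather than always emitting its highest-probability label, which is one way to mitigate the automation bias and overreliance the abstract describes.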
