Explainable AI (XAI) in Practice: Users’ Perceptions of Transparency and Understanding in Automated Decision Systems
DOI:
https://doi.org/10.62019/r2vyk412

Abstract
This study explores user perceptions of the transparency and interpretability of explainable artificial intelligence (XAI) systems, and of the quality of the explanations such systems provide, in order to understand how trust, understanding, and ethical judgment of automated decisions are formed. Using a qualitative design, 16 participants with diverse professional backgrounds were interviewed with semi-structured questions about their experiences with AI-based decision systems. The data were analysed through manual thematic analysis to identify major patterns and the narrative meaning in the participants' accounts. Participants emphasised that transparency requires explanations that are clear, contextual, and justifiable. Clear communication fostered clarity and active engagement, whereas ambiguity and excessive technicality produced confusion and doubt. The study found that transparency is perceived not merely as a technical property but as a relational and moral construct linked to fairness and respect for user autonomy. The findings highlight the importance of designing XAI systems around user interpretability, ethical responsibility, and communicative effectiveness: explanations should bridge the gap between AI logic and human reasoning in order to strengthen public confidence in AI. The study concludes that explainability should be treated not as an afterthought in AI design but as a pillar of human-centred technological innovation.
License
Copyright (c) 2026 Nadeem Ahmad Malik, Sakeena Parveen, Irfan Hanif

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
