Explainable AI Helps Bridge the AI Skills Gap: Evidence from a Large Bank
Abstract
Advances in machine learning have created an “AI skills gap” both across and within firms. As AI becomes embedded in firm processes, it is unclear how this will affect the digital divide between workers with and without AI skills. In this paper we ask whether managers trust AI to predict consequential events, which manager characteristics are associated with greater trust in AI predictions, and whether explainable AI (XAI) affects users’ trust in AI predictions. Partnering with a large bank, we generated AI predictions of whether a loan will be late in its final disbursement. We embedded these predictions into a dashboard and surveyed 685 analysts, managers, and other workers before and after they viewed the tool to determine which factors affect workers’ trust in AI predictions. We further randomly assigned some managers and analysts to receive an explainable AI treatment that presents Shapley value breakdowns explaining why the model classified their loan as delayed, along with measures of model performance. We find that i) XAI is associated with greater perceived usefulness but lower perceived understanding of the machine learning predictions; ii) certain AI-reluctant groups – in particular senior managers and those less familiar with AI – are more reluctant to trust the AI predictions overall; iii) greater loan complexity is associated with a higher degree of trust in the ML predictions; and iv) there is some evidence that AI-reluctant groups respond more strongly to XAI. These results suggest that the design of machine learning models will determine who benefits from advances in ML in the workplace.
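For readers unfamiliar with the XAI treatment described above, the sketch below shows one common way to produce per-loan Shapley breakdowns using the open-source shap library. It is not the authors' pipeline: the synthetic data, feature names (loan_amount, n_disbursements, sector_risk, country_risk), and choice of a gradient-boosting classifier are all illustrative assumptions.

```python
# Minimal sketch of a per-loan Shapley breakdown, assuming a tabular loan
# dataset and the shap + scikit-learn libraries. All features and data here
# are hypothetical and stand in for the bank's actual loan attributes.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical loan features (illustrative only).
X = pd.DataFrame({
    "loan_amount": rng.lognormal(12, 1, 1000),
    "n_disbursements": rng.integers(1, 20, 1000),
    "sector_risk": rng.uniform(0, 1, 1000),
    "country_risk": rng.uniform(0, 1, 1000),
})
# Hypothetical label: 1 if the final disbursement is late.
y = (0.3 * X["sector_risk"] + 0.4 * X["country_risk"]
     + rng.normal(0, 0.2, 1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer returns each feature's Shapley contribution to a loan's score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Per-feature contribution breakdown for a single loan, analogous to the
# explanation a dashboard user would see for their own loan.
loan_idx = 0
breakdown = pd.Series(shap_values[loan_idx], index=X.columns).sort_values()
print(breakdown)
```

Such a breakdown attributes the model's prediction for one loan to its individual features, which is the kind of explanation the treated users saw alongside model-performance measures.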
This seminar will take place in T09-67. To join online, use the link below:
https://eur-nl.zoom.us/j/96886971957