Increasing attention to explainable AI has driven a proliferation of methods; the gap between these methods and their implementation in practice now needs to be bridged.

European policymakers have identified transparency as a core component of trustworthy AI. One technical response to this policy demand is the development of explainable AI: AI tools whose workings or outputs can be understood and scrutinised by human users. Growing interest across the AI community in this challenge has resulted in a proliferation of explainability methods. Although many of these methods are technically feasible, it is not clear to what extent they address the needs of real-world users. Further progress towards delivering trustworthy AI in practice will require action to bridge this gap between technical capabilities and the needs of the different communities affected by AI. Countervailing trends, including the development of large models that are less interpretable and a lack of methods for assessing how well explanations perform in practice, are likely to complicate progress in addressing these challenges.
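
To make the notion of an explainability method concrete, the sketch below computes permutation feature importance, one widely used class of post-hoc, model-agnostic explanation. It is an illustrative example only, not a method discussed in this article; the dataset, model, and scikit-learn calls are assumptions chosen for the sake of a runnable sketch.

```python
# Illustrative sketch: a common post-hoc explainability method (permutation
# feature importance). Dataset, model, and library choices are assumptions
# for illustration, not techniques described in the article above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle each feature in turn and measure the drop
# in held-out accuracy, yielding a model-agnostic ranking of input relevance.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Such a ranking is easy to produce, but whether it answers the questions that clinicians, regulators, or affected individuals actually ask of a system is exactly the gap between technical capability and user needs described above.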