Algorithmic Explainability in Public Finance: The Case of AI Auditors and Advisors

Published on 5 May 2026 at 01:20

Public financial management (PFM) is the backbone of effective governance. It encompasses how governments raise revenue, allocate resources, and control expenditure. A failure in PFM—whether through inefficiency, misallocation, or corruption—directly undermines public trust and service delivery. As governments contend with increasingly complex budgets and heightened scrutiny, artificial intelligence is emerging as a potent tool for Supreme Audit Institutions (SAIs) and finance ministries. However, its value is conditional not merely on predictive accuracy but on a less celebrated quality: explainability.

The deployment of AI in public finance is not speculative. Machine learning models are already being used to classify vast volumes of government expenditure data, detect anomalies that may indicate fraud, and forecast macroeconomic variables with greater accuracy than traditional time-series models. A review of empirical applications shows that AI techniques, including neural networks and random forests, consistently outperform linear models in forecasting tax revenue and identifying suspicious procurement patterns (Mehdiyev et al., 2020). These tools are uniquely valuable for SAIs, which face chronic resource constraints and are tasked with auditing trillions of dollars in public spending. An AI system that can sift through millions of transactions and flag high-risk cases for human auditors is a force multiplier.
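To make this concrete, the sketch below shows one way such anomaly flagging might work: an unsupervised isolation forest scores synthetic procurement transactions and surfaces the most unusual ones for human review. The data, feature choices, and cutoff are illustrative assumptions, not a description of any SAI's actual system.

```python
# Hypothetical anomaly flagging for expenditure data (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, days_to_payment, number_of_bidders].
routine = rng.normal(loc=[10_000, 30, 5], scale=[2_000, 5, 1], size=(1_000, 3))
unusual = rng.normal(loc=[95_000, 2, 1], scale=[5_000, 1, 0.5], size=(10, 3))
X = np.vstack([routine, unusual])

# Unsupervised model: anomalies are points that are easy to isolate.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)   # lower score = more anomalous
flagged = np.argsort(scores)[:10]     # top 10 cases for auditor review

print("Transactions flagged for review:", flagged)
```

Note the division of labour: the model narrows millions of records to a reviewable shortlist, while the judgment call remains with the human auditor.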

Yet the "black box" problem looms large. If an algorithm flags a government contractor for potential fraud, an auditor must be able to explain why. The decision must be contestable, and the reasoning must be transparent enough for legal and political scrutiny. This is where the concept of algorithmic explainability becomes central. Explainable AI (XAI) refers to techniques that allow human users to understand, appropriately trust, and effectively manage the outputs of machine learning models (Arrieta et al., 2020). In PFM, this means deploying models whose logic can be interpreted: a SHAP value showing which specific variable contributed most to a fraud risk score, or a decision tree that provides a clear audit trail.

The distinction between "glass box" and "black box" models is therefore a foundational policy choice. Glass box models, such as linear regressions and decision trees, are inherently interpretable. Black box models, such as deep neural networks, often achieve higher performance but require post-hoc explanation methods that may be imperfect approximations of their true logic (Rudin, 2019). For high-stakes PFM applications, Professor Cynthia Rudin argues forcefully that we should aim to design models that are interpretable from the outset rather than relying on explanations generated afterwards. An AI auditor that cannot explain its own reasoning is, in a very real sense, not an auditor at all.

The policy implications are direct. Governments and SAIs should develop technical standards and procurement frameworks that mandate explainability as a core requirement for AI systems used in public finance. This includes auditing the AI itself, ensuring its explanations are faithful to the underlying model and meaningful to a trained human auditor. The World Bank has documented how GovTech transformations require not just technology procurement but a holistic re-engineering of processes and capabilities, including the ability of civil servants to critically interrogate algorithmic outputs (World Bank, 2021).
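One concrete form such an audit of the AI could take is a fidelity check: train a simple surrogate on the black-box model's own predictions and measure how often the two agree. The sketch below, with synthetic data and an illustrative 90% threshold, is one possible approach rather than an established standard.

```python
# Hypothetical fidelity check: does a glass-box surrogate reproduce
# the black-box model's decisions? (Illustrative only.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.random((1_000, 4))
y = (X[:, 0] * X[:, 1] > 0.25).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate to mimic the black box's predictions, not the truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Explanation fidelity: {fidelity:.1%}")
if fidelity < 0.90:  # assumed threshold for this sketch
    print("Warning: surrogate may misrepresent the model's logic.")
```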

Algorithmic explainability is not a barrier to innovation but a condition for its responsible adoption. In public finance, where transparency and accountability are constitutional principles, the only acceptable AI is explainable AI. By building this standard into the architecture of public financial management systems, governments can harness the predictive power of machines while safeguarding the human judgment and public trust upon which fiscal legitimacy depends.


References

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82–115.

Mehdiyev, N., Enke, D., Fettke, P., & Loos, P. (2020). Evaluating Forecasting Methods by Considering Different Accuracy Measures. Procedia Computer Science, 60, 952–961.

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206–215.

World Bank. (2021). GovTech: The Power of Public Sector Transformation. World Bank Group.
