Moving beyond post hoc explainable artificial intelligence: A perspective paper on lessons learned from dynamical climate modeling

AI models are criticized as being black boxes, potentially subjecting climate science to greater uncertainty. Explainable artificial intelligence (XAI) has been proposed to probe AI models and increase trust. In this review and perspective paper, we suggest that, in addition to using XAI methods, AI researchers in climate science can learn from past successes in the development of physics-based dynamical climate models. Dynamical models are complex but have gained trust because their successes and failures can sometimes be attributed to specific components or sub-models, such as when model bias is explained by pointing to a particular parameterization. We propose three types of understanding as a basis to evaluate trust in dynamical and AI models alike: (1) instrumental understanding, which is obtained when a model has passed a functional test; (2) statistical understanding, obtained when researchers can make sense of the modeling results using statistical techniques to identify input–output relationships; and (3) component-level understanding, which refers to modelers' ability to point to specific model components or parts of the model architecture as the culprit for erratic model behaviors or as the crucial reason why the model functions well. We demonstrate how component-level understanding has been sought and achieved via climate model intercomparison projects over the past several decades. Such component-level understanding routinely leads to model improvements and may also serve as a template for thinking about AI-driven climate science. Currently, XAI methods can help explain the behaviors of AI models by focusing on the mapping between input and output, thereby increasing the statistical understanding of AI models. Yet, to further increase our understanding of AI models, we will have to build AI models that have interpretable components amenable to component-level understanding.
We give examples from the recent AI climate science literature to highlight some, albeit limited, successes in achieving component-level understanding and thereby explaining model behavior. The merit of such interpretable AI models is that they serve as a stronger basis for trust in climate modeling and, by extension, downstream uses of climate model data.


Resource Type publication
Temporal Range Begin N/A
Temporal Range End N/A
Temporal Resolution N/A
Bounding Box North Lat N/A
Bounding Box South Lat N/A
Bounding Box West Long N/A
Bounding Box East Long N/A
Spatial Representation N/A
Spatial Resolution N/A
Related Links

Related Preprint #1 : GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

Related Preprint #2 : Achieving Conservation of Energy in Neural Network Emulators for Climate Modeling

Related Preprint #3 : Fourier Neural Operator for Parametric Partial Differential Equations

Related Preprint #4 : FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators

Related Preprint #5 : Respecting causality is all you need for training physics-informed neural networks

Related Preprint #6 : Using Explainability to Inform Statistical Downscaling Based on Deep Learning Beyond Standard Validation Approaches

Related Preprint #7 : Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science

Related Preprint #8 : Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere

Additional Information N/A
Resource Format PDF
Standardized Resource Format PDF
Asset Size N/A
Legal Constraints

Copyright author(s). This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


Access Constraints None
Software Implementation Language N/A

Resource Support Name N/A
Resource Support Email opensky@ucar.edu
Resource Support Organization UCAR/NCAR - Library
Distributor N/A
Metadata Contact Name N/A
Metadata Contact Email opensky@ucar.edu
Metadata Contact Organization UCAR/NCAR - Library

Author O'Loughlin, R. J.
Li, D.
Neale, Richard
O'Brien, T. A.
Publisher UCAR/NCAR - Library
Publication Date 2025-02-11T00:00:00
Digital Object Identifier (DOI) Not Assigned
Alternate Identifier N/A
Resource Version N/A
Topic Category geoscientificInformation
Progress N/A
Metadata Date 2025-07-10T19:54:28.711015
Metadata Record Identifier edu.ucar.opensky::articles:42870
Metadata Language eng; USA
Suggested Citation O'Loughlin, R. J., Li, D., Neale, Richard, O'Brien, T. A. (2025). Moving beyond post hoc explainable artificial intelligence: A perspective paper on lessons learned from dynamical climate modeling. UCAR/NCAR - Library. https://n2t.net/ark:/85065/d7v410k8. Accessed 07 August 2025.
