Making the black box more transparent: Understanding the physical implications of machine learning
This paper synthesizes multiple methods for machine learning (ML) model interpretation and visualization (MIV), focusing on meteorological applications. ML has recently exploded in popularity in many fields, including meteorology. Although ML has been successful in meteorology, it has not been as widely accepted there as in other fields, primarily because of the perception that ML models are "black boxes": the models are thought to take inputs and provide outputs without yielding physically interpretable information to the user. This paper introduces and demonstrates multiple MIV techniques for both traditional ML and deep learning, so that meteorologists can understand what ML models have learned. We discuss permutation-based predictor importance, forward and backward selection, saliency maps, class-activation maps, backward optimization, and novelty detection. We apply these methods at multiple spatiotemporal scales to the prediction of tornadoes, hail, winter precipitation type, and convective-storm mode. By analyzing such a wide variety of applications, we intend this work to demystify the black box of ML, offer insight into applying MIV techniques, and serve as an MIV toolbox for meteorologists and other physical scientists.
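To make the first of the listed techniques concrete, the sketch below illustrates single-pass permutation-based predictor importance: a predictor's importance is estimated by how much the model's skill drops when that predictor's values are shuffled, which severs its relationship with the label while preserving its marginal distribution. The random-forest model, synthetic predictors, predictor names, and area-under-ROC scoring are illustrative assumptions for this sketch, not the configuration used in the paper.

# Minimal sketch of single-pass permutation importance (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a tabular meteorological dataset: 1000 examples, 5 predictors.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
predictor_names = [f"predictor_{i}" for i in range(X.shape[1])]  # hypothetical names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Importance = drop in skill (here, AUC) after shuffling one predictor's values.
importances = {}
for j, name in enumerate(predictor_names):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    score = roc_auc_score(y, model.predict_proba(X_shuffled)[:, 1])
    importances[name] = baseline - score

for name, drop in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC drop = {drop:.3f}")

In practice the shuffling is repeated several times per predictor (and evaluated on held-out data) so that the reported importance reflects an average rather than a single random permutation.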
Type: document
Identifier: http://n2t.net/ark:/85065/d7251nb6
Language: eng
Topic category: geoscientificInformation
Resource type: Text
Publication date: 2016-01-01
Publication date: 2019-11-01
Copyright 2019 American Meteorological Society.
Point of contact: OpenSky Support, UCAR/NCAR - Library, PO Box 3000, Boulder, 80307-3000