Testing the tests: What are the impacts of incorrect assumptions when applying confidence intervals or hypothesis tests to compare competing forecasts?

Which of two competing continuous forecasts is better? This question arises frequently in forecast verification as well as in climate model evaluation. Traditional statistical tests seem well suited to providing an answer. However, most such tests do not account for some of the special circumstances that prevail in this domain. For example, model output is seldom independent in time, and the models being compared aim to predict the same state of the atmosphere, so they are likely to be contemporaneously correlated with each other. Such violations of the independence assumptions required by most statistical tests can greatly degrade the accuracy and power of those tests. Here, this effect is examined on simulated series for many common testing procedures, including two-sample and paired t and normal approximation z tests, the z test with a first-order variance inflation factor applied, and the newer Hering–Genton (HG) test, as well as several bootstrap methods. While it is known how most of these tests behave in the face of temporal dependence, it is less clear how contemporaneous correlation affects them. Moreover, it is worthwhile to know just how badly the tests can fail, so that if they are applied, reasonable conclusions can still be drawn. It is found that the HG test is the most robust to both temporal dependence and contemporaneous correlation, as well as to the specific type and strength of temporal dependence. Bootstrap procedures that account for temporal dependence stand up well to contemporaneous correlation and temporal dependence, but require large sample sizes to be accurate.
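The abstract mentions a paired t test and a z test with a first-order variance inflation factor. A minimal sketch of the issue, under assumed conditions (AR(1) temporal dependence, contemporaneously correlated error series, squared-error loss differentials), shows how ignoring autocorrelation can change a test's conclusion. The simulation parameters and the (1 + r1)/(1 − r1) inflation form are illustrative choices for this sketch, not the paper's exact experimental design.

```python
# Illustrative sketch (not the paper's exact setup): simulate two
# contemporaneously correlated AR(1) forecast-error series, then compare
# a naive paired t test on the loss differential with a z test whose
# variance is scaled by a first-order inflation factor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, phi, rho = 200, 0.6, 0.5  # sample size, AR(1) coefficient, cross-correlation

# Jointly simulate two error series whose innovations are correlated (rho).
cov = [[1.0, rho], [rho, 1.0]]
innov = rng.multivariate_normal([0.0, 0.0], cov, size=n)
e = np.zeros((n, 2))
for t in range(1, n):
    e[t] = phi * e[t - 1] + innov[t]  # AR(1) temporal dependence

# Squared-error loss differential between the two "forecasts".
d = e[:, 0] ** 2 - e[:, 1] ** 2

# Naive paired t test: treats the d_t as independent.
t_naive, p_naive = stats.ttest_1samp(d, 0.0)

# First-order variance inflation: scale the variance of the mean by
# (1 + r1) / (1 - r1), where r1 is the lag-1 autocorrelation of d.
r1 = np.corrcoef(d[:-1], d[1:])[0, 1]
inflation = (1.0 + r1) / (1.0 - r1)
z = d.mean() / np.sqrt(d.var(ddof=1) * inflation / n)
p_adj = 2.0 * stats.norm.sf(abs(z))

print(f"naive p = {p_naive:.3f}, inflation-adjusted p = {p_adj:.3f}")
```

With positively autocorrelated differentials, the inflation factor exceeds one, widening the interval and raising the p value relative to the naive test; this is the accuracy loss the study quantifies for the more elaborate procedures as well.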

Resource Type publication
Resource Format PDF
Standardized Resource Format PDF
Legal Constraints Copyright 2018 American Meteorological Society (AMS).
Access Constraints None

Resource Support Email opensky@ucar.edu
Resource Support Organization UCAR/NCAR - Library
Metadata Contact Email opensky@ucar.edu
Metadata Contact Organization UCAR/NCAR - Library

Author Gilleland, Eric
Hering, Amanda
Fowler, Tressa L.
Brown, Barbara G.
Publisher UCAR/NCAR - Library
Publication Date 2018-06-01T00:00:00
Digital Object Identifier (DOI) Not Assigned
Topic Category geoscientificInformation
Metadata Date 2025-07-11T19:38:30.115057
Metadata Record Identifier edu.ucar.opensky::articles:21660
Metadata Language eng; USA
Suggested Citation Gilleland, Eric, Hering, Amanda, Fowler, Tressa L., and Brown, Barbara G. (2018). Testing the tests: What are the impacts of incorrect assumptions when applying confidence intervals or hypothesis tests to compare competing forecasts? UCAR/NCAR - Library. https://n2t.org/ark:/85065/d7z03bx6. Accessed 02 August 2025.