Effects of internal variability on skill verification of seasonal precipitation outlooks — Australian Meteorological and Oceanographic Society

Effects of internal variability on skill verification of seasonal precipitation outlooks (#1011)

Andrew D King 1, Benjamin J Henley 1, Debra Hudson 2, Timothy Cowan 2
  1. School of Earth Sciences, University of Melbourne, Parkville, Victoria, Australia
  2. Bureau of Meteorology, Melbourne, Victoria, Australia

Skill verification of seasonal prediction models is based on hindcast simulations, usually performed for a recent 20-to-40-year period; for ACCESS-S1, for example, a 23-year hindcast ensemble was performed for 1990-2012. Given that Australian climate varies strongly on interannual and decadal timescales, and that much of the predictive skill in seasonal outlooks for Australia is tied to the El Niño-Southern Oscillation (ENSO), this raises the question: how much do the timing and length of the hindcast period influence skill verification results?

Using observational datasets and ACCESS-S1, we show that skill verification is strongly sensitive to the choice of hindcast period. We first examine the average difference in observed seasonal Australia-average precipitation between La Niña and El Niño periods, and we find strong decadal-scale variability in this metric associated with the Interdecadal Pacific Oscillation (IPO). In particular, ENSO-related variability in spring precipitation is high for 1990-2012, coinciding with predominantly negative IPO conditions. Within the ACCESS-S1 hindcast we find a relationship between ENSO-related spring precipitation variability and model skill: negative IPO periods, with their enhanced ENSO-related rainfall variations, exhibit greater skill than positive IPO periods.
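The La Niña-minus-El Niño precipitation contrast, evaluated over sliding hindcast-length windows, can be sketched roughly as below. This is a minimal illustration, not the study's actual methodology: the function names, the ±0.5 standardised-index thresholds used to classify ENSO phases, and the 23-year window length (matching the ACCESS-S1 hindcast) are assumptions for demonstration.

```python
import numpy as np

def enso_rainfall_contrast(precip, enso_index,
                           la_nina_thresh=-0.5, el_nino_thresh=0.5):
    """Mean seasonal precipitation in La Nina years minus the mean in
    El Nino years, given a standardised ENSO index per year.
    Thresholds of +/-0.5 are an illustrative choice."""
    precip = np.asarray(precip, dtype=float)
    enso_index = np.asarray(enso_index, dtype=float)
    la_nina = precip[enso_index <= la_nina_thresh]
    el_nino = precip[enso_index >= el_nino_thresh]
    if la_nina.size == 0 or el_nino.size == 0:
        return np.nan  # window contains no years of one ENSO phase
    return la_nina.mean() - el_nino.mean()

def sliding_contrast(precip, enso_index, window=23):
    """Compute the La Nina-minus-El Nino contrast in every sliding
    window of hindcast length, to expose decadal-scale variability."""
    n = len(precip)
    return np.array([
        enso_rainfall_contrast(precip[i:i + window],
                               enso_index[i:i + window])
        for i in range(n - window + 1)
    ])

# Synthetic demonstration: rainfall anomalies that are wetter in
# La Nina years, plus noise (illustrative data only).
rng = np.random.default_rng(0)
enso = rng.normal(0.0, 1.0, 100)
precip = -0.8 * enso + rng.normal(0.0, 0.3, 100)
contrasts = sliding_contrast(precip, enso, window=23)
```

A plot of `contrasts` against window start year would show how strongly the verification metric depends on which 23-year slice is sampled.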

Our results suggest that the choice of hindcast period affects how representative seasonal rainfall verification results are for Australia, and that certain hindcast windows could lead to over- or under-confidence in the model relative to its true skill. Similarly, hindcast skill is used as a guide for interpreting real-time forecasts, and that guide may over- or under-estimate real-time skill depending on the predictability of the period in question. Lastly, the study reconfirms that it is unwise to compare forecast systems verified over different hindcast periods when drawing conclusions about system differences or improvements.

#amos2020