Saturday, October 25, 2014

Fairbanks Forecast Performance - Part 2

In an earlier post I began looking at the performance of NWS temperature forecasts for Fairbanks, with a particular focus on whether the forecasts show enough of a "signal" at the end of the short-term forecast period.  On average through the year, the forecast errors at Day 7 are about 20 percent smaller than they would be if the forecast just called for "normal" every day, so the forecasts are clearly useful even out to Day 7.  But do the forecasts show "enough" departure from normal or are they too heavily weighted towards climatology?  The first post showed that the scaling is about right; the NWS forecasts are close to optimal in this regard.
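The "20 percent smaller than climatology" comparison is a standard skill check. As a rough sketch of the computation (with made-up synthetic data, since the actual verification dataset isn't shown here), the mean absolute error of the forecasts is compared to the error of a reference forecast that simply calls for normal every day:

```python
import numpy as np

# Synthetic stand-in data: observed departures from normal and an imperfect
# Day 7 forecast of those departures (all values hypothetical, deg F)
rng = np.random.default_rng(0)
observed = rng.normal(0, 8, 365)                    # observed anomalies
forecast = 0.5 * observed + rng.normal(0, 5, 365)   # noisy Day 7 forecast
climo = np.zeros(365)                               # "always normal" reference

# Mean absolute error of each forecast strategy
mae_forecast = np.mean(np.abs(forecast - observed))
mae_climo = np.mean(np.abs(climo - observed))

# Percent improvement over climatology (about 20% in the post's verification)
improvement = 100 * (1 - mae_forecast / mae_climo)
print(f"forecast MAE: {mae_forecast:.2f}  climo MAE: {mae_climo:.2f}  "
      f"improvement: {improvement:.1f}%")
```

Any improvement above zero means the Day 7 forecast beats climatology; the synthetic numbers here are only illustrative of the calculation, not of the real Fairbanks statistics.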

There is more analysis that we can do, however, if we bring in the computer model forecasts and compare them to the NWS forecasts.  For this purpose, I've extracted the GFS and ECMWF computer model forecasts of 850 mb temperature for every day since mid-August 2013 (when I started collecting the data).  The NWS forecasts tend to track with the 850 temperature forecasts, as we would expect, but the following chart shows a hint of something interesting (detailed explanation is below):

The chart plots the average of the Day 7 temperature anomalies (departures from normal) predicted by the two models on the x-axis against the error of the Day 7 NWS forecast on the y-axis, and it includes only days when the model anomalies have the same sign and agree to within 4 °C.  This excludes many cases when the models disagreed, because I'm attempting to isolate what happens when the models agree reasonably well.
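The agreement filter is simple to express in code. This is a sketch with synthetic arrays (the variable names and data are hypothetical, not the actual archive), showing how the same-sign and within-4 °C criteria select the points that go into the chart:

```python
import numpy as np

# Hypothetical Day 7 850 mb temperature anomalies (deg C) from the two models,
# and the corresponding NWS Day 7 forecast errors (forecast minus observed)
rng = np.random.default_rng(1)
gfs_anom = rng.normal(0, 6, 400)
ecmwf_anom = gfs_anom + rng.normal(0, 3, 400)   # correlated with GFS, as in reality
nws_error = rng.normal(0, 4, 400)

# Keep only days when the models agree: same sign and within 4 deg C
same_sign = np.sign(gfs_anom) == np.sign(ecmwf_anom)
close = np.abs(gfs_anom - ecmwf_anom) <= 4.0
agree = same_sign & close

model_mean = 0.5 * (gfs_anom[agree] + ecmwf_anom[agree])  # x-axis of the chart
error = nws_error[agree]                                  # y-axis of the chart
print(f"{agree.sum()} of {agree.size} days retained")
```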

There is a lot of scatter, of course, and the overall correlation is very weak, but notice the frequency of points above the horizontal zero line when both models expect very cold conditions; the NWS forecast tends to be too warm (not cold enough) in these cases.  On the right-hand side of the chart, there are far fewer cases with comparable warm anomalies in the model forecasts, but in the top five events it seems the NWS was too cold (not warm enough).

My interpretation of the results is that the NWS forecast has a tendency to be too conservative when both of the leading computer models agree in predicting a very large temperature anomaly.  If both models are very cold, then the NWS forecast ought to be colder; and if both models are very warm, then the NWS forecast ought to be warmer.  The conclusion is tentative because of the scatter in the data, but it does make sense: when the two independent models both show a large signal, this considerably raises the chance that something very unusual will occur, and it seems the NWS forecast anomaly should be amplified accordingly.
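One way to picture the suggested adjustment is as a simple post-processing rule. This is purely illustrative, not an actual NWS procedure, and the agreement tolerance, anomaly threshold, and amplification factor are invented for the sketch:

```python
def adjust_forecast(nws_anom, gfs_anom, ecmwf_anom,
                    agree_tol=4.0, large=8.0, boost=1.25):
    """Illustrative rule (hypothetical thresholds): when both models agree
    in sign, are within agree_tol deg C of each other, and their mean
    anomaly exceeds `large`, amplify the NWS anomaly by `boost`."""
    mean_anom = 0.5 * (gfs_anom + ecmwf_anom)
    same_sign = (gfs_anom > 0) == (ecmwf_anom > 0)
    if same_sign and abs(gfs_anom - ecmwf_anom) <= agree_tol \
            and abs(mean_anom) >= large:
        return nws_anom * boost
    return nws_anom

# Both models very cold and in close agreement -> push the forecast colder
print(adjust_forecast(-10.0, -14.0, -12.0))  # -12.5
# Models far apart -> leave the forecast alone
print(adjust_forecast(-10.0, -14.0, -2.0))   # -10.0
```

In practice the amplification would have to be calibrated against verification data rather than fixed by hand, but the structure of the rule matches the pattern in the scatter plot.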

For comparison, it's interesting to look at the same charts using the two models individually (see below).  When either model by itself shows a large cold anomaly, there is no obvious bias in the NWS forecasts, although the data on the warm side still suggests an error pattern in the most extreme warm events.

What do I conclude from this analysis?  A general conclusion - and one that is well known - is that having access to independent model forecasts is very useful for assessing the likelihood of extreme events.  This is obviously one justification for running model ensemble systems such as the GFS ensemble forecast, but using a completely independent system like ECMWF provides even more valuable information.

The more specific conclusion is that there is some potential to improve the Day 7 temperature forecasts in Fairbanks when the GFS and ECMWF forecasts are closely aligned in showing a large temperature anomaly.  In other words, the degree of agreement between the models is itself a useful predictor and should be part of the forecast process.  Each model by itself has limited skill at Day 7, but when the models line up, this sends a signal that predictability is higher, and the forecaster would do well to pay attention.
