As Fairbanks awaits the much-delayed first accumulating snow of the season today (according to the NWS), I wanted to look back at the forecast discrepancy I noted a couple of weeks ago to see how things turned out. The chart below shows the automated computer forecasts of daily high temperatures from the GFS MOS product (black line) and the actual outcome in red. Note that I've taken the forecasts for "day 3", for example a forecast for Thursday's high temperature produced on Monday evening.
The results are striking: the MOS forecast was about 18-20°F too cold for more than a week. Given the usually strong performance of MOS, an error of this size is very surprising.
I've done some digging on potential causes of the problem, and there seems to be a partial answer in the way the MOS equations are developed. First, recall that the MOS (Model Output Statistics) technique uses multiple regression to estimate relationships between observed weather variables and predictors in the "raw" model forecasts (e.g. 850 mb temperature). To better capture the differences in these relationships between summer and winter, the equations are developed separately for April through September and for October through March (reference here); so the regression equations switch over on October 1 and April 1.
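To make the idea concrete, here is a minimal sketch of the MOS approach: fit a multiple regression that maps raw model predictors to an observed quantity, then apply the fitted equation to new model output. Everything below is synthetic and illustrative only - the predictor names, coefficients, and data are my own invented stand-ins, whereas real MOS equations use many predictors and years of development data.

```python
import numpy as np

# Toy MOS-style development data set (synthetic, for illustration only).
rng = np.random.default_rng(0)
n = 200
t850 = rng.normal(-10, 8, n)     # hypothetical raw-model 850 mb temperature (deg C)
rh = rng.uniform(30, 100, n)     # hypothetical raw-model relative humidity (%)

# Assume a "true" relationship for the season being fitted, plus noise.
obs_high = 1.2 * t850 - 0.05 * rh + 5.0 + rng.normal(0, 1.5, n)

# Multiple regression via least squares: observed high ~ predictors + constant.
X = np.column_stack([t850, rh, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, obs_high, rcond=None)

def mos_forecast(t850_fcst, rh_fcst):
    """Apply the fitted regression equation to new raw-model output."""
    return coefs[0] * t850_fcst + coefs[1] * rh_fcst + coefs[2]
```

The seasonal split described above simply means two separate fits of this kind, one trained on October-March cases and one on April-September cases, with the forecast system switching between them by date.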
What does this mean for Fairbanks? Well, the vast majority of days between October 1 and March 31 have snow on the ground, and so the "winter" equations will reflect the physics of a snow-covered landscape; for example, when high pressure develops with low humidity and clear skies, the model will predict a substantial drop in temperatures. However, in the minority of cases when snow is absent, the model forecast will be too cold. In other words, the model is effectively assuming snow cover to be present, even though it does not use snow cover as a predictor.
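This is a classic omitted-variable problem, and a toy simulation shows how it produces a cold bias. In the sketch below (synthetic numbers of my own choosing, not real MOS predictors), clear skies cool snow-covered ground much more than bare ground, but the regression sees only a sky-condition predictor. Because ~90% of the training days have snow, the fitted coefficient mostly reflects the snow-covered response, so the forecast for a clear, snow-free day comes out well below the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
snow = rng.random(n) < 0.9    # ~90% of Oct-Mar training days have snow cover
clear = rng.random(n) < 0.5   # hypothetical clear-sky indicator

# Assumed "truth": clear skies drop the high 10 degrees over snow, only 2 over bare ground.
high = 20.0 - 10.0 * clear * snow - 2.0 * clear * (~snow) + rng.normal(0, 2, n)

# Winter-MOS analogue: regression on the sky predictor only, no snow-cover term.
X = np.column_stack([clear.astype(float), np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, high, rcond=None)

# Forecast for a clear but snow-FREE day, versus its true expected high.
fcst = coefs[0] * 1.0 + coefs[1]
truth = 20.0 - 2.0
bias = fcst - truth   # negative => forecast too cold, as in early October
```

With these made-up numbers the fitted clear-sky coefficient lands near the snow-covered value, so the snow-free forecast runs several degrees too cold - the same sign of error as the recent MOS busts, though the real-world magnitude obviously depends on the actual predictors and climate.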
This information partially explains the huge cold bias in early October this year, because the weather pattern was just the kind of set-up that is normally cold in winter - high pressure, clear skies, and low humidity. However, if the snow-cover explanation were completely adequate, then we would expect that similar errors would have occurred in other years with no snow cover in October. The charts below show the same analysis for selected years in the past.
First, in 2015 the MOS forecasts were too warm (as expected) during the period of unusual snow cover in late September and early October, but there was no obvious bias later in October after the early snow melted off.
October of 2013 was very unusual with its lack of snow, and MOS showed a modest cold bias then, but nothing like what we've seen recently.
Finally, I looked at two earlier years in which strong high pressure occurred in combination with zero snow on the ground during October. In 2003, high pressure aloft peaked at the beginning of October, and there was a slight cold bias in the forecasts - but nothing too alarming.
In 2009, a ridge of high pressure developed on about the 7th of October, and once again the MOS forecasts were too cold, but not drastically so. Bear in mind that the GFS model and the MOS equations have changed over time, so the comparison to 2016 isn't quite fair, but if anything the newer forecasts should be better.
In conclusion, it's still not clear what caused the enormous errors in the recent MOS forecasts. Part of the problem is that the winter MOS equations aren't suitable for predicting temperatures during snow-free conditions, but in previous years this hasn't caused a major issue. The good news is that the NWS forecasters ably detected the recent bias in the MOS output and adjusted their forecasts accordingly. As a meteorologist myself, it's comforting to know there's still room to improve on what the computers provide.