The question that comes to mind here is, "Is the increase in year-to-year variance statistically significant?" Visually it's clear from the chart that the early decades saw more consistency in the arrival date of summer-like warmth in Fairbanks, whereas recent decades have been more variable. But given a warming climate, could random chance have produced a chart like this? Or can we conclude that there really is something going on here in terms of the year-to-year variability?
To address this question, I created 1000 simulations of the daily temperature history in Fairbanks for the full 113-year period. First I calculated the daily normals for overlapping 30-year periods throughout the history, i.e. the daily mean and standard deviation of daily maximum temperatures. Then I produced random Gaussian simulated data constrained by these normals and by the observed lag-1 autocorrelation coefficient (which also varies through the year). Finally, for each simulated history I pulled out the annual dates of the first 65°F (or warmer) day and performed the same trendline regressions as shown in Rick's chart.
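For readers who want to see the mechanics, here's a minimal sketch of the kind of simulation described: a lag-1 autocorrelated (AR(1)) Gaussian process scaled by daily normals, followed by extraction of the first 65°F day. The seasonal cycle, standard deviation, and autocorrelation values below are hypothetical stand-ins, not the actual Fairbanks normals.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_year(mean, sd, r1, rng):
    """Simulate one year of daily max temperatures as a lag-1
    autocorrelated (AR(1)) Gaussian process.

    mean, sd, r1 : length-365 arrays of daily normal mean, standard
    deviation, and lag-1 autocorrelation. In the real analysis these
    come from overlapping 30-year normals; here they are stand-ins.
    """
    z = np.empty(365)
    z[0] = rng.standard_normal()
    for d in range(1, 365):
        # AR(1): today's anomaly = r1 * yesterday's anomaly + fresh noise,
        # with the noise scaled so the anomalies keep unit variance
        z[d] = r1[d] * z[d - 1] + np.sqrt(1 - r1[d] ** 2) * rng.standard_normal()
    return mean + sd * z

def first_warm_day(temps, threshold=65.0):
    """Day-of-year index of the first day at or above the threshold."""
    hits = np.nonzero(temps >= threshold)[0]
    return int(hits[0]) if hits.size else None

# Hypothetical smooth seasonal cycle standing in for the real normals
doy = np.arange(365)
mean = 25 + 45 * np.sin((doy - 105) * 2 * np.pi / 365)  # peaks in mid-summer
sd = np.full(365, 8.0)
r1 = np.full(365, 0.7)

temps = simulate_year(mean, sd, r1, rng)
print("first 65°F day (day-of-year index):", first_warm_day(temps))
```

Repeating `simulate_year` for each year of the record, with the normals for that year's 30-year window, yields one full simulated history; the real analysis does this 1000 times.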
Here's an example of what I'm talking about. The first chart below shows the observed dates from the actual data: the same dates that Rick plotted, apart from a minor difference in how leap years are handled. The upper and lower trendlines diverge, as Rick showed. The next two charts show two of the simulations; in the first, the trendlines show the kind of slope we might expect in a warming climate, but in the second, the trendlines happen to converge. Both simulations use the same climate normals as the real history, and the differences are caused purely by random chance.
Now we come to the interesting part. Based on the full sample of 1000 simulations, the charts below show the distributions of trendline slopes for the 10th, 50th, and 90th percentiles. On each chart I've marked the bin in which the actual observed slope falls; for example, the observed 10th percentile (early arrival of summer) has advanced much more than in most of the simulations. This means that the slope of the lower trendline in Rick's chart is much steeper than we would expect based on the underlying climate normals. In contrast, the middle (50th percentile) trendline is not far off where we would expect it to be, and the 90th percentile (late arrival of summer) has advanced less than we would expect.
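One standard way to fit percentile trendlines like these is quantile regression, which minimizes the "pinball" loss instead of squared error (the post doesn't specify the exact fitting method, so treat this as one plausible approach). The sketch below uses synthetic stand-in data whose year-to-year spread grows over time, mimicking the qualitative behavior of the observed record; the numbers are hypothetical, not the Fairbanks dates.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(params, x, y, q):
    """Quantile ('pinball') loss for the line y ~ a + b*x at quantile q."""
    a, b = params
    resid = y - (a + b * x)
    return np.mean(np.maximum(q * resid, (q - 1) * resid))

def quantile_trend(x, y, q):
    """Fit a qth-quantile trendline; returns (intercept, slope)."""
    slope0, intercept0 = np.polyfit(x, y, 1)  # ordinary least-squares start
    res = minimize(pinball_loss, [intercept0, slope0], args=(x, y, q),
                   method="Nelder-Mead")
    return res.x

# Synthetic stand-in: first-warm-day dates (day of year) whose spread
# grows over 113 years, with no trend in the median
rng = np.random.default_rng(1)
years = np.arange(113)
dates = 140 + rng.normal(0.0, 5 + 15 * years / 112, size=113)

for q in (0.1, 0.5, 0.9):
    a, b = quantile_trend(years, dates, q)
    print(f"{int(q * 100)}th percentile slope: {b:+.3f} days/year")
```

With growing variance and a flat median, the 10th-percentile slope comes out negative and the 90th-percentile slope positive, i.e. diverging trendlines; repeating this fit on each simulated history builds up the slope distributions shown in the charts.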
The final step in my analysis was to look at the distribution of the difference between the 90th and 10th percentile trendlines; in the actual history of Fairbanks the lower and upper tails have diverged, but what about the simulations? The chart below shows that nearly all of the simulations have a smaller difference in slopes; less than 1% of the simulations have a divergence as large as in the real world. In fact, it's more common for the trendlines to converge than to diverge, as the upper trendline tends to be steeper than the lower trendline (this can be seen in the charts above).
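This last comparison amounts to an empirical p-value: the fraction of simulated histories whose slope difference (90th-percentile slope minus 10th-percentile slope) is at least as large as the observed one. A minimal sketch, using hypothetical stand-in numbers rather than the real 1000 simulated slope differences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 1000 simulated (slope_90 - slope_10) differences.
# The slightly negative mean reflects the tendency for simulated
# trendlines to converge rather than diverge; values are hypothetical.
sim_diff = rng.normal(loc=-0.02, scale=0.05, size=1000)

# Hypothetical observed divergence (days/year difference in slopes)
obs_diff = 0.15

# One-sided empirical p-value: fraction of simulations at least
# as divergent as the observed record
p = np.mean(sim_diff >= obs_diff)
print(f"empirical p-value: {p:.3f}")
```

A p-value well under 0.01 here corresponds to the "less than 1% of the simulations" result: the observed divergence sits far out in the tail of what the Gaussian model produces by chance.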
So what do we conclude? The bottom line is that the observed increase in year-to-year variability in timing of the "first warm day" appears to be highly statistically significant. Simulations based on a Gaussian model of daily temperature variations show a less than 1% chance that the observed divergence in the upper and lower trendlines could have occurred by random chance. We can also say that recent decades have seen both more unusually early arrivals of warmth and more unusually late arrivals of warmth than would be expected; if the variations were just random, we probably would have seen less change on the warm side and more change on the cool side.
Of course, all of this depends on the assumption that my Gaussian simulation model is realistic; we know the temperature distribution in Fairbanks is skewed in spring, but exploring whether that makes a significant difference here is a topic for another day.