Saturday, February 13, 2016

Barrow Climate Mystery

A couple of days ago Rick Thoman sent me a very interesting plot of Barrow annual mean temperatures, showing not only the dramatic warming of recent years but also a remarkable collapse of interannual variance.  I've reproduced the essence of the chart below.  Until the early 2000s, the standard deviation of annual temperatures was fairly steady, hovering around 2°F, but in the past decade or so it has dropped precipitously as cold outliers have ceased to occur and every year falls within a much-reduced range of temperatures.

[Chart: Barrow annual mean temperatures with the running standard deviation of annual means]

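For anyone who wants to reproduce the second curve, the running standard deviation is a one-liner in pandas. The sketch below is only illustrative: the file name, column names, and the 10-year window are my assumptions, not necessarily what Rick used.

```python
import pandas as pd

# Hypothetical input: one row per year with columns "year" and "temp_f"
# (the file and column names are placeholders, not an actual dataset).
annual = pd.read_csv("barrow_annual_temps.csv", index_col="year")["temp_f"]

# 10-year running standard deviation of the annual means; the window
# length is an assumption on my part.
running_sd = annual.rolling(window=10).std()
print(running_sd.tail())
```
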
Long-time readers will recognize that the chart is rather similar to that for October mean temperatures, which have similarly become radically less variable; Rick pointed this out some years ago.  Here's an updated chart for October mean temperatures.

[Chart: Barrow October mean temperatures]

It's obvious, I think, that the reduction in annual temperature variance partly reflects the October change, but curiosity led me to examine all months of the year to see which other months might have contributed to the annual change.  The chart below shows the standard deviation (SD) of monthly mean temperatures for each month, both for the 1930-1990 period and for the past 10 years.  October stands out as having the greatest drop in SD, but January and February have also become much less variable.

[Chart: standard deviation of Barrow monthly mean temperatures by month, 1930-1990 vs the past 10 years]

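The per-month comparison behind the chart amounts to something like the sketch below; again the file and column names are hypothetical, and I've taken 2006-2015 as the "past 10 years."

```python
import pandas as pd

# Hypothetical input: one row per month with columns "year", "month", "temp_f".
df = pd.read_csv("barrow_monthly_temps.csv")

early  = df[(df["year"] >= 1930) & (df["year"] <= 1990)]
recent = df[df["year"] >= 2006]

sd_by_month = pd.DataFrame({
    "1930-1990": early.groupby("month")["temp_f"].std(),
    "2006-2015": recent.groupby("month")["temp_f"].std(),
})
print(sd_by_month)
```
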
The interesting part of this analysis is that the average SD decrease across the 12 months is only about 24%, whereas the annual SD has dropped by 64%.  At first glance this doesn't make sense - how can the annual variance drop so much when the individual months are, on average, only modestly less variable in the modern climate?  Part of the answer is presumably that the high-variance winter months contribute more to the annual variance, and the variance has dropped much more in winter than in summer; but it's still puzzling, as no other month besides October comes close to the percentage variance reduction that has occurred on an annual basis.
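
A back-of-the-envelope calculation shows why the mismatch is so striking.  The annual mean is (roughly) the average of the 12 monthly means, so if the monthly anomalies were uncorrelated, the variance of the annual mean would be the sum of the monthly variances divided by 144; a uniform 24% drop in every monthly SD would then give only a 24% drop in the annual SD.  Bigger drops in the high-variance winter months push the annual number down somewhat further, but, as the simulations below suggest, not by enough to explain the full change.  The numbers in this sketch are made up purely to illustrate the arithmetic:

```python
import numpy as np

# Placeholder monthly SDs (deg F), Jan..Dec - not the actual Barrow values.
sd_early  = np.array([8, 8, 7, 5, 3, 2, 2, 2, 2, 4, 6, 7], dtype=float)
sd_recent = sd_early * 0.76   # a uniform 24% reduction in every month

def annual_sd_if_independent(monthly_sd):
    # With uncorrelated months, Var(annual mean) = sum of monthly variances / 144.
    return np.sqrt(np.sum(monthly_sd ** 2)) / 12.0

print(annual_sd_if_independent(sd_early))   # baseline
print(annual_sd_if_independent(sd_recent))  # exactly 24% lower, nowhere near 64%
```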

To explore this question in more detail, I did some simulations of monthly mean temperature variability and combined the months into annual temperature values to see what we would expect from random chance.  Specifically, for each year in the history I calculated the observed mean and variance of monthly mean temperatures within a +/- 10-year window and then created 1000 instances of 10-year periods by taking random samples of each month from an assumed Gaussian distribution.  Annual mean temperatures were calculated from the 12 monthly values in each instance, and I assumed that each month is independent of the next, i.e. zero autocorrelation.  Finally, for each 1000-member sample of 10-year periods I obtained the 90% confidence interval on the annual SD, and this was all repeated for each year from 1930-2015.
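
For the curious, one step of that procedure looks roughly like the sketch below.  This is a simplified stand-in rather than the actual code: the climatology passed in at the bottom is invented, and in the real calculation the monthly means and SDs come from the +/- 10-year window around each target year.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annual_sd_ci(monthly_mean, monthly_sd, n_sims=1000, n_years=10):
    """90% confidence interval on the 10-year SD of annual means, assuming
    each month is Gaussian and independent of the other months."""
    sds = np.empty(n_sims)
    for i in range(n_sims):
        # n_years x 12 matrix of synthetic monthly mean temperatures
        months = rng.normal(monthly_mean, monthly_sd, size=(n_years, 12))
        annual = months.mean(axis=1)     # annual mean for each synthetic year
        sds[i] = annual.std(ddof=1)      # SD of the 10 annual means
    return np.percentile(sds, [5, 95])   # 5th and 95th percentiles

# Invented climatology: 12 monthly means and a 4 deg F SD for every month.
lo, hi = simulated_annual_sd_ci(np.linspace(-15, 40, 12), np.full(12, 4.0))
print(lo, hi)
```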

Don't worry if you didn't follow the details; the end result is a statistical estimate of what the annual standard deviation should be given a Gaussian distribution of monthly temperatures and assuming no correlation from month to month.  The chart below shows the result.  For the majority of the history the observed decadal variance has been considerably higher than the expected value, and for much of the time it even lay above the 95th percentile of the synthetic distribution.  This makes sense, because month-to-month temperatures are not independent and the positive autocorrelation increases the variance.  In other words, a very warm month is more likely than not to be followed by another warm month, so annual temperatures vary more than you would expect if you don't account for that.

[Chart: observed 10-year SD of Barrow annual mean temperatures compared with the simulated expected value and 90% confidence interval, 1930-2015]

But look at what has happened in the past few years; the observed variance has dropped well below the expected value and is not far above the 5th percentile.  This means that over the past 10-15 years, annual temperatures have varied less than we would expect from the monthly temperature variability, even with zero autocorrelation!  Of course this could happen just by random chance - that's the point of the 90% confidence interval - but it's quite surprising.  Does it mean that the autocorrelation is now negative?  Yes - for the annual variance to fall below the independent-months expectation, some of the covariances between months within the year must be negative, and indeed temperature anomalies separated by 6 months appear to have been negatively correlated in Barrow in the past decade; the monthly anomalies have tended to cancel each other out, allowing the annual temperatures to become remarkably stable.
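
One way to check this is to compute the autocorrelation of the monthly anomalies at a lag of several months over the recent period; the sketch below assumes a hypothetical file of anomalies relative to a running decadal normal.

```python
import pandas as pd

# Hypothetical input: a single column of monthly temperature anomalies
# (relative to a running decadal normal), one row per month in time order.
anoms = pd.read_csv("barrow_monthly_anoms.csv", index_col=0).squeeze("columns")

recent = anoms.iloc[-120:]          # roughly the past decade of months
print(recent.autocorr(lag=6))       # negative if warm anomalies tend to be
                                    # followed by cool ones half a year later
```
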

In conclusion, there appears to be quite strong evidence that the climate in Barrow has shifted towards a more stable temperature regime in which departures from the new (warm) normal are much less persistent than in earlier decades.  It used to be that some years would see persistent unusual cold or warmth for many months, leading to larger annual anomalies, but this doesn't happen any more.  In the past 8 years, every single year has had either 5 warmer-than-normal months and 7 colder-than-normal months, or vice versa, or 6 of each (relative to the decadal normal).  This kind of near-even split used to be unusual.
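
The month-counting exercise is easy to replicate; here's a sketch, with hypothetical file and column names, and with the "decadal normal" simplified to each calendar month's mean over 2006-2015.

```python
import pandas as pd

# Hypothetical input: monthly mean temperatures indexed by date, one row per month.
temps = pd.read_csv("barrow_monthly_series.csv",
                    index_col=0, parse_dates=True).squeeze("columns")

recent = temps.loc["2006":"2015"]
normal = recent.groupby(recent.index.month).transform("mean")  # per-calendar-month normal
anom = recent - normal

warm_months_per_year = (anom > 0).groupby(anom.index.year).sum()
print(warm_months_per_year)   # the post reports 5, 6 or 7 warm months in every recent year
```
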

I'll leave it to another post to speculate about causes of this change in Barrow's climate.  For now, it's a bit of a mystery, but an interesting one, because it exposes an aspect of climate change that I've not seen discussed before.  It would certainly also be interesting to see if the same thing has happened elsewhere in Alaska or farther afield.

3 comments:

  1. So what you are saying is that although the temperature variance throughout the year is lower, there is also a lack of persistence in temperature from one month to the next. So winter may be much warmer than normal, but we can't say that the next month will be just as warm or only slightly cooler. Is this the correct interpretation?

    Wouldn't it be more realistic to add some kind of dependency between the months? Perhaps use a combined ENSO-PDO index, since those have multi-month persistence. While I get why you used independent months, the varying magnitudes and lengths of correlation will change your results and confidence levels.

    Replies
    1. Yes, I think you have interpreted this correctly. Much lower persistence from month to month appears to explain a large part of the dramatic decline in annual variance. I wouldn't have expected the month-to-month persistence to change so much over time.

      Certainly more layers of complexity could be added to the statistical model, but I think the simplest possible version reveals what is going on.
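
      Just to illustrate the kind of dependency you're describing, one simple extension would be an AR(1) term linking consecutive months; in a sketch like the one below, phi = 0 recovers the independent-months case, and all of the inputs are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_monthly_sequence(monthly_mean, monthly_sd, phi, n_years=10):
    """Synthetic monthly means with lag-1 persistence phi between the
    standardized anomalies (phi = 0 is the independent-months case)."""
    n = n_years * 12
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):
        z[t] = phi * z[t - 1] + np.sqrt(1.0 - phi ** 2) * rng.normal()
    mu = np.tile(np.asarray(monthly_mean, dtype=float), n_years)
    sd = np.tile(np.asarray(monthly_sd, dtype=float), n_years)
    return mu + sd * z   # rescale each month by its own mean and SD
```
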

  2. Months are an old artifact of human convention...meant for social regulation and loosely tied to solar and lunar events. No news there. Time and climate move at their own pace.

    To be correct, an analysis of climate over time should exclude these arbitrary partitions and examine the flow. Unfortunately the NWS and others like to gen up monthly and annual means of means, etc., which leaves the tedious burden of parsing the data to others.

    Gary
