I've been giving some thought lately to possible methods of measuring the degree of persistence of unusual temperatures within a season or a year. For example, some months, seasons, or years are characterized by long stretches of significantly above-normal or below-normal temperatures, and others by short-lived warm or cold spells, with frequent alternation or variation. Two winters that contrasted strongly in this respect in Fairbanks are shown below.

In devising a metric for the degree of persistence, it's clear that the mean departure from normal is not of particular interest; a season or year could end up near normal even though many long warm and cold spells were observed. I considered revisiting a measure of the average magnitude of the departures from normal, or the frequency of large departures from normal (as e.g. here), but this tells us primarily about the degree of "extremeness", not the degree of persistence. I'm specifically interested in a metric that can tell the difference between temperatures that remain consistently and significantly on one side of normal, as opposed to highly variable temperatures, over the course of the month/season/year. The motivation for this is primarily to see if "persistence" has changed noticeably over recent decades.

The index I came up with - in experimental form - measures persistence by adding up the changes in mean temperature anomaly from day to day in terms of standard deviations. For example, if the temperature is 1 SD above normal one day, and 1 SD below normal the next, then +2.0 is added to the index. If the temperature is unchanged relative to normal, then nothing is added. Also, I limit the magnitude of the temperature anomalies to +/- 1 SD, because I wish to capture the idea that anomalies greater than 1 SD are all qualitatively similar - for example, we would call the weather "unusually warm" regardless of whether the temperature is 1, 2, or 3 SD above normal. Limiting the anomalies to +/- 1 SD means that the maximum daily index increment is +2.0.

After calculating all the daily increments over a period, the average is taken, and then for cosmetic purposes the "persistence index" is finally computed as the reciprocal of that average, minus 0.5. This conversion is desirable so that higher values reflect greater persistence, and so that we get a zero index value ("zero" persistence) in the hypothetical case when temperatures flip from +1 SD to -1 SD every single day.
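For interested readers, the whole procedure fits in a few lines of Python. This is just a sketch of the calculation described above; the function name and the NumPy implementation details are my own, not part of any formal analysis.

```python
import numpy as np

def persistence_index(anomalies_sd):
    """Persistence index from a series of daily temperature anomalies,
    expressed in standard deviations from normal."""
    # Clip anomalies to +/-1 SD: departures beyond 1 SD are treated as
    # qualitatively the same ("unusually warm" or "unusually cold").
    clipped = np.clip(np.asarray(anomalies_sd, dtype=float), -1.0, 1.0)
    # Daily increment: absolute day-to-day change of the clipped anomaly.
    increments = np.abs(np.diff(clipped))
    mean_inc = increments.mean()
    if mean_inc == 0.0:
        return np.inf  # anomaly never changed at all: "infinite" persistence
    # Reciprocal of the mean increment, minus 0.5, so that a series
    # flipping between +1 and -1 SD every single day scores exactly zero.
    return 1.0 / mean_inc - 0.5

# Sanity check: daily flips between +1 and -1 SD give zero persistence.
print(persistence_index([1.0, -1.0] * 10))  # 0.0
```

Note that higher values of the index correspond to smaller average day-to-day changes, i.e. greater persistence, as intended.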

If you're still with me, the result is an index that is quite easy to calculate and, I think, fairly reflects the degree of persistence within a chosen time period. The chart below shows the annual value of the index for the November-March season in Fairbanks, with the 10-year trailing mean in red. It's interesting to observe that the 10-year mean remained nearly stationary from the 1930s through the early 1960s, then ran somewhat higher until the late 1980s, as there were no low-persistence winters from 1969 through 1989. In the last 25 years, however, the index has been lower again on average, with strong variability. The winter of 1980-81 saw the highest degree of persistence, and 2008-2009 the lowest.

The chart below shows that the persistence index is only weakly connected to mean temperature anomaly, which is a good thing as I'm interested in how persistence varies independently of mean temperature. We can see that the most anomalous winters are associated with high persistence, and this is obviously unavoidable, but there is little correlation over most of the range of temperature.

The climatology of the persistence index is shown below, based on the long-term mean for each overlapping three-month "season". It's not too surprising that persistence is highest in winter and lowest in summer; temperature anomalies tend to stick around longer in winter. This is presumably because tropospheric Rossby waves have longer wavelengths in winter and therefore move more slowly.

Finally, the annual mean persistence index shows some interesting decadal-scale variations (see below), but the most recent two decades have been generally unremarkable compared to the earlier history.

I'd welcome ideas from readers on alternative ways to measure persistence. The next step is to apply the procedure to upper-air data so that we can find out if upper-air flow patterns have become more or less persistent over time.

All you are really doing is seeing how often the normalized data crosses the axis. I think a more accurate but much more complicated analysis would be a wavelet analysis. Since you have enough points for a good sample, do a Fourier transform and measure the dispersion and magnitude of the transform. Years like 1980 with long persistence will have large amplitudes at the low frequencies, whereas years like 2008 will have large amplitudes at the high frequencies. I haven't played with signal processing in years, but there are established software tools, the theory is well developed, and you could even apply error boundaries if you worked hard enough.
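To illustrate the spectral idea with two idealized anomaly series (these are synthetic sine waves of my own choosing, not real Fairbanks data), a plain Fourier transform already separates the two cases:

```python
import numpy as np

n = 150  # roughly the length of a Nov-Mar season, in days
t = np.arange(n)

# Idealized anomaly series: one slowly varying ("persistent", 50-day
# period) and one flipping sign every few days ("variable", 5-day period).
persistent = np.sin(2 * np.pi * t / 50)
variable = np.sin(2 * np.pi * t / 5)

freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per day
for name, series in [("persistent", persistent), ("variable", variable)]:
    power = np.abs(np.fft.rfft(series)) ** 2
    print(name, "spectral peak at", freqs[np.argmax(power)], "cycles/day")
```

The persistent series peaks at 0.02 cycles per day and the variable one at 0.2, so the location of the spectral power does distinguish the two regimes.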

An excellent suggestion, Eric. I found a very handy online wavelet analysis tool. It is found here: http://ion.exelisvis.com/ . I am throwing all kinds of data in there and a few things really stand out – like the 40-year periodicity of the PDO. More to come ...

That's a nice little tool. But be careful that you use it properly. The link that Gary gave provides an excellent tutorial on wavelets, in which the author highlights a few pitfalls. Mainly, you should pad with zeroes to avoid ringing in the data, make sure that you have enough data points, and be aware that you can undersample the frequencies if you pick a minimum frequency that is too low.
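As a concrete illustration of the zero-padding advice, here is a minimal NumPy sketch (the `padded_spectrum` name and the use of a plain FFT rather than a wavelet transform are my own simplifications; the tutorial's wavelet code handles padding internally):

```python
import numpy as np

def padded_spectrum(series, dt=1.0):
    """Power spectrum of a series, zero-padded to the next power of two
    to limit wraparound ("ringing") artifacts at the ends of the record."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                             # remove the mean before padding
    n_pad = 1 << int(np.ceil(np.log2(x.size)))   # next power of two
    power = np.abs(np.fft.rfft(x, n=n_pad)) ** 2 # rfft zero-pads to n_pad
    freqs = np.fft.rfftfreq(n_pad, d=dt)
    return freqs, power

# A 150-day series is padded out to 256 points before transforming.
f, p = padded_spectrum(np.random.default_rng(0).normal(size=150))
print(f.size)  # 129, i.e. 256 // 2 + 1 frequencies
```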

Interesting concept, Eric. Something new to read about and learn.

http://paos.colorado.edu/research/wavelets/wavelet1.html

Gary

The main page from which the above was extracted is listed at the bottom of the initial link. Just in case someone missed it:

http://paos.colorado.edu/research/wavelets/

Gary

Thanks Eric, that's an interesting suggestion. I agree the relative contribution of different frequencies is basically what I'm interested in, although, as you point out, it is specifically the frequency of changes near the "normal" line. I would be much less interested in contributions to the spectra from anomalies beyond +/- 1 SD.

I will have to examine whether there would be a way to summarize the information from a wavelet analysis so that different years can be easily compared - i.e., can I create an index that tells me what I want to know?