At the Huffington Post, meteorologist and blogger Paul Douglas explains his position on climate change.
I think it's a well-written explanation of his position. However, I also think it's incomplete. The part about the extraordinary number of weather records being a symptom of climate change is true, but it's not really the number of record temperatures that matters, it's the frequency at which new records are set.
Consider for a moment that you have two temperature records, covering the same period, from two locations near each other. If a record temperature is set at one, it's not so surprising that a record might also be set at the other, simply because temperature is a regional phenomenon. In statistical terms, the temperatures at the two locations are correlated. Having many record temperatures at the same time in nearby locations is an indication of an overall higher temperature; they are not independent pieces of evidence that the overall temperature was higher twice.
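To make that concrete, here is a small simulation sketch of my own (the two-station setup, Gaussian temperatures, and the 0.9 correlation are illustrative assumptions, not anything from Douglas's article). When nearby stations' temperatures are strongly correlated, their records tend to fall in the same years, so two simultaneous records carry much less evidence than two truly independent ones would:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_record_fraction(rho, n_years=100, n_trials=5000):
    """Of the years in which at least one of two stations sets a new July 4th
    record, what fraction have *both* stations setting one, when the two
    stations' temperatures are correlated with coefficient rho?"""
    cov = [[1.0, rho], [rho, 1.0]]
    both = either = 0
    for _ in range(n_trials):
        # one July 4th temperature per year at each of the two stations
        temps = rng.multivariate_normal([0.0, 0.0], cov, size=n_years)
        running_max = np.maximum.accumulate(temps, axis=0)
        # year i+1 sets a record if it beats every one of years 1..i
        new_record = temps[1:] > running_max[:-1]
        both += np.all(new_record, axis=1).sum()
        either += np.any(new_record, axis=1).sum()
    return both / either

print("independent stations:       ", round(shared_record_fraction(0.0), 2))
print("strongly correlated stations:", round(shared_record_fraction(0.9), 2))
```

With independent stations only a small fraction of record years are shared; with strongly correlated stations most of them are, which is the sense in which the second record adds little new information.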
Next, it's worth thinking about how often a record high temperature should be set at a single location if temperature varies only randomly. Consider how the data are recorded and how records are determined. If your sample is the high temperature measured on (for this example) July 4th, then the first year of data, a sample size of N=1, sets your standard. It is a record by definition, but not a very interesting one, because there is no previous record to compare it to. In the second year (N=2) there is, on average, a 50% chance of a new "highest high temperature". In the third year and after, there is a new record only if that year's July 4th temperature is higher than in all previous years.
Now, staying with this assumption of random variation, as the record gets longer there is a simple formula for the expected probability that the next July 4th temperature will set a new record:

Probability of a new record in the Nth year = 1 / N

where N > 1. This won't be very accurate for fewer than 20 or so years of data, but it gets more and more accurate as the length of the temperature record increases. For example, in the 100th year of collecting data, there is a 1% chance of a new July 4th record. This is sometimes called a "100-year event", because if we consider only the most recent 100 years of data for a particular event, we can expect a new 100-year high about once in 100 years.
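As a rough sanity check on that formula, here is a small Monte Carlo sketch of my own (assuming trend-free, independent Gaussian temperatures at a large number of hypothetical stations, which is an idealization rather than real data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_stations = 100, 50_000

# One random, trend-free July 4th temperature per year; each column is an
# independent temperature record at a hypothetical station.
temps = rng.normal(size=(n_years, n_stations))
running_max = np.maximum.accumulate(temps, axis=0)
# Year N (N >= 2) sets a new record when it beats all previous years.
new_record = temps[1:] > running_max[:-1]
observed = new_record.mean(axis=1)  # fraction of stations with a record in year N

for year in (2, 3, 10, 100):
    print(f"year {year:>3}: observed {observed[year - 2]:.3f}  vs  1/N = {1 / year:.3f}")
```

With no warming trend, the fraction of stations setting a new record in year N comes out very close to 1/N, and that is the baseline against which the real-world record counts get compared.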
And finally we have the missing piece from Mr. Douglas's explanation. We shouldn't be surprised by lots of new record temperatures, but we should be surprised when long-standing temperature records are broken with much greater frequency than we expect. We might also be surprised if we see many more record highs than record lows, because if nothing else has changed, the two should occur with roughly equal frequency.
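Here is one more illustrative sketch of my own showing that asymmetry (the 0.02-standard-deviations-per-year warming trend and the 20-year window are arbitrary choices for the demonstration, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_stations = 100, 50_000

def count_records(trend_per_year):
    """Average number of record highs and record lows per station set during
    the last 20 years of a 100-year series, with an optional linear trend."""
    trend = trend_per_year * np.arange(n_years)[:, None]
    temps = rng.normal(size=(n_years, n_stations)) + trend
    highs = temps[1:] > np.maximum.accumulate(temps, axis=0)[:-1]
    lows = temps[1:] < np.minimum.accumulate(temps, axis=0)[:-1]
    recent = slice(-20, None)  # records set in the final 20 years
    return highs[recent].sum() / n_stations, lows[recent].sum() / n_stations

for label, trend in [("no trend", 0.0), ("warming trend", 0.02)]:
    highs, lows = count_records(trend)
    print(f"{label:>14}: record highs {highs:.2f}, record lows {lows:.2f} per station")
```

Without a trend, recent record highs and record lows come out about equally often; add even a modest warming trend and record highs become far more common while record lows nearly vanish.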
I can't fault Paul Douglas for leaving that out of his article. I thought it was going to be a simple concept to explain, but it took me four paragraphs to get through it. If you read this far, I hope it makes a little more sense than it did before, and that it's clearer why scientists find this sort of data convincing evidence of climate change. Meteorologists and climate scientists (and occasionally statisticians) have more sophisticated ways of looking at this sort of data, which take advantage of the correlations and statistical dependencies to form a bigger picture of how the climate is changing.
Image: Huffington Post