Wednesday, December 2, 2009

Ending The Mystery Around Climate Change

There are three basic steps needed to predict the existence or non-existence of climate change: (1) gathering and compiling the historical temperature data; (2) smoothing and correcting the historical data (e.g., assigning relative weights to some data that may cover larger geographic regions than other data, or adjusting for changes in the locations or surroundings of temperature-recording stations over time); and (3) running the smoothed and corrected historical data through a complex computer model.
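
For concreteness, here is a toy sketch of that three-step pipeline. Everything in it is invented for illustration: the readings, the per-station corrections, and the stand-in “model” are not anything any research group actually uses.

```python
from statistics import mean

# Step 1: raw historical readings (station -> daily temperatures in degC).
# These numbers are made up purely to illustrate the shape of the data.
raw = {
    "station_a": [10.1, 10.3, 9.8, 10.0],
    "station_b": [12.4, 12.6, 12.5, 12.7],
}

# Step 2: "value-added" data. A simple per-station offset stands in for
# real corrections such as station moves or geographic weighting.
corrections = {"station_a": 0.0, "station_b": -0.2}
value_added = {s: [t + corrections[s] for t in temps] for s, temps in raw.items()}

# Step 3: a stand-in "model". Real climate models are vastly more complex;
# this just averages the corrected series into a single number.
prediction = mean(t for temps in value_added.values() for t in temps)
print(prediction)
```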

The first step involves no science. The second step involves very little science. Both of these steps can and should be open to participation and review by the general public. The mechanisms for doing so already exist and are nearly costless.

The University of East Anglia’s Climatic Research Unit states that it lost or discarded the data from Step 1 after it completed Step 2. It refers to the data it created in Step 2 as “value-added data.” It is this value-added data that it has run through its computer models to reach its predictions.

Science demands that Step 1 be verifiable and reproducible. There is an easy way to accomplish this: all the historical data should be copied (most of it exists in the form of handwritten logs) and stored on the Internet. This is exactly the sort of task at which Google excels (no pun intended). All the world’s temperature-recording stations should photocopy their logs and send them to Google.

Assuming each station averages one page of temperature readings per month for the last hundred years, and assuming there are one thousand stations, that amounts to 1.2 million pages of data, a trivial amount for Google to handle. A page of characters requires about 2 kilobytes of storage, so 1.2 million pages require less than 3 gigabytes. An iPhone has 32 gigabytes of storage. Even if the data were stored as photographs, which would require significantly more space, this would still be a very small undertaking for Google. (Google already has satellite photos of virtually every square foot of the entire world available on-line for free.)
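
The storage arithmetic works out as follows, assuming the page counts above and roughly 2 kilobytes per page of plain text:

```python
# Back-of-the-envelope storage arithmetic from the figures above.
stations = 1_000
pages_per_station = 12 * 100                 # one page per month for 100 years
total_pages = stations * pages_per_station   # 1,200,000 pages

bytes_per_page = 2 * 1024                    # ~2 KB of plain text per page
total_bytes = total_pages * bytes_per_page

total_gb = total_bytes / 1024**3
print(f"{total_pages:,} pages ~ {total_gb:.2f} GB")   # well under 3 GB
```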

Next, the handwritten data should be put into spreadsheet form. This would significantly compress the data. (The collective public would be able to verify that the information from the photocopies of the logs was accurately transferred into spreadsheet form.) At one data point per day, one hundred years of data points would require a spreadsheet with 36,500 cells, one spreadsheet for each station.
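
As a rough illustration, one such per-station spreadsheet could be as simple as a CSV file with one row per day. The station name, filename, and date range below are invented.

```python
import csv
from datetime import date, timedelta

# Hypothetical example: one CSV per station, one row per daily reading.
# One hundred years of daily readings is roughly 36,500 rows.
start = date(1910, 1, 1)
days = [(start + timedelta(days=i)).isoformat() for i in range(36_500)]

with open("central_park_station.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "temperature_f"])
    for d in days:
        writer.writerow([d, ""])   # temperature to be transcribed from the logs
```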

Next step: Wikipedia. The on-line encyclopedia could create an entry for each recording station. Each entry would link to a Google spreadsheet with the station’s data and include whatever historical narrative the recording station has provided: for example, information explaining how the location or procedures changed over the years. Was the recording station in Central Park ever moved?
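
Sketched as a data structure, a station entry might carry something like the following. The fields, URL, and historical notes are all hypothetical.

```python
# Hypothetical per-station record; every field and value here is invented
# to illustrate the kind of narrative an entry could hold.
station_entry = {
    "name": "Central Park, New York",
    "data_spreadsheet_url": "https://example.org/central_park_station.csv",
    "history": [
        {"year": 1920, "note": "Station relocated within the park."},
        {"year": 1958, "note": "Thermometer replaced; procedure unchanged."},
    ],
}
print(station_entry["name"])
```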

This narrative would then be available to those wishing to adjust and smooth the data (i.e., add value). There may be debate about how and why the raw data should be adjusted, but at least that debate could take place in plain view of the general public. Someone with deep historical knowledge of Central Park might be able to provide valuable insight into why the data from 1958 look different from the same year’s data at another nearby location.
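
For illustration, here is a toy example of the kind of adjustment and smoothing such a debate would scrutinize. The station-move offset and the trailing moving average below are invented stand-ins for real homogenization methods.

```python
def adjust_for_station_move(temps, move_index, offset):
    """Apply a constant offset to readings taken after a documented
    station move (a stand-in for real homogenization adjustments)."""
    return [t + offset if i >= move_index else t for i, t in enumerate(temps)]

def moving_average(temps, window=3):
    """Trailing moving average as a stand-in for smoothing."""
    out = []
    for i in range(len(temps)):
        lo = max(0, i - window + 1)
        out.append(sum(temps[lo:i + 1]) / (i + 1 - lo))
    return out

# Invented example series: a documented move at index 5 warmed the site
# by 0.5 degC, so later readings are adjusted back down before smoothing.
raw = [10.0, 10.1, 9.9, 10.2, 10.1, 10.7, 10.6, 10.8]
adjusted = adjust_for_station_move(raw, move_index=5, offset=-0.5)
print(moving_average(adjusted))
```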

In sum, Step 1 and Step 2 can easily be placed in the public domain, where they not only belong but can also be better handled. The final step would be for scientists to publish the details of the mathematical models they use to analyze the value-added data set and to offer their predictions. It may well be that fewer than one in one thousand people are capable of understanding such models and the math and science behind them, but the planet has six billion people, six million of whom would qualify as that one in one thousand.
