My previous post on sea-surface temperature (SST) differences between HadSST and ERSST generated a lively discussion. Some, this author included, asserted that the Hadley Centre HadSST record and NOAA’s ERSST (the Extended Reconstructed Sea Surface Temperature) record could be used as is, and did not need to be turned into anomalies from the mean. Anomalies are constructed by taking a mean value over a specified reference period, for a specific location, and then subtracting this mean from each measurement at that location. For the HadSST dataset, the reference period is 1961-1990. For the ERSST dataset, the reference period is 1971-2000.
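For readers who like to see the arithmetic, here is a minimal sketch of the anomaly calculation for a single location, assuming monthly mean temperatures held in a pandas Series indexed by date; the function name and inputs are hypothetical and not taken from either dataset’s code.

```python
import pandas as pd

# Minimal sketch of anomaly construction for one location (hypothetical inputs).
# 'sst' is a pandas Series of monthly mean temperatures (deg C) indexed by date.
# The reference period defaults to the HadSST choice of 1961-1990.
def to_anomalies(sst: pd.Series, ref_start="1961-01-01", ref_end="1990-12-31") -> pd.Series:
    ref = sst.loc[ref_start:ref_end]
    # Climatology: the mean for each calendar month over the reference period
    climatology = ref.groupby(ref.index.month).mean()
    # Subtract the matching monthly reference mean from every measurement
    return sst - climatology.reindex(sst.index.month).to_numpy()
```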

Most SST measurements are made from moving ships, drifting buoys, or Argo floats, so the reference mean for a given location is computed from a variety of instruments measuring at a variety of depths. In the case of HadSST, the reference is computed for 5° by 5° latitude and longitude “grid cells.” At the equator, each cell covers 308,025 square kilometers (119,025 square miles), a square roughly 556 kilometers (345 miles) on a side. The distance spanned by a degree of longitude shrinks toward the poles, but at 40° latitude, north or south, 5° of longitude is still 425 kilometers or 265 miles. These “reference cells” are huge areas in the mid to lower latitudes, but they are small near the poles.
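The cell sizes quoted above follow from simple spherical-Earth arithmetic. The sketch below reproduces the approximate numbers; it assumes a mean Earth radius and ignores the ellipsoid, so it is illustrative, not the Hadley Centre’s exact geometry.

```python
import math

# Rough spherical-Earth arithmetic behind the cell sizes quoted above.
# Approximation only: mean Earth radius, no ellipticity.
EARTH_RADIUS_KM = 6371.0
KM_PER_DEG_LAT = math.pi * EARTH_RADIUS_KM / 180.0   # ~111.2 km per degree of latitude

def lon_degree_km(lat_deg: float) -> float:
    """Length of one degree of longitude at a given latitude, in km."""
    return KM_PER_DEG_LAT * math.cos(math.radians(lat_deg))

def cell_area_km2(lat_deg: float, size_deg: float = 5.0) -> float:
    """Approximate area of a size_deg x size_deg cell centered at lat_deg."""
    return (size_deg * KM_PER_DEG_LAT) * (size_deg * lon_degree_km(lat_deg))

print(cell_area_km2(0.0))      # ~309,000 km^2 for a 5x5 degree cell at the equator
print(5 * lon_degree_km(40.0)) # ~426 km for 5 degrees of longitude at 40 N or S
```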

To make matters worse, the technology used in the reference periods, either 1961-1990 or 1971-2000, is far less accurate than the measurements made today. In fact, NOAA weights Argo float and drifting-buoy data, introduced in the early 2000s, by 6.8X relative to ship data (Huang, et al., 2017). The Hadley Centre says that Argo floats reduce their uncertainty by 30% (Kennedy, Rayner, Atkinson, & Killick, 2019). During the two reference periods, almost all the data came from ships. This means the reference-period measurements carry significantly more uncertainty than those made today. We might assume that the additional uncertainty is random, but that is unlikely to be the case.

On land, all measurements made in the reference period can come from the same weather station, and that station may have stayed in precisely the same location the whole time. There are serious problems with many land-based weather stations, as documented by Fall, Watts and colleagues (Fall, et al., 2011), but at least the stations are not constantly moving. Land-based stations are fixed, but their elevations all differ, and since air temperature is a function of elevation, creating anomalies to detect changes and trends makes a lot of sense. Weather stations, on sea and land, are distributed unevenly, so gridding the values is necessary when coverage is insufficient. In some areas, such as the conterminous United States (CONUS), there are so many weather stations that, arguably, gridding is unnecessary and, if done, can even reduce the accuracy of the computed average temperature trend.

CONUS occupies 3.1 million square miles and has 11,969 weather stations in the GHCN (Global Historical Climatology Network), or about 260 square miles per station. Each station provides roughly 365 observations per year, more in some cases, for a total of at least 4.4 million observations. This amounts to about 1.4 observations per square mile. The coverage is adequate, and the stations are at fixed locations and reasonably accurate. The world ocean covers 139.4 million square miles. In 2018, HadSST had a total of 18,470,411 observations. This is about 0.13 observations per square mile, or 9% of the coverage in the conterminous U.S.
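These density figures are simple ratios of the numbers quoted above; a short script makes the comparison explicit (the per-station observation count is the rough figure used in the text, not an exact GHCN tally).

```python
# Back-of-the-envelope coverage comparison using the figures quoted above.
conus_area_mi2 = 3.1e6
conus_stations = 11_969
conus_obs = conus_stations * 365                   # ~4.4 million observations per year

ocean_area_mi2 = 139.4e6
hadsst_obs_2018 = 18_470_411

conus_density = conus_obs / conus_area_mi2         # ~1.4 observations per square mile
ocean_density = hadsst_obs_2018 / ocean_area_mi2   # ~0.13 observations per square mile

print(conus_area_mi2 / conus_stations)             # ~260 square miles per station
print(ocean_density / conus_density)               # ~0.09, i.e. ~9% of CONUS coverage
```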

Any computation of an average temperature, or a temperature trend, should be done as close to the original measurements as possible. Only the corrections and data manipulations that are required should be done; more is not better. Sea-surface temperature measurements are already corrected to an ocean depth of 20 cm, and that reference depth does not change. The source and the quality of the measurements at any ocean location, however, change constantly. The reference-period temperature is not computed from one platform, one type of equipment, or one depth, so the reference is prone to error and severe inconsistencies. Who is to say the reference temperature that is subtracted from the measurements is as accurate as the measurements themselves? It is generally acknowledged that buoy and Argo float data are more accurate than ship data, and by 2018 the buoy and float data were also more numerous; the reverse was true from 1961-1990 (Huang, et al., 2017).

On the face of it, we believe that turning accurate measurements into inaccurate anomalies is an unnecessary and confounding step that should be avoided. Next, we summarize how the anomalies are calculated.

HadSST anomalies

First the in-situ measurements are quality-checked, and the surviving measurements are divided into 1° x 1° latitude and longitude, 5-day bins. The five-day bin is called a pentad. There are always 73 pentads in a year, so leap years have one 6-day “pentad” (Kennedy, Rayner, Atkinson, & Killick, 2019). The pentads are grouped into pseudo-months and augmented by monthly values from cells partially covered in ice. Each one-degree pentad is then turned into an anomaly by subtracting its 1961-1990 mean; the one-degree pentad anomalies are called “super-observations” (Rayner, et al., 2006). Finally, the one-degree pentads are combined with a weighted “winsorized mean” into a monthly five-degree grid that is the basic HadSST product. An attempt to correct all the measurements to a 20 cm depth is made prior to computing the monthly mean value for each five-degree grid cell.
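To make two of these steps concrete, the sketch below shows one way to assign a date to one of the 73 pentads and a simple winsorized mean. It illustrates the general technique only; it is not the Hadley Centre’s code, it omits their weighting, ice handling, and depth corrections, and the leap-day handling here (folding day 366 into the final pentad) is just one simple convention.

```python
import numpy as np

def pentad_index(day_of_year: int) -> int:
    """Return the pentad number (1-73); day 366 folds into pentad 73."""
    return min((day_of_year - 1) // 5 + 1, 73)

def winsorized_mean(values, limit=0.25):
    """Mean after clamping the lowest and highest `limit` fraction of values."""
    v = np.sort(np.asarray(values, dtype=float))
    k = int(limit * len(v))
    if k > 0:
        v[:k] = v[k]                          # clamp the low tail
        v[len(v) - k:] = v[len(v) - k - 1]    # clamp the high tail
    return v.mean()

print(pentad_index(366))                                 # 73
print(winsorized_mean([10.1, 10.3, 10.2, 25.0, 10.4]))   # ~10.3, the outlier is pulled in
```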

Over the past twenty years, the average populated five-degree cell has had 761 observations, which is one observation every 156 square miles (404 sq. km) at the equator. We subjectively consider this good coverage and regard the populated cells as solid values. However, as we saw in our last post, not every five-degree cell in the world ocean has a grid value or observations. In round numbers, only 37% of the world ocean cells have monthly values in 2018; this is 8,186 monthly ocean cells out of 22,084. Notice that the polar cells, which make up most of the cells with no values, are small in area relative to the mid- and lower-latitude cells. Thus, the area covered by the populated cells is much larger than 8,186/22,084, or 37%, of the ocean. I didn’t compute the area covered, but it is likely more than half of the world ocean.
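The claim that 37% of the cells cover more than 37% of the ocean can be checked by weighting each cell by the cosine of its latitude, which is proportional to a 5° cell’s area. A sketch, with made-up cell-center latitudes:

```python
import math

# Sketch: why a minority of cells can cover a larger share of ocean area.
# A 5x5 degree cell's area scales with cos(latitude), so mid-latitude cells
# count for more than polar cells. The latitude lists below are illustrative.
def area_fraction(populated_lats, all_ocean_lats):
    weight = lambda lats: sum(math.cos(math.radians(lat)) for lat in lats)
    return weight(populated_lats) / weight(all_ocean_lats)

# Example: 3 populated cells near the equator out of 5 ocean cells, two of
# which are polar. Cell count says 60%, but area-weighted coverage is ~85%.
print(area_fraction([0, 10, -10], [0, 10, -10, 75, -75]))
```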

ERSST anomalies

The basic units used in constructing the ERSST dataset are 2°x2° latitude and longitude monthly bins. A 1971-2000 average of quality-controlled measurements is computed for each bin, and this average is subtracted from each measurement taken in the bin to create an anomaly. After this is done, the various measurements (ship, buoy, and Argo) are adjusted to account for the global average difference in their values. The adjusted values are then averaged into 2°x2° monthly “super observations.” Buoy and Argo data are weighted by a factor of 6.8X the ship observations (Huang, et al., 2017). Since the year 2000, Argo and buoy data have dominated the ERSST dataset, both in quality and quantity. This is easily seen in Figure 1 of our last post, where the Argo-dominated University of Hamburg and NOAA MIMOC multiyear temperature estimates fall on top of the ERSST line, and it is also verified by Huang, et al. (2017).
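As an illustration of the weighting step, the sketch below computes a weighted mean of anomalies in one bin with buoy and Argo values given 6.8 times the weight of ship values. The simple weighted average and the variable names are mine, not NOAA’s actual procedure.

```python
import numpy as np

# Illustrative weighted "super observation" for one 2x2 degree monthly bin.
# Buoy/Argo anomalies get 6.8 times the weight of ship anomalies, per the
# weighting described in Huang et al. (2017). Inputs are hypothetical.
def super_observation(ship_anoms, buoy_anoms, buoy_weight=6.8):
    values = np.concatenate([ship_anoms, buoy_anoms])
    weights = np.concatenate([np.ones(len(ship_anoms)),
                              np.full(len(buoy_anoms), buoy_weight)])
    return np.average(values, weights=weights)

# One noisy ship reading barely moves the result when buoys dominate the weight.
print(super_observation([0.8], [0.2, 0.25, 0.22]))   # ~0.25
```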

The 2°x2° bins used for ERSST are 19,044 square miles (49,324 sq. km) at the equator. Once the ERSST gridding process, with its interpolations, extrapolations, and infilling, is complete, 10,988 of 11,374 ocean cells are populated. Only 3% are null; compare this to the 63% of grid cells that are null in HadSST. The number of observations per cell was not available in the datasets I downloaded from NOAA, but this is less important in their dataset, since they use a complicated gridding algorithm to compute the cell values.

The justification for creating SST anomalies

No justification for creating SST anomalies is offered, at least none that I saw, in the primary HadSST or ERSST references; they simply include the step in their procedures without comment. One reason we can think of is that anomalies make it easier to combine the SSTs with terrestrial records. Anomalies are needed on land due to weather-station elevation differences. But this does not help us in our task, which is to determine the average global ocean temperature trend. Land temperatures are quite variable and only represent 29% of Earth’s surface.

In the WUWT discussion of my last post, Nick Stokes (his blog is here) said:

“Just another in an endless series of why you should never average absolute temperatures. They are too inhomogeneous, and you are at the mercy of however your sample worked out. Just don’t do it. Take anomalies first. They are much more homogeneous, and all the stuff about masks and missing grids won’t matter. That is what every sensible scientist does.

So, it is true that the average temperature is ill-defined. But we do have an excellent idea of whether it is warming or cooling. That comes from the anomaly average.”

So, even though the reference periods, 1961-1990 for HadSST and 1971-2000 for ERSST, are computed using clearly inferior and less consistent data than we have today, we should still use anomalies because they are more homogeneous and because “every sensible scientist does” it? Does homogeneity make anomalies more accurate, or less? Nick says anomalies allow the detection of trends regardless of how the area or measurements have changed over time. But the anomalies mix post-Argo data with pre-Argo data.

As we saw in the last post, the anomalies show an increasing temperature trend, but the measurements, weighted toward Argo and drifting-buoy data by 6.8X, show a declining temperature trend. Which do we believe? The recent measurements are clearly more accurate; Huang, et al. call the Argo data “some of the best data available.” Why deliberately downgrade this good data by subtracting inferior-quality reference means from the measurements?

Nick explains that the anomalies show an increasing temperature trend because, in his view, the climate is actually warming. He believes the measured temperatures show cooling because the coverage of cold regions is improving over time, and this creates an artificial cooling trend. The cooling trend is shown in Figure 1, which plots measured HadSST and ERSST temperatures over the same ocean region. Only 18% of the world ocean cells, in 2018, are represented in Figure 1, mostly in the middle latitudes. The ocean area represented in Figure 1 is much larger than 18%, because the missing northern and southernmost cells cover smaller areas.

Figure 1. The ERSST and HadSST records over the same ocean area. Both show declining ocean temperatures. The least squares lines are not meant to demonstrate linearity; they are only used to compute a slope. Both trends are about -3.5 degrees C per century.
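For reference, a slope like the one quoted in the caption can be extracted with an ordinary least-squares fit and scaled to degrees per century; the sketch below uses synthetic monthly data, not the actual HadSST or ERSST series.

```python
import numpy as np

# Least-squares trend scaled to degrees C per century. 'temps' is an assumed
# array of monthly mean temperatures; the linear fit is used purely to extract
# a slope, as noted in the caption.
def trend_deg_per_century(temps, samples_per_year=12):
    t_years = np.arange(len(temps)) / samples_per_year
    slope_per_year, _intercept = np.polyfit(t_years, temps, 1)
    return slope_per_year * 100.0

# Example with synthetic data cooling at 3.5 degrees C per century:
years = np.arange(240) / 12.0
print(trend_deg_per_century(20.0 - 0.035 * years))   # ~ -3.5
```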

The plot below shows almost the whole ocean, using the ERSST grid, which only has 3% null cells. The cells are mostly filled with interpolated and extrapolated values. The measured temperatures are heavily weighted in favor of the highest quality Argo and buoy measurements.

Figure 2. Using NOAA’s ERSST gridding technique we do see a little bit of an increasing trend in surface temperatures, roughly 1.6 degrees C per century.

The ERSST trend of 1.6 degrees per century is close to the trend in the HadSST and ERSST anomalies, shown in Figure 3.

Figure 3. The HadSST and ERSST anomalies moved to the same reference period.

So, Nick has a point. Figure 2 shows the ERSST trend, which is mostly composed of extrapolated and interpolated data but represents nearly the entire ocean. It shows warming of 1.6°C/century, close to the 1.7°C/century shown by the HadSST and ERSST anomalies. The real question is why the HadSST anomalies, which use the same data plotted in Figure 1 and cover the same ocean area, are increasing. ERSST is consistent between the measurements and the anomalies, and HadSST is not; how did that happen? Nick would say it is the continuous addition of polar data, but I’m not so sure. The count of populated ERSST cells is not increasing much, and ERSST also trends down over the HadSST area.

It is more likely that the ocean area covered by HadSST is cooling and the global ocean is warming slightly. If CO2 is causing the warming and increasing globally, why are the mid- and low-latitude ocean temperatures decreasing and the polar regions warming? See the last post for maps of the portion of the ocean covered by HadSST; one of those maps is shown in Figure 4. The white areas in Figure 4 have no values in the HadSST grid; these are the areas that do not contribute to Figure 1. The area colored in Figure 4 has a declining ocean temperature.

Figure 4. The colored area has values; these values are plotted in Figure 1. The white areas have no values.

By using anomalies, are we seeing an underlying global trend? Or are anomalies obscuring an underlying complexity? Look at the extra information we uncovered by using actual temperatures. Much of the ocean is cooling. Globally, perhaps, the ocean is warming 1.6 to 1.7 degrees per century, hardly anything to worry about.

Another factor to consider: the number of HadSST observations increased a lot from 2000 to 2010, and after 2010 it has been reasonably stable, as seen in Figure 5. Yet the decline in temperature in Figure 1 is very steady.

Figure 5. Total HadSST observations by year.

Conclusions

One thing everyone agrees on is that the ocean surface temperature trend is the most important single variable in the measurement of climate change. It should be measured correctly and with the best data. Using inferior 20th-century data to create anomalies generates a trend consistent with the ERSST grid, which is a reasonable guess at what is happening globally, but there is so much interpolation and extrapolation in the estimate that we can’t be sure. The portion of the ocean where we have sufficient data, the HadSST area, has a declining trend, something not seen when using anomalies. The declining trend is also seen in the ERSST data over the same area. This suggests that it is not an artifact of the addition of new polar data over time, but a real trend for that portion of the world ocean.

Probably the full ocean SST is increasing slightly, at the unremarkable rate of about 1.6°C/century. This shows up in the anomalies and in the ERSST plot. But it ignores the apparent complexity of the trend: the portion of the ocean with the best data is declining in temperature. Bottom line, we don’t know very much about what ocean temperatures are doing or where the changes are happening. Since the ocean temperature trend is the most important variable in detecting climate change, we don’t know much about climate change either. Nick was right that anomalies were able to pick out the probable trend, assuming that ERSST is correct, but using anomalies obscured important details.

None of this is in my new book Politics and Climate Change: A History, but buy it anyway.

Download the bibliography here.

Andy May, now retired, was a petrophysicist for 42 years. He has worked on oil, gas and CO2 fields in the USA, Argentina, Brazil, Indonesia, Thailand, China, the UK North Sea, Canada, Mexico, Venezuela and Russia. He specializes in fractured reservoirs, wireline and core image interpretation and capillary pressure analysis, besides conventional log analysis. He is proficient in Terrastation, Geolog and Powerlog software. His full resume can be found on LinkedIn or here: AndyMay