Moth Eyes

Navigating a demon-haunted world

Dr. Eugenie Scott on Evolution and Global Warming Denialism

Here’s a fantastic lecture from Dr. Eugenie Scott on the similarities in tactics and thinking between those who reject global warming and those who reject evolution:

HT: Greg Laden

September 28, 2011 at 9:13 pm Comments (0)

An amazing discovery

Tom Nelson has made an amazing discovery, sure to shock the scientific and mathematical worlds!

Apparently, if you look at a bunch of different numbers, some of them are higher than average! Shock! Someone alert the statistics community, they’ve got to know about this! Of course it would be copied onto WUWT and Climate Depot. Is there any notion of quality control on those sites?

Here’s a map (from NASA’s GISS). The average, as you can see, is an anomaly of 0.51°C. You’ll notice that some regions have a larger anomaly than this: they’re the ones coloured dark orange, red and dark red (as well as, probably, most of the medium-orange ones). Those regions are warming faster than average. The others aren’t.

Oh, and part of Nelson’s post is a link to a completely off-topic post on Mars on the website of some scumbag.


July 26, 2010 at 12:26 am Comments (0)

Was the Arctic Ice Cap ‘Adjusted’?

Over at “American Thinker”, Randall Hoven has a post about the Arctic ice cap and, specifically, the difference between the “area” and “extent” measures of its size. The problems start with the interpretation of a graph much like this:

Now, you probably noticed the substantial discontinuity in the “area” during 1987. This is even more apparent if you look purely at the difference between extent and area:

I’ve also plotted the difference between the extent and area for the entire period (taken from the bootstrap data):

Now, Randall includes an “Important Note” from the raw data file, which explains:

Important Note: The “extent” column includes the area near the pole not imaged by the sensor. It is assumed to be entirely ice covered with at least 15% concentration. However, the “area” column excludes the area not imaged by the sensor. This area is 1.19 million square kilometres for SMMR (from the beginning of the series through June 1987) and 0.31 million square kilometres for SSM/I (from July 1987 to present). Therefore, there is a discontinuity in the “area” data values in this file at the June/July 1987 boundary.

So the discontinuity exists because, from the start of the series to mid-1987, the data come from SMMR, which had no data for 1.19 million square kilometres around the pole (the “pole hole”); it was then replaced by the SSM/I instrument, which misses only 0.31 million square kilometres. The “area” figure does not account for this, and since at least most of that region is covered in sea ice, the area series gains almost 0.88 million square kilometres of extra sea ice from mid-1987 onwards, purely because of the instrument change. So, obviously, if I remove this offset, the discontinuity disappears:
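If you want to reproduce that, the correction is trivial. A minimal sketch in Python, assuming the monthly series has already been read out of the NSIDC file into (date, area) pairs (the file-parsing itself isn’t shown):

from datetime import date

SMMR_HOLE = 1.19    # million sq km unobserved, start of series to June 1987
SSMI_HOLE = 0.31    # million sq km unobserved, July 1987 onwards
SENSOR_SWITCH = date(1987, 7, 1)

def remove_step(records):
    # Remove the artificial ~0.88 million sq km step at the June/July
    # 1987 sensor change by putting both halves of the series on the
    # same footing. Here the extra imaged area is subtracted from the
    # SSM/I half; adding it to the SMMR half instead (on the assumption
    # that the hole is ice-covered) would remove the step just as well.
    # records: iterable of (date, area) with area in million sq km.
    offset = SMMR_HOLE - SSMI_HOLE   # 0.88 million sq km
    return [(d, a - offset) if d >= SENSOR_SWITCH else (d, a)
            for d, a in records]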

Looking at this, there is still substantial variation in the difference between “extent” and “area” figures. Randall asks why:

What were the differences? From the above words from the NSIDC, you would think that the differences would be constant offsets (1.19 million sq km from 1979 through June of 1987, and 0.31 million since). But the actual differences in the data file were not constant at all; they varied between 1.93 and 3.42 million sq km.

Notice, however – it shows up particularly clearly with the complete data set – that these differences follow an annual cycle (plus some variation: weather). And there’s no reason to assume that “extent” and “area” measure exactly the same thing. So, if we check how the NSIDC define these terms, we learn:

In computing the total ice-covered area and ice extent with both the NASA Team and Bootstrap Algorithms, pixels must have an ice concentration of 15 percent or greater to be included. Total ice-covered area is defined as the area of each pixel with at least 15 percent ice concentration multiplied by the ice fraction in the pixel (0.15 to 1.00). Total ice extent is computed by summing the number of pixels with at least 15 percent ice concentration multiplied by the area per pixel, thus the entire area of any pixel with at least 15 percent ice concentration is considered to contribute to the total ice extent.

These are, obviously, different figures: for “area”, each pixel is weighted by its ice concentration, and concentrations will generally be higher in winter, when the pack is consolidated, than in summer, when it is more diffuse. So the gap between “extent” and “area” should shrink in winter and grow in summer – which is exactly the annual cycle we see in the differences. Randall, on the other hand, resolves the issue by completely ignoring it.
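To make the distinction concrete, here’s a toy version of the two calculations as the NSIDC describes them; the pixel sizes and concentrations below are invented purely for illustration:

def area_and_extent(pixels, threshold=0.15):
    # pixels: iterable of (pixel_area_km2, ice_concentration 0..1).
    # Per the NSIDC definitions: extent counts the whole pixel once
    # concentration reaches 15%, while area weights the pixel by its
    # concentration.
    area = sum(a * c for a, c in pixels if c >= threshold)
    extent = sum(a for a, c in pixels if c >= threshold)
    return area, extent

# A consolidated winter pack (high concentrations)...
winter = [(625.0, 0.95), (625.0, 0.90), (625.0, 0.85)]
# ...and a diffuse summer pack (lower concentrations).
summer = [(625.0, 0.55), (625.0, 0.40), (625.0, 0.20)]

for label, pixels in (("winter", winter), ("summer", summer)):
    area, extent = area_and_extent(pixels)
    print(f"{label}: extent - area = {extent - area:.1f} km^2")
# The gap between extent and area grows as concentrations drop, which
# is the seasonal cycle visible in the difference plot above.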

Going back to the March data: before adjusting for the “pole hole”, I find – like Randall – that it actually has a slight positive trend:


However, after adding the pole hole region back in, I get a clear downward trend in the quantity of sea ice:

Now, I’ll emphasise that this isn’t (necessarily) accurate: some portion of the pole hole (how much, I don’t know) might not contain sea ice during March. That data obviously exists, but I don’t have the time at the moment to analyse it.
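For anyone who wants to check the trend claims, the calculation is nothing fancier than an ordinary least-squares fit. A sketch, where the two dictionaries are placeholders for yearly March series built from the NSIDC bootstrap files (one as measured, one with the pole-hole area added back):

import numpy as np

def trend_per_decade(years, values):
    # Least-squares slope, expressed per decade.
    slope, _intercept = np.polyfit(years, values, 1)
    return slope * 10

def compare_trends(march_area, march_adjusted):
    # march_area:     {year: measured March "area", million sq km}
    # march_adjusted: the same series with the pole-hole area added back
    years = sorted(march_area)
    raw = trend_per_decade(years, [march_area[y] for y in years])
    adj = trend_per_decade(years, [march_adjusted[y] for y in years])
    print(f"measured area:      {raw:+.2f} million sq km per decade")
    print(f"pole-hole adjusted: {adj:+.2f} million sq km per decade")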

And all of this means that the rest of Randall’s conclusions are invalid, since they are based on a false premise.

Actually, the rate of growth is statistically insignificant, meaning that a statistician would say that it is neither growing nor shrinking; it just bobs up and down randomly. More good news: no coming ice age, either.

No, there is definitely a significant trend.

You see that “extent” always shows more shrinkage than “area” does. In the months of maximum sea ice, February and March, the area trend is upward. And for winter months generally, December through May, any trend in area is statistically insignificant. For summer months, July through October, the trend is downward and statistically significant.

But these calculations are all based on extremely biased data for the start of the period, and so are all wrong.

Katie Couric should have used the month of September as her example. In three decades, the Arctic sea ice “extent” shrank by 34%. She could make such claims while stating, truthfully, that the data come from NSIDC/NOAA and the trend is statistically significant. It’s science.

Despite the sarcasm dripping from this sentence, yes, it is science. The Arctic ice is melting. Without the “pole hole”, September looks like this:

As you can see, my trend line isn’t a very good fit to this data, and, as Randall says, any decrease seems to be confined to just the last few years. After adding in the pole hole, however, things look a lot different:

Again, the red line represents “area,” the only thing actually measured. A downward trend is evident to the eyeball. But look closely and that downward trend is fairly recent — say, since 2000. Indeed, the calculated trend was slightly upward through 2001. That is, the entire decline is explained by measurements since 2002, a timespan of just eight years.

But the older data was biased low, so the downward trend actually runs over the whole period – and is somewhat stronger, to boot.

To understand the trend, you need to understand the data you’re looking at. Or, as the readme file for the data Randall Hoven looked at put it: “we recommend that you read the complete documentation in detail before working with the data”. Had Randall done that, and checked the meanings of “area” and “extent” before writing this piece, he could have saved himself a lot of bother and embarrassment.

Randall starts his conclusion like this:

This little Northern Hemisphere sea ice example captures so much of the climate change tempest in microcosm.

And that’s very true. Someone looked at data he didn’t understand, analysed it improperly, and reached strong but utterly false conclusions as a result. And then, even when corrected on the misunderstanding, he continued to believe those conclusions.

See, as I was writing this post, Randall posted a correction on his site. It turns out that he’d found the definitions of area and extent (technically, he still got the definition of area slightly wrong, but it’s not as bad). However, while acknowledging these problems with his main article, he tries to salvage the point with this:

If we add the “pole hole” back to the measured “area,” we would get a downward trend in area due to the change in pole hole size in 1987. If we assume that the pole hole is 100% ice, then the downward trend in March would be 2.2% per decade. But if we assume that the pole hole is only 15% ice (the low end of what is assumed), then the downward trend is only 0.1% per decade, which is not statistically significant. (The corresponding downward trend for “extent” was 2.6% per decade.)

It is true that whatever downward trend there is for March is due only to these adjustments (assumed pole hole size and concentration). And whether that trend is statistically significant depends on ice concentration in the “pole hole,” an assumed value.

For a start, it seems to me to be a fairly reasonable assumption that the ice content of the pole hole is towards the high end of the range – after all, that’s the bit of the Arctic closest to the North Pole. And the thing is, that assumption is a pretty darn testable one. All you have to do is go to the North Pole and look. Come to think of it, I’d be willing to bet that someone already has.
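And Randall’s sensitivity test is easy enough to replicate: scale the pole-hole area by whatever concentration you care to assume, add it back, and refit. Another sketch along the same lines, with placeholder series again:

import numpy as np

def adjusted_trend(march_area, hole_area, concentration):
    # march_area:    {year: measured March "area", million sq km}
    # hole_area:     {year: pole-hole area for that year's sensor,
    #                 i.e. 1.19 through 1987 and 0.31 afterwards}
    # concentration: assumed ice fraction in the hole (0.15 to 1.0)
    years = sorted(march_area)
    series = [march_area[y] + concentration * hole_area[y] for y in years]
    slope, _ = np.polyfit(years, series, 1)
    return slope * 10   # million sq km per decade

# e.g. compare adjusted_trend(area, hole, 0.15) with adjusted_trend(area, hole, 1.0)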

Image from Wikipedia

April 8, 2010 at 2:34 am Comments (0)

Hiding the rise

Alternative title: The complete idiot’s guide to cherrypicking.

Willis Eschenbach (of Darwin Zero fame) has a post on Watts Up With That concerning the homogenisation process in Anchorage and Matanuska (both in Alaska). Matanuska was chosen for being close to Anchorage. But why start with Anchorage? No explanation is given. Something about this smells like cherrypicking – picking out a station that happens to have an odd-looking adjustment and building a conspiracy theory around it.

Well, two can play at this game. I wrote a little program to find stations whose homogenisation adjustment trends downward in the GHCN v2 data. Nothing even approaching clever: I just picked a few of the stations that had a reasonable amount of data from the last 40 years and a smaller homogenisation adjustment in 2009 than in 1970. But suppose I hadn’t given that explanation, and had just talked about the odd trend in Asheville? That would hardly be methodologically valid.
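For the curious, here’s roughly what that little program does. This is a sketch rather than the code I actually ran: the file names and the column layout are my reading of the GHCN v2 documentation, so check them against the README before relying on it.

def parse_v2_mean(path):
    # Parse a GHCN v2 mean-temperature file into
    # {station_id: {year: annual mean, degrees C}}.
    # Assumed layout: a 12-character station id, a 4-digit year, then
    # twelve 5-character monthly values in tenths of a degree C,
    # with -9999 marking missing months.
    data = {}
    with open(path) as f:
        for line in f:
            station, year = line[:12], int(line[12:16])
            months = [int(line[16 + 5 * i:21 + 5 * i]) for i in range(12)]
            if -9999 in months:          # skip incomplete years
                continue
            data.setdefault(station, {})[year] = sum(months) / 120.0
    return data

def downward_adjustments(raw, adjusted, early=1970, late=2009):
    # Stations whose homogenisation adjustment (adjusted minus raw)
    # is smaller in `late` than in `early`: the mirror image of the
    # cherry Eschenbach picked.
    hits = []
    for station in raw.keys() & adjusted.keys():
        years = raw[station].keys() & adjusted[station].keys()
        if early in years and late in years:
            delta_early = adjusted[station][early] - raw[station][early]
            delta_late = adjusted[station][late] - raw[station][late]
            if delta_late < delta_early:
                hits.append(station)
    return hits

# e.g. downward_adjustments(parse_v2_mean("v2.mean"),
#                           parse_v2_mean("v2.mean_adj"))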

Here’s how the temperatures in Asheville, North Carolina, were homogenised:

I can’t explain the homogenisation in Asheville. Would it be reasonable to suppose that an AGW denialist hacked into the GHCN website and modified the data? Well, no. Whereas Eschenbach was excited by a 0.7°C increase spread out over 20 years, here we have a 0.375°C decrease in a single year. Note how this causes the homogenised line (red) to drop away from the raw data (blue).

What about Pohang, in South Korea?

These major drops cut off what had looked like a warming trend. Certainly Pohang was not homogenised to deliberately create an artificial warming trend! In two bursts within just 8 years (and at the start of the data set), temperatures are adjusted upwards by a startling 0.65°C!

And then there’s Cairns Airport. What the heck is going on here?

Is homogenisation being used to hide global warming all over the world?

Why is data being homogenised like this? Well, it’s unfortunate, but most of these temperature stations were not set up to track climate. As such, they periodically got moved, had their instruments changed, or were sited in places convenient for daily weather reporting rather than for tracking world climate, and so forth. This means that before trying to use them to study climate, it’s worthwhile controlling for these factors.

But, seriously, there are a whole bunch of factors, and between them they could cause adjustments – both up and down. And if you look at enough sites, you’ll no doubt find examples of both. So if someone is showing you one site’s homogenisation and asking you to draw conclusions of fraud on the basis of it – why that site? Why not any of the thousands of others? Are all the sites homogenised in the same direction? Or is it simply more likely that you are listening to someone who is cherry-picking, finding any anomaly and then wrapping it in a conspiracy and their own absolute certainty that global warming is a lie and that the scientists who give evidence in favour of it are liars and frauds?

February 23, 2010 at 3:28 am Comments (2)

Is something rotten in Alaska?

Via an open thread on Deltoid, I discovered a link to this article by E.M. Smith (reposted on Watts Up With That), looking at an odd map he’d managed to generate using a temperature anomaly map generator on the NASA GISS site. The map generator’s pretty fun to play with.

A map of temperature anomalies can be generated by entering a base period and a time period. As I understand it, the tool then takes the difference in mean temperature between the time period and the base period and draws that on a map. Pretty simple, right?

Now, if you use the same base period as time period, you’d expect that the map’s anomalies would all just be zero, right? Well, almost. The default settings exclude ocean data, and E.M. Smith does not change that. Without the ocean data, and with a 250 km smoothing radius, you actually get the following map:
Temperature anomalies, time period 1951-1980, base period 1951-1980

What’s going on? Well, the short answer is that in the GHCN data, 9999 is used as a flag value to designate missing data (see the help file at the bottom of a map page, “Missing data are marked as 9999.”). As there’s no ocean data, 9999 appears there. Now, probably those should be greyed out. In maps that have a different base period and time period, grey is used to designate regions that don’t have any data.
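In other words, the fix is the standard one for any gridded product: mask the flag value before you colour the map. A quick sketch (the 9999 flag comes from the GISS help text quoted above; the little demo grid is made up):

import numpy as np

MISSING = 9999.0   # flag value used for cells with no data

def mask_missing(grid):
    # Mask flagged cells so a plotting routine colours them grey
    # rather than treating 9999 as a real (and very red) anomaly.
    return np.ma.masked_values(np.asarray(grid, dtype=float), MISSING)

# Identical base period and time period: land cells are ~0, while
# ocean cells (with ocean data switched off) carry the flag.
demo = [[0.00, 0.01, MISSING],
        [-0.02, MISSING, MISSING]]
print(mask_missing(demo).mean())   # about -0.003, ignoring the flags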

However, the simple fact that this was almost certainly just displaying a flag value didn’t stop the conspiracy! Oh no! Presumably, those 9999 values are leaking into the real graphs and causing all the red values in a map like this one:
Temperature anomalies, 2009, base period 1951-1980

Nice idea. So I ran with it. Don’t know how long this GISS map stays up on their site, but I just did 2009 vs 2008 baseline. The “red” runs up to 10.3 C on the key.

http://data.giss.nasa.gov/work/gistemp/NMAPS/tmp_GHCN_GISS_250km_Anom12_2009_2009_2008_2008/GHCN_GISS_250km_Anom12_2009_2009_2008_2008.gif

So unless we’ve got a 10 C + heat wave compared to last year, well, I think it’s a bug

So I think this points to the ‘bug’ getting into the ‘non-NULL’ maps. Unless, of course, folks want to explain how it is 10 C or so “hotter” in Alaska, Greenland, and even Iran this year: what ought to be record setting hot compared to 1998…

I’ll leave it for others to dig up the actual Dec 1998 vs 2009 thermometer readings and check the details. I’ve got other things taking my time right now. So this is just a “DIg Here” from me at this point.

It’s not the color red that’s the big issue, it is the 9999 C attached to that color… Something is just computing nutty values and running with them.

BTW, the “missing region” flag color is supposed to be grey…

Now, this is something of a leap: how likely is it that spurious values over the ocean would magically manifest themselves as warming in Alaska or Greenland – let alone Iran – rather than, oh, say, over the oceans? Never mind the implication that a modest change in temperatures between two particular years is somehow especially unlikely. But, even though this claim is extremely unlikely, let’s do a little investigating.

So, the question: was the temperature in Alaska during December 2009 really 4-12.3°C warmer than 1998, or are those 9999s leaking through? This is what NASA’s GISS temperature map shows:

Happily, this is an easy question to answer if you actually look at the data. I downloaded the unadjusted mean GHCN data for the various sites in Alaska (the headers are 42570398000-425704820011). I picked out all the sites which had data for 2009 (I’ve also uploaded the raw data for 1998, 2008, 2009 for these sites so you can look at them if you like). Note that the temperature values are in tenths of a degree.

Header        Location        Dec 1998  Dec 2009  Difference
425700260000  Barrow              -186      -196         -10
425701330000  Kotzebue            -173      -121         +52
425702000000  Nome                -147       -94         +53
425702310006  McGrath             -222      -182         +40
425702610000  Fairbanks           -209      -198         +11
425702730000  Anchorage           -100       -67         +33
425703080001  St Paul              -22       -18          +4
425703160000  Cold Bay             -22        14         +36
425703260000  King Salmon         -125       -46         +79
425703610000  Yakutat              -30       -22          +8
425703980000  Annette Island        19        12          -7

I’ve helpfully highlighted the differences for the sites in the part of Alaska that is particularly red on the map. There appears to be some sort of correlation. The average temperature difference between Dec 1998 and Dec 2009 at those sites is 4.9°C. The darkest shade of red represents an anomaly of between 4 and 12.3°C, so Alaska is properly represented. The average, Alaska-wide, was 2.8°C warmer.
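If you want to play with these numbers yourself, converting the table into December-to-December changes in degrees is a one-liner; a sketch using the values transcribed above:

# December means in tenths of a degree, transcribed from the GHCN
# unadjusted data in the table above.
dec_1998 = {"Barrow": -186, "Kotzebue": -173, "Nome": -147,
            "McGrath": -222, "Fairbanks": -209, "Anchorage": -100,
            "St Paul": -22, "Cold Bay": -22, "King Salmon": -125,
            "Yakutat": -30, "Annette Island": 19}
dec_2009 = {"Barrow": -196, "Kotzebue": -121, "Nome": -94,
            "McGrath": -182, "Fairbanks": -198, "Anchorage": -67,
            "St Paul": -18, "Cold Bay": 14, "King Salmon": -46,
            "Yakutat": -22, "Annette Island": 12}

diffs_c = {site: (dec_2009[site] - dec_1998[site]) / 10.0
           for site in dec_1998}
for site, diff in sorted(diffs_c.items(), key=lambda kv: -kv[1]):
    print(f"{site:15s} {diff:+.1f} C")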

It’s not just me. A commenter on Watts Up With That, carrot eater, points out:

First station I tried: Goose, Newfoundland.

http://data.giss.nasa.gov/work/gistemp/STATIONS//tmp.403718160005.2.1/station.txt

is 8.6 C warmer in Dec 09 than Dec 08.

Let’s look for other stations in red splotches in Dec 09, compared to Dec 08

Egesdesminde, Greenland 5.1 C
Fort Chimo, Canada. 10 C

Looks like I found your 10 C difference between Dec 08 and Dec 09. Third station I tried. Hence, the range of the colorbar.

Let’s see what else we find.
Danmarkshavn, Greenland. 2.7 C
Godthab Nuuk: 5 C
Inukjuak Quebec: 6.6 C
Coral Harbor: 8.6 C

So I’ve found a bunch of stations that are between 5 and 10 C warmer in Dec 09 compared to Dec 08.

This is a fun game, after all. Let’s say I want to find the biggest difference between Dec 09 and Dec 98. There are lots of red splotches on the map, and the colorbar has poor resolution. So I’ll download the gridded data and have a look.

Scrolling past all the 9999s for missing data, and I find that I should be looking at some islands north of Russia. I try some station called Gmo Im.E.T, and I get:

Dec 09 is 12.3 C warmer than Dec 98. First try.

So, yeah, this “bug” turned out to just be a weather fluctuation. Colour me surprised.

February 2, 2010 at 11:17 pm Comments (4)

Is the party over?

#include <stdio.h>

int main(void) {
    printf("METHINKS IT IS LIKE A WEASEL\n");
    return 0;
}

I’ve spent most of this year doing an honours degree studying genetic algorithms. As such, I’ve found reading the best and brightest ID proponents’ attempts to understand the genetic-algorithm equivalent of a “Hello World” program – a simple string evolver, with no crossover and only one parent per generation – to be hilarious.

Anyway, it seems that they’ve finally managed to come up with a version of the program that doesn’t consist of a partitioned search. It mutates a single character per offspring, rather than giving each locus an independent probability of being mutated, but that’s a somewhat smaller flaw than most cdesign proponentist attempts to implement the Weasel program.
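For reference, here’s what a non-partitioned Weasel actually looks like: a sketch of the standard algorithm (single parent, a brood of mutated offspring each generation, every locus free to mutate), using the same population size and mutation rate as my C# version below. It’s an illustration, not a transcript of their code or of mine.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 200        # offspring per generation
MUTATION_RATE = 0.05  # independent chance that each locus mutates

def fitness(candidate):
    # Number of characters matching the target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Every locus mutates independently; nothing is ever "locked in",
    # which is what distinguishes this from a partitioned search.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else ch for ch in parent)

def weasel():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(POP_SIZE)]
        parent = max(offspring, key=fitness)   # keep the fittest child
        generations += 1
    return generations

print("Reached the target in", weasel(), "generations")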

And then, GilDodgen came out with this:

No search is required, because the solution has been provided in advance. These programs are just hideously inefficient means of printing out what could have been printed out when the program launched. The information for the solution was explicitly supplied by the programmer.

Well, duh. That’s because it was a toy program, purely written to illustrate the difference between pure random chance and the accumulation of small changes. You may as well say that the entire software development industry is a waste of time and money because it would be easier to just create a file containing the string “Hello world” and print it to the terminal with cat.

The Weasel program does not attempt to show that evolution can produce novel information. It merely demonstrates the difference between cumulative selection and pure chance. If you want a computer simulation to demonstrate the power of evolution to produce novel structures, you could read any number of papers in which genetic algorithms or genetic programming have been used to find novel solutions to real-world problems. Or, heck, even read the rest of chapter three of The Blind Watchmaker (the chapter which introduces the Weasel program), which is mostly about the far more interesting Biomorphs program.

Or, if you prefer, Gil’s conclusion:

The Darwinian mechanism as an explanation for all of life is simply not credible. Most people have enough sense to recognize this, which is why the consensus “scientists” — with all their prestige, academic credentials, and incestuous self-congratulation — are having such a hard time convincing people that they have it all figured out, when they obviously don’t.

If you like, you can download my version of Weasel. It’s written in C# and you’ll need at least .NET 2.0 to run it. Source and binaries are included in that download. It uses a population size of 200 and a mutation rate of 0.05.

September 20, 2009 at 8:41 pm Comments (0)

C is for creationist, that’s good enough for Denyse

Fishing lure
Less than a month after sharing with us all an HIV denialist’s take on Darwin and evolution, Denyse O’Leary continues to spelunk further and further into the depths of evolution denialism.

Now she’s interviewing Adnan Oktar, a.k.a. Harun “fishing lure” Yahya. I think his responses basically speak for themselves.

What’s next? Updates on Ray Comfort’s search for a crocoduck? The results of Chuck Missler’s ongoing abiogenesis experiments? Reporting on how the atheist conspiracy that rules the world (but often struggles to post an ad on the side of buses) is suppressing Kent Hovind’s academic freedom? Is there any creationist ridiculous enough that Denyse O’Leary wouldn’t credulously promote them?


May 14, 2009 at 11:41 pm Comments (0)