John on November 25, 2009 at 9:54 am
This piece at American Thinker is a must read. Author Marc Sheppard digs into some of the computer code used to produce global warming trends. It’s not very reassuring stuff:
In two other programs, briffa_Sep98_d.pro and briffa_Sep98_e.pro, the “correction” is bolder by far. The programmer (Keith Briffa?) titled the “adjustment” routine “Apply a VERY ARTIFICAL correction for decline!!” [sic]. And he or she wasn’t kidding. Now, IDL is not a native language of mine, but its syntax is similar enough to others I’m familiar with, so please bear with me while I get a tad techie on you.
Here’s the “fudge factor” (notice the brash SOB actually called it that in his own code comment):
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor
This code, together with the yrloc assignment that precedes it in the program, establishes a 20-element array (yrloc) comprising the year 1400 (the base year, though it’s unclear why it’s needed here) and 19 years from 1904 to 1994 in half-decade increments. The corresponding “fudge factor” (from the valadj array) is then applied to each interval. As you can see, not only are temperatures biased upward later in the century (certainly prior to 1960), but a few mid-century intervals are biased slightly lower. That, coupled with the post-1930 restatement we encountered earlier, would imply that in addition to an embarrassing false decline experienced with their MXD data after 1960 (or earlier), CRU’s “divergence problem” also includes a minor false incline after 1930.
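For readers who don’t read IDL, here’s a minimal Python sketch of what the quoted code does. The valadj values are copied from the line above; the construction of yrloc is reconstructed from the prose description (1400 plus 1904–1994 in 5-year steps), not from the leaked file itself, so treat it as an illustration rather than the original source.

```python
# yrloc: 20 elements -- the base year 1400, then 1904..1994 in 5-year steps
yrloc = [1400] + [1904 + 5 * i for i in range(19)]

# valadj: the per-interval "fudge factor" from the quoted IDL, scaled by 0.75
raw = [0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
       0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]
valadj = [v * 0.75 for v in raw]

# Pair each interval's start year with the adjustment applied to it
adjustments = dict(zip(yrloc, valadj))
```

Printing `adjustments` makes the pattern the author describes easy to see: small negative nudges around 1929–1949, then steadily growing positive offsets from 1954 onward, topping out at 2.6 × 0.75 = 1.95 for 1974–1994.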
Those are some very specific fudge factors, and they pretty clearly seem aimed at showing warming. Where did these numbers come from? Maybe there’s an explanation, but on the face of it this looks like, well, cooking the books.
Related: Ed Driscoll reminds us that the NY Times had no problem linking to Sarah Palin’s hacked e-mails.
Category: Climate Change & Environment