Guest essay by Pat Frank
For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice to each of two leading climate journals and rejected every time, for a total of four rejections, all on the advice of nine of the ten reviewers. More on that below.
The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.
Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.
… The uncertainty is so large because the ±4 W m-2 annual long wave cloud forcing error is ±114× larger than the 0.035 W m-2 annual average forcing increase from GHG emissions since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.
It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions or tell us anything about future air temperatures. It’s impossible that climate models can ever have resolved an anthropogenic greenhouse signal; not now nor at any time in the past.
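As a rough sketch of how an envelope of that size arises: if each annual projection step carries a constant temperature uncertainty, sequential steps accumulate in root-sum-square. The ±1.4 C per-step value below is a hypothetical figure, chosen only so the result lands near the ±14 C / ±18 C magnitudes quoted above; it is not taken from the analysis itself.

```python
import math

def rss_uncertainty(per_step_u, n_steps):
    """Root-sum-square accumulation of a constant per-step uncertainty
    over n_steps sequential projection steps: u * sqrt(n_steps)."""
    return per_step_u * math.sqrt(n_steps)

# Hypothetical per-annual-step uncertainty, for illustration only.
u_step = 1.4  # ±C per year

print(f"after 100 years: ±{rss_uncertainty(u_step, 100):.1f} C")
print(f"after 150 years: ±{rss_uncertainty(u_step, 150):.1f} C")
```

With that assumed per-step value, the envelope reaches ±14 C at 100 years and about ±17 C at 150 years, in the neighborhood of the figures quoted above.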
Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.
And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.
Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.
Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything.
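The freshman version of the idea can be sketched for a product of two measured quantities, using the standard first-order propagation rule; the numbers here are made up for illustration:

```python
import math

def product_with_uncertainty(a, u_a, b, u_b):
    """First-order error propagation for q = a * b:
    relative uncertainties add in quadrature."""
    q = a * b
    u_q = abs(q) * math.sqrt((u_a / a) ** 2 + (u_b / b) ** 2)
    return q, u_q

# Two measurements, each 10% uncertain:
q, u_q = product_with_uncertainty(2.0, 0.2, 3.0, 0.3)
# q = 6.0, u_q is about 0.85: the result is meaningful, since u_q << q.

# The same calculation with 80% uncertain inputs:
q2, u_q2 = product_with_uncertainty(2.0, 1.6, 3.0, 2.4)
# now u_q2 is about 6.8, larger than q2 = 6.0: the result tells us nothing.
```

The second case is the point of the sentence above: once the propagated error exceeds the result, the calculation carries no information.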
… The wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.
The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.
That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful offsetting errors.
That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.
There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.
Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they understand neither that fact nor why it matters.
Now suppose Rowlands et al. tuned the parameters of the HadCM3L model so that it precisely reproduced the observed air temperature line.
Would it mean the HadCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?
Would it mean the HadCM3L was suddenly able to reproduce the correct underlying physics?
… climate modelers:
- neither respect nor understand the distinction between accuracy and precision.
- are entirely ignorant of propagated error.
- think the ± bars of propagated error mean the model itself is oscillating.
- have no understanding of physical error.
- have no understanding of the importance or meaning of a unique result.
No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.
And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.
… The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, and they are not doing science. They have no idea how to evaluate the physical validity of their own models.
They should be nowhere near important discussions or decisions concerning science-based social or civil policies.
First, Anthony, thank you very much for posting my essay about climate modelers. I am grateful for the opportunity.
Next, Slywolfe, if you understand the first figure of the essay, or the fourth, or the linked poster, you’ll know that climate models can’t make any predictions at all and so, ipso facto, can not “do a good job.” Unless making not-predictions is their job.
Despite the credit you give him, Dana doesn’t know what he’s talking about. And, as regards climate futures, neither does anyone else.
Thanks for generating a very worthwhile discussion of the GCM failures, and for allowing WUWT readers a “peer review” of the sorry state of Climate Science manuscript peer-reviewing. Bob Tisdale and Christopher Monckton (as you may be aware) regularly update WUWT readers on GCM external failures. Your elucidation of the internal reasons for those GCM failures (along with RGBatDuke, Ferdburple, Jimbo, and many others) is very much appreciated.
I understood most of what you presented and took away a very important refresher lesson on the importance of a “unique result” in any science-based model. I also remember that, some months back, someone at WUWT commented that the GCM initializations used a single value for the enthalpy of evaporation of water at 4 °C, instead of the value at 26 °C that applies to most tropical waters. They noted that this enthalpy error would propagate through the hundreds of iterations of the GCMs, compounding until nothing was left but essentially a random noise signal. That made me realize that the GCMs of the IPCC are total crap, built with circular logic to deliver a politically desired output.
Joel O’Bryan, PhD
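The compounding described in the comment above can be illustrated with a toy calculation. The 2% per-iteration bias is a hypothetical figure assumed purely for demonstration; it comes from no actual GCM.

```python
# Illustrative only: a hypothetical constant 2% per-iteration
# multiplicative bias, compounded over many model iterations.
bias_per_step = 1.02

for n in (10, 100, 300):
    growth = bias_per_step ** n
    print(f"after {n:3d} iterations the bias has grown by a factor of {growth:.1f}")
```

Even at this modest assumed rate, the bias grows by a factor of roughly 7 over 100 iterations and several hundred over 300, which is the sense in which an uncorrected per-step error can come to dominate a long simulation.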
Thanks, Joel. I’d never have thought of that water enthalpy error. One expects if all the physical errors of climate models were documented, their propagation would produce a centennial uncertainty envelope of approximately the size of North America.
I’ll make it ultra-simple for you: Predicting the future (anything) is very difficult for humans. One might as well flip a coin.
The IPCC Report Summary is leftist personal opinions formatted to look like a real scientific study.
As you can see from the formerly beloved Mann Hockey Stick chart, ‘predicting the past’ is just as difficult for the “climate astrologers” as predicting the future.
It’s a climate change cult: a secular religion for people who reject traditional religions.
The coming global warming catastrophe scam is 99% politics and 1% science.
You can not debate a cult using data, logic and facts any more than you can debate the existence of god with a Baptist.
The long list of environmental boogeymen started with DDT in the 1960s, and as each new boogeyman lost its ability to scare people, a new boogeyman was created, and the old one was immediately forgotten.
If we are lucky, and it seems that we have been for two years so far, it will remain cold enough that the average person begins to doubt the coming global warming catastrophe predictions — thank you, Mr. Sun and Mrs. Cosmic Rays, for riling up the leftists so they reveal their true bad character with harsh attacks on scientists who do not deserve them.
Richard, I don’t disagree with your general point.
But consider that Maxwell’s equations do a darn good job of predicting the future behavior of emitted electromagnetic waves. And Newton’s theory does a good job of predicting the future positions of the planets — at least out to a billion years or so. In my field, QM does a pretty good job of predicting the details of x-ray absorption spectra before any measurement.
So, physical science has a good array of predictive theories. Climate modelers have managed to convince people that they can predict future climate to high resolution. Their claim is supported only by the abandonment of standard scientific practice. Abandonment not just in climatology, but by august bodies such as the Royal Society and the American Physical Society.
In a way the modelers themselves are innocents, because my experience shows they’re not trained physical scientists at all. They couldn’t have abandoned a method they never knew or understood. The true fault lies with the physical scientists, especially the APS, who let climate modelers get away with their ignorance and scientific incompetence.
Stop wasting your time with “climate journals”. They continue their gate-keeping while your message is being missed in the climate policy debate. Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?
“Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?”
Try finding one … the cancer is well established.
A scientist is a person with common sense who is very skeptical about every conclusion (hypothesis) presented by scientists, including his own conclusions. A degree is not relevant — the quality of his scientific work determines whether he deserves to be called a “scientist”.
Predicting the future with computer games has nothing to do with science.
A scientist would never focus on only one variable, CO2 (probably a very minor variable, with no correlation with average temperature), when there are dozens of variables affecting Earth’s climate, and then further focus only on manmade CO2, for political reasons (only about 3% of all atmospheric CO2 can be blamed on humans). Yet that is the focus of climate modelers, along with getting more government grants.
But Big Government, which wants a “crisis” that must be “solved” by increasing government power over the private sector, could not possibly influence scientists receiving government grants and/or salaries. And of course that funding NEVER has to be disclosed in an article, white paper, or other report by any scientist on the goobermint dole.
Pat Frank … Climate models are like engineering models. They can be made to describe the behavior of elements of the climate within the time bounds where tuning data exist. However, they’re being used to project behavior well outside those bounds. The claim is then made that they do this accurately, and that’s the problem. … The ± uncertainties are not temperatures. They are an ignorance width. When they become as large as ±15 C, they just mean that the projection can’t tell us anything at all about the state of the future climate.
… When there is an average annual ±4 W m-2 error in long wave cloud forcing, it means the available energy is not correctly partitioned among the climate sub-states.
This means that one is not simulating the correct climate, for that total energy state. That incorrect climate is then projected forward, but projected incorrectly relative to its particular and incorrect energy sub-states because the error derives from theory-bias.
So an already incorrect climate state is further projected incorrectly into the next step.
The uncertainty envelope describes the increasing lack of knowledge one has concerning the position of the simulated climate in its phase-space relative to the position of the physically correct climate. That lack of knowledge becomes worse and worse as the number of simulation steps increases, because of the unceasing injection and projection of error.
The uncertainty grows without bound, because it is not a physical quantity. It is an ignorance width. When the width becomes very large, it means the simulation no longer has any knowable information about the physically true climate state.
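A toy Monte Carlo (a random walk, not a climate model) makes the ignorance-width idea concrete: give each simulation step a fresh random error, and the ensemble spread grows like the square root of the number of steps, which is exactly the root-sum-square behavior of a propagated uncertainty. The unit step error is an arbitrary choice for the demonstration.

```python
import random
import statistics

def toy_projection(n_steps, step_error_sd, seed):
    """One toy run: each step adds a fresh random error, so errors
    injected at every step compound through the whole projection."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_steps):
        total += rng.gauss(0.0, step_error_sd)
    return total

# Spread of a 2000-member ensemble after 100 steps of unit error:
runs = [toy_projection(100, 1.0, seed=i) for i in range(2000)]
spread = statistics.stdev(runs)
print(f"ensemble spread after 100 steps: {spread:.2f} (sqrt(100) = 10)")
```

No single run in the ensemble oscillates by ±10 units; the spread is a statement about how little the ensemble knows about the end state, not about any physical excursion, which is the distinction the paragraph above draws.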
Such results are not nonsensical. They are cautionary; or should be.