A Few Things Ill Considered

A layman's take on the science of Global Warming featuring a guide on How to Talk to a Climate Sceptic.

Friday, March 10, 2006


The Models are Unproven

(Part of the How to Talk to a Climate Sceptic guide)

This article has moved to ScienceBlogs

It has also been updated and this page is still here only to preserve the original comment thread. Please visit A Few Things Ill Considered there. You may also like to view Painting With Water, Coby Beck's original fine art photography.


46 Comments:

  • At March 16, 2006 10:37 PM, Anonymous Anonymous said…

    I think your whole blog is sort of dishonest. You pick some objection, refine it to the level of absurdity, and then "successfully" rebut your own absurd formulation. I don't believe that many reasonable people would object to the use of computer-aided experiments, especially the ones that crunch millions of state variables around a realistic globe. The problem is that it looks like many modelers have a wrong understanding of what kind of system they are dealing with. For example, they have very loose metrics for model validation. I've seen a bunch of isotherms and hot spots they claim to be in good agreement with historical data. I would ask, by what measure?

    Actually, I have a contribution to make; maybe a couple of PhDs can be churned out of this. You are asking, "how can we hope to test a 100 year temperature projection?". As I understand it, the current method is to take some scarce data from, say, 1890, run calculations forward for 100 years, and compare the result with the complete, well-known (sort of) data for 1990.

    Here is the idea: (a) the GCMs are dynamical systems built mostly from first principles, Navier-Stokes equations, radiation, diffusion, and such; (b) it seems that they run into a sensitivity-to-initial-conditions issue; (c) dynamical systems are reversible in time. Therefore, instead of starting from inaccurate initial data from 1890 or whatever, one needs to start from today's accurate data and calculate the model in REVERSE (provided that the computer algorithms are properly implemented). Given much more accurate initial data, it is more likely that climate conditions will be restored more accurately back in time. If not, then this approach should be able to establish limits for climate forecast time.

     
  • At March 16, 2006 11:45 PM, Blogger coby said…

    Alexi,

    There are surely more well-founded questions than the one I posed at the top of this article, legitimate questions one might raise about the models; there are lots of real shortcomings and areas of controversy. I make no pretense of intending to address those kinds of issues. The objections I have listed and dealt with on this blog are the common ones that come from people who just plain and simple do not know what they are talking about, but don't mind talking about it anyway. I want to deal with the stuff people like Steve Milloy and Michael Crichton spew, not Richard Lindzen. You will not hear me try to dissect the statistical arguments raised by McIntyre and McKitrick. Real Climate is the place to go for the serious science, though I'm sure they would cringe to hear me refer to M&M that way!

    I do hear people all the time state these objections practically verbatim to the way I have phrased them, sometimes sincerely, sometimes in willful ignorance. I read these objections in the editorial pages. I hear them presented in news-magazine-style television. You don't need mathematics to address that kind of inaccuracy. I am not dishonestly weakening them in order to shoot them down, that is their nature. I think I will say all the above in a dedicated article.

    Now I know you suggested the "Natural CO2 sources Dwarf Human Emissions" topic and you probably do have a more intelligent angle on that than the one I argued against (though I remain unconvinced), but you are not the first person I have heard use that, and believe me no one else is thinking about strange attractors when they say it. It is used as pure and simple FUD: something that sounds plausible to an ignorant bystander, revealing a little more of a very complicated system with the sole purpose of insinuating that things are being hidden. It is exactly the same tactic behind the "H2O is a more powerful GHG" line. It is technically true and it is uncommon knowledge, but it by no stretch means what the obfuscators try to insinuate.

    We probably should have had our exchange on the newsgroup, your arguments are deeper (though you have not at all convinced me you have a point) than the level of discourse required to deal with the 99% shallow denialism which is my target. Moving on...

    Now, you again make the assertion that the modelers don't understand the kind of system they are dealing with, but I have yet to see you support that assertion in any non-abstract, or non-philosophical way. I think it is entirely reasonable to make 100 year predictions, probably even 1000 year predictions with better understanding, of the climate system. I would be much more open to your description of the system on scales of hundreds of thousands of years.

    Let me make an analogy, about the weather. We both agree that weather is chaotic. But it is still predictable to some degree on some timeframes. I would say it is 100% predictable on a short enough timeframe, like 10 minutes, even hours depending on the desired granularity of prediction. It is totally unpredictable on a timeframe of several weeks. Prove to me that climate is unpredictable on a scale of a few centuries (ignoring huge volcanic eruptions or unforeseeable solar fluxes). I won't argue about 100K yrs; I don't care about 100K yrs, it is irrelevant to this issue.

    As for running the models backwards, I have no idea, it is an interesting notion. As a computer programmer I can tell you it is a 100% rewrite though. Don't forget that whichever direction you are going, you will never have better data about the external forcings like volcanoes and solar fluxes, methane burps and meltwater pulses. And all that is required before you should expect good success with modeling the past.

     
  • At March 17, 2006 7:30 AM, Anonymous Anonymous said…

    With respect to the models: The climate system is not time-reversible due to the presence of diffusion. For instance you couldn't start now and post-dict the cooling that occurred from Pinatubo in 1991. The information from that event has dissipated. You are also incorrect in assuming that there is a sensitivity to initial conditions in the climate models. The 'climate' in these cases is defined over an ensemble or equilibrium run that specifically integrates over the chaotic 'weather'. The real world climate may or may not be chaotic (no good evidence either way), but there is no evidence that current climate models are. Note that this is a statement about the boundedness of the individual trajectories, not a statement about the individual trajectories themselves (which are chaotic).
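
    To see the diffusion point concretely, here is a toy sketch (a 1-D heat equation in NumPy standing in for a dissipative model; the numbers are arbitrary): run forward, the initial anomaly is smoothed away; step the same scheme backward and tiny noise is amplified until the solution blows up, which is the sense in which a dissipative system cannot simply be integrated in reverse to recover a past state.

      import numpy as np

      # Toy 1-D heat equation on a periodic domain, explicit time stepping.
      n, kappa, dx, dt = 200, 1.0, 1.0, 0.2          # dt satisfies dt <= dx**2 / (2 * kappa)
      x = np.arange(n)
      T0 = np.exp(-0.5 * ((x - n / 2) / 10.0) ** 2)  # initial warm anomaly

      def step(T, dt):
          lap = np.roll(T, 1) - 2 * T + np.roll(T, -1)
          return T + kappa * dt / dx ** 2 * lap

      # Forward in time: the anomaly dissipates (information is lost).
      T = T0.copy()
      for _ in range(500):
          T = step(T, dt)
      T_end = T + 1e-12 * np.random.randn(n)         # tiny 'measurement' noise

      # Backward in time: try to recover T0 by stepping with -dt.
      T = T_end.copy()
      for k in range(500):
          T = step(T, -dt)
          if np.abs(T).max() > 1e6:                  # grid-scale noise explodes
              print(f"backward integration blew up after {k + 1} steps")
              break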

     
  • At March 18, 2006 12:41 PM, Anonymous Anonymous said…

    Perturbations to climate are non-reversible, but the trajectory on the attractor is. If the effects of Pinatubo have dissipated with no lasting effect, then other perturbations must behave similarly (AGW anyone?). If you expect some event (or parameter shift) to have a significant impact out of proportion to its size, you have to consider bifurcations and corresponding "catastrophic" jumps, in which case the sensitivity to initial conditions is the name of the game. Therefore, the concept of insensitivity to initial conditions is in contradiction with expectations of global climate shifts.

    I think that current climate models (especially the fully-resolved ones, with minimal parametrisations) are chaotic, but researchers make them too damped, falsely attributing their inherent self-sustained instability to numerical artifacts, and discard those solutions and parameter sets when the instability occurs.

     
  • At March 18, 2006 7:05 PM, Anonymous Anonymous said…


    ...there is no evidence that current climate models are [chaotic]. Note that this is a statement about the boundedness of the individual trajectories, not a statement about the individual trajectories themselves (which are chaotic).


    I don't understand this statement. Whether the climate system is chaotic or not is independent of the boundedness of the individual trajectories. A system does not have to be unbounded (or bounded) to be chaotic.

    I find it interesting how often supposedly competent climatologists get their basic math or physics wrong.

     
  • At March 19, 2006 10:10 AM, Anonymous Anonymous said…

    I wasn't using 'boundedness' in a technical sense, but the point I was trying to make is this:

    The Lorenz system is chaotic for individual trajectories, but the structure of the ensemble of all trajectories is stable -i.e. you always get the same 'butterfly' pattern. Small changes to the parameters result in shifts in the structure of the 'butterfly' that are empirically determinable. The 'climate' in the models is equivalent to the 'butterfly' pattern, not the individual trajectories.
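
    To make that concrete, here is a toy sketch (the Lorenz '63 equations with a crude Euler step, standing in for the chaotic 'weather'; not a climate model): trajectories started from slightly different points separate rapidly, yet a long-time statistic comes out essentially the same for each of them.

      import numpy as np

      # Lorenz '63 integrated with a simple Euler step (adequate for illustration).
      def run_mean_z(ic, n_steps=100_000, discard=10_000, dt=0.01,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = ic
          total, count = 0.0, 0
          for i in range(n_steps):
              x, y, z = (x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z))
              if i >= discard:
                  total += z
                  count += 1
          return total / count

      rng = np.random.default_rng(0)
      for _ in range(5):
          ic = np.array([1.0, 1.0, 20.0]) + 0.1 * rng.standard_normal(3)
          print("time-mean z:", round(run_mean_z(ic), 2))
      # The trajectories themselves diverge quickly, but the time means agree to
      # within sampling error, the analogue of the stable 'butterfly' pattern.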

     
  • At March 19, 2006 3:57 PM, Anonymous Anonymous said…

    Presumably the climate models have a bunch of free parameters, falling into two categories: physical constants and boundary conditions such as initial temperature distribution, wind velocity, etc. I don't know the field but I presume that when you speak of an ensemble you are talking about varying the boundary conditions, not the physical constants.

    So you pick a set of boundary conditions, determine the trajectory of the model under those boundary conditions, and repeat. You then average over all trajectories.

    So that raises an obvious question: if even one of those trajectories shows anomalous behaviour (eg global cooling instead of global warming), isn't that something that needs explanation and not just averaging away? After all, if a small perturbation of the boundary conditions can change the model's predictions of something as fundamental as global cooling or warming then that suggests that climate is indeed unpredictable (not just weather).

    Then there's the physical constants of the model. Presumably they are only known to a certain accuracy. How do the model's predictions vary as the physical constants are varied? Again, even if a single, apparently reasonable setting of the physical constants generates a trajectory with opposite behaviour to what one would expect, isn't that something that requires explanation, and not just averaging away?

    Finally, there's the "unknown unknowns" (to borrow vernacular from Donald Rumsfeld). The things that we don't even know need to be modelled. Like the recent discovery of tree respiration weirdness. What's the chance that an unknown unknown would make the models predict the opposite behaviour than what they do presently, even if it is just for a single trajectory?

     
  • At March 20, 2006 2:36 PM, Anonymous Anonymous said…

    Let's be clear what it is we are talking about. Basically we are discussing the climate sensitivity to increased CO2 (or increased radiative forcing in general). All models show this is positive - and that is independent of the model formulation or the individual trajectory. It is also shown to be positive from paleo-climate analysis, observations of the 20th Century change, the response to volcanoes - all independently of the models. Theoretically, it is obvious why blocking the escape to space of long-wave radiation must warm the surface, in the same way that damming a stream must lead to an upstream increase in stream level. The only question is how large an effect it will be. Models (of all different sorts) say 2.1 to 4.4 deg C for a doubling of CO2, observations say 1.9 to 4.9 with a best guess of around 3 deg C. (see this recent paper: http://www.jamstec.go.jp/frcgc/research/d5/jdannan/GRL_sensitivity.pdf)

    This sensitivity is what you get for a specific radiative forcing change and it doesn't much matter how that forcing happened. So uncertainties in the methane or carbon budget don't come into the calculation at all. There may be unknown unknowns, but as Jule Charney said way back in 1979 - "We have tried but have been unable to find any overlooked or underestimated physical effects" that could reduce the warming. Nothing so far has changed that conclusion.

     
  • At March 20, 2006 7:36 PM, Anonymous Anonymous said…

    If I understand you correctly, you are saying that all trajectories show global warming with increased CO2, regardless of boundary conditions and regardless of physical constant settings (provided the boundary conditions and constants are reasonable of course).

    That doesn't seem consistent with the plots in Figure 1 of this paper

     
  • At March 21, 2006 8:38 AM, Anonymous Anonymous said…

    While climateprediction.net has many virtues, the screening procedure for which ensemble members were considered suitable in that first paper was not as rigorous as it could be. One issue was that many of the spin-up ensemble were not in radiative balance at the start of the 2xCO2 run. This was compensated for using an unphysical ocean heat sink/source. In areas where the ocean is a cooling factor (such as the upwelling region of the Eastern Pacific) an extra 'unphysical' cooling can lead to very bizarre results (ice formation at the equator for instance). That is what happened in those cases you highlight. It is a function purely of the design of their experiment, and is not a function of the underlying physics. (see their website for more details on this issue).

    Let me state my essential point more clearly - in NO model of the atmosphere that passes basic screening for a reasonable climate and is in equilibrium with the atmospheric composition and sea surface temperature will the sensitivity to 2xCO2 be negative.

     
  • At March 21, 2006 2:49 PM, Anonymous Anonymous said…

    You're telling me that cooling trajectories in a peer-reviewed (nature, no less) paper are wrong for technical reasons that were clearly not picked up by the reviewers in the first place.

    It should be obvious why some people are sceptical. I am an intelligent outsider (mathematician and statistician), and I understand the complexity of modeling although not the specific details of individual climate models. I know that if just one trajectory is cooling it needs an explanation, not just to be averaged away in an ensemble.

    If I can't even trust recent Nature papers but have to come to "Gavin" to be set straight, then I am deeply suspicious of the whole endeavour. Smacks of the high priesthood, not science: "we know how to set up the models properly, the rest of you should not worry about that. We'll tell you when the results are right or wrong."

    For all I know, the real criterion used in practice for pruning trajectories is that they show cooling.

     
  • At March 21, 2006 7:22 PM, Blogger coby said…

    Just FYI, this is the Gavin who is trying to help you.
    http://www.giss.nasa.gov/~gavin/

    This conversation is over my head, but I can recognize strawmen and childish taunts when I see them. In your first paragraph you have rewritten, not rephrased, what Gavin just told you, and your "high priesthood" crack is unoriginal and vapid.

     
  • At March 21, 2006 10:56 PM, Anonymous Anonymous said…

    To anonymous: actually, you should not trust anyone. The "peer review" is just that, a review by peers who are likely from the same "Linear Amplification" camp. The first time I saw this link, I said "wow! finally somebody has a realistic approach". Then, I found a couple of problems with the cited paper.

    First, the same question: why would anyone assume that the right GCM must have a fixed point attractor, when all components of the system operate way above their instability limits? Just looking at historical data, it makes me wonder... Sure, even turbulence can be averaged, but then there is another level - re-emergent structures, "coherent structures", Jet Streams, etc. Why would someone seriously assume that this hierarchical process would stop for the climate?

    Second, why do they average an ensemble of initial trajectories, if different trajectories would end up in different areas of the system attractor? The system clearly demonstrates that it has (at least local) hyperbolicity, so why not employ known methods for estimating Lyapunov exponents?

    Third, why would anyone average different systems (a "grand" ensemble) without showing first that the invariant measures produced by each member have at least some resemblance, and do not have completely different topology? It could be like averaging the Lorenz attractor with the Roessler attractor...

    Fourth, why did they stop calculations at the 16-year time point, if even Gavin requires at least 30 years for "climate" to be defined? How do those obviously divergent trajectories evolve beyond the cut point, and what is the characteristic return time in their model, if any? Why don't they construct classic return ("Poincare") maps showing when the system returns to the vicinity of the original perturbation volume?

    Fifth, the article reveals: "Finally, runs that show a drift in Tg greater than 0.02 K yr^-1 in the last eight years of the control are judged to be unstable and are also removed from this analysis." What if Edward Lorenz had done the same and removed trajectories that deviated too much from his expectations... I am not even talking about how it came about that 1.6% of trajectories ended up a factor of 100 to 100,000,000 off the attractor scale. What kind of numerical stability control was there?

    Sixth, how do they select the variety of initial conditions? How do they know that their selection is representative of the topology of the surrounding phase space, and not a subset that belongs to some degenerate submanifold? What is the characteristic dimensionality of the system, and how does the number of "ensemble members" compare to it?

    Seventh, what is the point of mapping the whole variety of end states (that are obviously on some sizeable vector field) onto a scalar function, "global temperature"? It achieves only some political goal, but does not add much to scientific understanding of the nature of climate.

    Questions, questions... I wonder whether their peers asked these questions, and what the answers were. So, you need to be careful with your "anonymous" trust. Don't get me wrong, this article is an outstanding effort. Unfortunately, it illustrates my initial assessment.

     
  • At March 22, 2006 9:37 AM, Anonymous Anonymous said…

    Alexi, I recommend going to the CPDN website for a lot of the answers to your comments, but I will try and clarify some of them.

    First, let's define some terms. Climate models are designed to calculate the changes of state (temperature, pressure, etc - denoted by X) as a function of the current state, the (uncertain) physical parameterisations (denoted by a vector of parameters x_i) and (known) external boundary conditions (y_i - including the level of CO2). You can therefore think of the model as a mapping from a physical state to another physical state L(X | x_i | y_i ) --> X.

    For each of these models, there is a sensitivity to initial conditions X_0 (i.e. the mapping is chaotic), for which Lyapunov exponents etc. can be defined. However, it has been shown time and again (in this class of model) that the long-time mean of X is NOT dependent on the initial conditions (and this is true of all statistics that are a function of X given a long enough time averaging), i.e. all averaging functions f depend only on L(x_i, y_i) and not on X_0: although at any one point f(X) = f(X_0, x_i, y_i), in the limit as time -> infinity, lim(f(X)) = f(x_i, y_i). Let's then define F as a mapping from x_i to f, i.e. F(x_i | y_i) --> f. The global mean surface temperature is simply one such mapping (and the climate sensitivity is the difference between one particular such function for different values of y_i), but any number of other statistics would be equally valid and show similar behaviour.

    I should stress that this is a result, not an assumption. People have indeed looked for chaotic long term mean behaviour, or for the presence of multiple steady states, and although these have been found in simpler models (and (conceivably) may be found in as yet undeveloped more complex Earth System Models), no-one has shown this to be true for atmosphere-only GCMs. Given that people have been looking for 30 years, this is unlikely to change any time soon.

    In the CPDN experiment, the idea is to look at the statistics of F, not L. There is no theory that states that F is chaotic because L(X) -> X is. In fact, it is demonstrable that F(x_i + delta) - F(x_i) is bounded in most cases (i.e. if I make a small perturbation to x_i, the differences in F are also small). Without this being true, no climate modelling would even be possible since every minor structural change in the model would produce wildly divergent results.
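
    For what it's worth, the same behaviour is easy to see in a toy chaotic system (the Lorenz equations again, with the parameter rho standing in for one of the uncertain x_i; purely illustrative): the long-time statistic shifts smoothly and modestly as the parameter is nudged, even though the individual trajectories are chaotic.

      # Toy Lorenz '63 system (Euler step), varying one parameter and watching a
      # long-time statistic, as a stand-in for the mapping F(x_i).
      def mean_z(rho, n_steps=100_000, discard=10_000, dt=0.01):
          sigma, beta = 10.0, 8.0 / 3.0
          x, y, z = 1.0, 1.0, 20.0
          total, count = 0.0, 0
          for i in range(n_steps):
              x, y, z = (x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z))
              if i >= discard:
                  total += z
                  count += 1
          return total / count

      for rho in (27.0, 28.0, 29.0):
          print(f"rho = {rho}: long-time mean of z ~ {mean_z(rho):.2f}")
      # Small changes in the parameter give small, systematic shifts in the
      # statistic, even though the trajectories themselves are chaotic.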

    However, while F is not chaotic, it is important to see what the dependence is on reasonable choices of x_i and this is what is attempted (pretty much for the first time) at CPDN, and this is what makes the study worthy of a Nature paper.

    Now going to your specific questions.

    1) No-one assumed GCMs have a fixed-point attractor - they have observed it.
    2) Because as explained above, averaging over different X_0 gives robust statistics that do not depend on X_0. i.e. the pdf of all Lorenz attractor trajectories is stable.
    3) Because the grand ensemble of F over parameter space is not chaotic.
    4) These were just the calculations they had at that point. Extending the calculations in time has been done but does not affect the results much.
    5) If the instabilities were related to relevant physics then they would be the focus of the papers. But AFAIK all unstable runs are related to the unphysical issues mentioned above (and also discussed here: http://www.climateprediction.net/board/viewtopic.php?t=2106 )
    6) This is not an initial condition ensemble. If you are asking how they selected the parameter sets x_i, I think it is based on a Latin hypercube sampling of the plausible space.
    7) any scalar or vector function would do, but since the value of the global climate sensitivity is of some societal relevance, that is a good one to start with. See here: http://www.realclimate.org/index.php?p=240#ClimateSensitivity
    8) The reviewers of the paper presumably thought (as I do) that this was a neat and unique experiment which had interesting preliminary results. The fact that we are still discussing it bears out that contention. See here for my thoughts at the time:
    http://www.realclimate.org/index.php?p=115

     
  • At March 22, 2006 8:09 PM, Anonymous Anonymous said…

    Just FYI, this is the Gavin who is trying to help you.
    http://www.giss.nasa.gov/~gavin/


    Ok, good - he certainly should know what he is talking about, then.

    This conversation is over my head, but I can recognize strawmen and childish taunts when I see them.

    Over your head you may be, Coby, but my point is not childish and it is not a strawman. And if I misrepresented Gavin's position, then I hope he corrects me, because my only intention here is to get to the bottom of this.

    As I understand him, (and please correct me if I am wrong Gavin), Gavin is claiming that no suitably calibrated climate model will ever show a negative temperature sensitivity to CO2, despite the results reported in the Nature paper.

    In fact, Gavin makes a further claim in his response to Alexi, namely that the long-term average of the global temperature (f) is a continuous function of the physical parameters x_i. This is essentially a statement that the climate models are ergodic. Do you have a reference for this Gavin? Do you also have a reference for the claim that no properly configured model can show negative sensitivity to CO2?

    I ask, because establishing ergodicity and mixing times of physical models is notoriously difficult. Even for a model as simple as the Ising model, numerical simulations using MCMC methods for many years yielded incorrect results because the simulations actually had a much longer mixing time than was assumed.

    ...and your "high priesthood" crack is unoriginal and vapid.

    Maybe so. But if negative trajectories are so obviously impossible (as Gavin implies), how did they make it into a Nature paper? Science has to be objectively reproducible. What are the objective criteria for ruling out anomalous trajectories?

     
  • At March 22, 2006 11:00 PM, Anonymous Anonymous said…

    Thanks, Gavin, for the explanations; now I can see the light. In fact, I just started reading the report
    http://pubs.giss.nasa.gov/docs/2006/2006_Schmidt_etal_1.pdf
    and was pretty impressed with the amount of detail and model levels, and how many man-years went into conceiving a project of this scale. However, I am still fuzzy about the fundamental logic behind this kind of undertaking.

    Let me map what you said above onto an extremely simplified example, for clarity. Let's consider a sound amplification system with a mike, as in one of the examples on realclimate.org. If you get too close to a loudspeaker, or an operator turns the amplification up too much, you get a self-sustaining high-pitched tone, ear-shattering feedback. Then, you put a palm on the mike, it filters some high frequencies, so the tone of the loud sound shifts down, and gets severely distorted. You step in the wrong direction, closer to the speaker, and the speaker's voice coil goes up in smoke. Then silence. This is the system, a mike, an amplifier (with a parameter), and a speaker, having three distinct regimes.

    Now, following the above research strategy, I will consider a map of the speaker cone displacement into itself after 30 milliseconds. According to the approach, I will be interested in statistics of the cone displacement, starting with its mean. The result of observations is that the mean is about zero when the speaker was in the high-pitch (and small displacement) mode. When the sound got into the lower tone, the nice sine wave turned into ugly nonlinear distortions, yet the mean of the displacement is still about zero. After the voice coil smokes out, the mean displacement is also zero. So, what kind of conclusion can be drawn from measurements of the lowest-order statistical moment of the system? What did we learn? That all three regimes have the same global temperature and are not fundamentally different? (Actually, if the system had played music before our experiment, the mean was also zero.) I sincerely do not see any value in the approach you are following.

    What was alarming for me in your explanations is that you consider the CO2 concentration as a part of the "boundary conditions", which are commonly considered as fixed, given. Therefore, I now see that we are talking about different things. You are talking about relatively short-scaled events like the Pinatubo eruption or other episodic gas emissions, and expect the system to approach a "state of equilibrium". In fact, you are looking for this state to "calibrate" the system. Obviously, effects like abrupt glacial terminations are outside the scope of this approach. Obviously, if the CO2 concentration is a given, there can be no talk about what changes it, or what causes those changes in a longer historical perspective.

    In contrast, I was talking about the long-term intertwined dynamics of ice caps, changes in ocean streams and heat content (leading to outgassing), evolution of biomass, etc., which presumably results in large quasi-periodic swings of ice ages. Shall we improve on terminology, and define some "grand global climate dynamics" as opposed to the ordinary definition of climate as 30-year-averaged weather?

    Going to my specific questions:
    1) Confusion here: you said L(X,...) -> X is chaotic, now you say it is a fixed point attractor. Which one is true?
    2) As we see from the example above, mean statistics conceal fundamental details of system dynamics.
    3) As we see again, topologically-different systems may have the same F; averaging different systems still makes no sense.
    4) A system of such high complexity may have an extremely long "warm-up" time, especially if the initial conditions X@t=0 are not on the attractor (known attractors usually have fractal-like basins of attraction and a web of heteroclinic orbits, loosely speaking); it may take millennia to get on the attractor.
    5) That explanation of "cold equators" shows that the model has known escape routes, so arbitrarily chosen initial conditions of finite measure may blast the system away, so the model has an apparent fundamental problem.
    6) See (4) and (5) for what I was suspicious of.
    7) Just as I said, the political goal conceals fundamental scientific details.
    8) I would agree with reviewer's motivations; however, the fundamental methodological flaws make the paper less neat.

     
  • At March 23, 2006 10:23 AM, Anonymous Anonymous said…

    1) my mistake. GCMs are chaotic, they do not have fixed point attractors.
    2) in your example, it is only the mean that is constant. In GCMs you can pick any statistic you like. You could pick something obscure (like the standard deviation of the evaporation at a location in the Indian Ocean) and see the exact same behaviour. But very few people are particularly interested in that statistic. The global climate sensitivity is of much greater interest. If you define working on problems that people care about as 'political', then this would be political, but that is a very odd choice of word to me.
    3) There is no evidence that there are fundamentally different states/attractors anywhere near the current climate system. Should these be seen in current climate models it would be a big deal. The fact that the climate models also do a reasonable job at the LGM indicates that even the climate then was probably not a fundamentally different system (but I wouldn't rule it out completely).
    4) This is possible. But again there is no evidence of fractal boundaries - possibly this is due to the dissipative nature of the problem, but it is empirically observed that this is not the case.
    5) The real climate system has many additional feedbacks that the set-up used for the CPDN experiments (since there isn't a full ocean) does not. But again I stress that this is a feature of this set-up, not of climate models in general.
    7) No paper is perfect, and assuming that everything written in a Nature paper is correct or better validated than in any other paper is probably foolish.

    One final point. What is a boundary condition, and what is part of the solution, is purely dependent on the extent and physics of the model you are discussing. The CPDN model does not include a carbon cycle and so CO2 is a boundary condition (possibly time-dependent). As you correctly discern, in this configuration it cannot be used to do glacial to interglacial cycles. It is suitable for doing 20th Century transient runs though, since we know what CO2 has been over that period. There is no 'equilibrium' reached over that time period, and so initial condition ensembles are used to average over the individual realisations of the 'weather'. Other model configurations (which have atmospheric chemistry modules, carbon cycles, aerosols etc.) will be used for questions that involve those kinds of feedbacks. These models have not yet evolved to the point where they can do the 'grand global climate dynamics' on a general scale.

    PS. To Anonymous. What definition of ergodic are you referring to?

     
  • At March 23, 2006 4:18 PM, Anonymous Anonymous said…

    Gavin, I am using the usual definition of ergodic, eg this link

    One of the simplest illustrations of ergodicity is Markov chains. If the chain is ergodic the long-term distribution over states in the chain is independent of the initial state. If not, the long-term distribution can depend on the initial state.
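
    A concrete toy version of this, in case it helps (a three-state Markov chain, nothing to do with climate per se): start it from two different initial distributions and both relax to the same stationary distribution; how long that takes is, informally, the mixing time.

      import numpy as np

      # A small ergodic Markov chain: rows of P sum to 1 and the chain is
      # irreducible and aperiodic, so a unique stationary distribution exists.
      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.3, 0.6]])

      def evolve(w0, steps):
          w = np.array(w0, dtype=float)
          for _ in range(steps):
              w = w @ P
          return w

      print(evolve([1.0, 0.0, 0.0], 200))   # start certain of state 0
      print(evolve([0.0, 0.0, 1.0], 200))   # start certain of state 2
      # Both prints give the same distribution; how quickly the two agree as a
      # function of 'steps' is (informally) the mixing time.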

    I guess some explanation of how I am mapping these concepts onto climate models is in order (please correct me if my mapping is non-standard or nonsensical).

    1) I assume the state in a climate model is given by the weather. That is, weather == state.

    2) The evolution of the state (weather) is governed by a (very complex) partial differential equation (pde) which is discretized in computer simulations giving the weather at time T + delta as a function of the weather at time T.

    3) The simulations are run for an ensemble of initial states (weather), which can be interpreted as selected from an initial distribution W_0 over all possible weather states.

    4) With this interpretation, the dynamical system of the climate model evolves the initial weather distribution W_0 forwards in time to give the distribution over weather states W_t at time t. (Of course W_t is not represented directly - it is represented by an ensemble of trajectories whose initial states were chosen according to W_0).

    With these interpretations, the statement that a climate model is ergodic is equivalent to saying that in the limit as t tends to infinity, the distribution W_t approaches a fixed distribution W, regardless of the initial distribution W_0. The rate at which W_t approaches W (using almost any suitable measure of the distance between distributions) is called the mixing time. Obviously, to avoid transient behaviour you need to run the system long enough for W_t to be close to W, so knowing the mixing time is important.

    [there may be an issue with this interpretation if the model is deterministic, but presumably it is ok to add a stochastic component to the model dynamics without hurting the conclusions]

    Now, obviously the long-term distribution W_t can't be completely independent of W_0. For example, you could initialize the system with (the very unphysical) weather having vertical wind-velocity everywhere in excess of the earth's escape velocity, and the entire atmosphere would blow off in the first few minutes.

    However, ruling out silly weather configurations, do you have a reference showing the climate models are ergodic in the sense above? And then the second part of your claim, that they are continuous in the physical parameters x_i? (This I interpret to mean that the limit distribution W is a continuous function of x_i, again in an appropriate topology on distributions, an obvious choice being the one generated by the metric used to measure convergence of W_t.)

     
  • At March 24, 2006 6:39 PM, Anonymous Anonymous said…

    Thanks.

    The answer is almost certainly yes, these climate models are ergodic. There do not appear to be any sub-spaces that only map to themselves. The demonstration of the von Neumann/Birkhoff result is that the long term mean or statistic of any particular function for one trajectory is equal to the mean at one time of all trajectories started with different initial conditions.

    I am unaware of any formal proof of this result, but empirically, the result is close enough to being true to make no practical difference. Of course, all you need to find to disprove it are multiple (statistical) steady states, but as I said these have not been reported for this class of model (though it may be in the future once larger numbers of bio-geochemical feedbacks are included).

    Similarly, the result that W is a continuous function of x_i is also empirical - but again, a disproof of this would be easy if it were not generally true.

     
  • At April 01, 2006 2:01 PM, Anonymous Anonymous said…

    The issue at fault with the 'climate models', apart from the obvious discrepancies between the behaviour of the included materials and that actually presented, is further exacerbated by the realisation of the 'double' inclusion of kinetic energy within the calculations.

    It is easily seen in the slide at http://www.climateimc.org/?q=node/348 (*) with the displayed trend to Human Population that the rise in median temperature of the surface ocean waters run in a trend similar to that of the surface, but with a delay, and in a muted manner.

    The delay is due to the direct Conduction processes transferring the kinetic energy from the land to water surface within the actions of Convection. It would appear to have been ~15 years in 1930, for example.

    The dampening of the trend is due to the ability of the liquids of the ocean being able to produce Turbulence in reaction to these gains of kinetic energy. As such these processes of Turbulence lower the residual energy that is recordable as 'temperature'.

    So the water surface will show a lagging trend of lower and more moderate increases whilst the dry land surface continues to be rematerialed and present a generally rising median temperature from altering interactions with incident radiation produced by the altering of the materials OF the surface made within the sprawl of Human Habitat.

    The atmosphere being a gas is able to display much more readily the effects induced by Turbulence, hence the observed weather patterning alterations.

    The combined 'land/ocean' plot is however presenting a 'double count' of much of the kinetic energy.

    This is WHY the 'models' are NOT preemptive AND give scenarios of such 'alarming fantasies' as they include TOO MUCH energy. Certainly much MORE than is actually present.

    This is WHY there is still some attempting to platform 'greenhouse concepts', as the claim is made of a need to 'account' for the 'energy observed'. It is just that the energy is not actually present in the amounts 'inferred'.

    Again I mention this to play the difference of OBSERVATION to INFERENCE.

    Too often, INFERENCE is given precedence over OBSERVATION in relation to 'greenhouse (and related) concepts'.

    Yours, Peter K. Anderson a.k.a. Hartlod(tm)
    From the PC of Peter K Anderson
    E-Mail: Hartlod@bigpond.com

     
  • At April 02, 2006 6:56 AM, Anonymous Anonymous said…

    Can anyone attempt to clear up something that's been bugging me (a relative layman) since Quirin Schiermeier's recent news feature in Nature, please?:

    From what I understand, the more complex models (such as those used by the IPCC - HADCM3 etc.) do not predict a great weakening or shutdown of the THC in the 21st century, but simpler models (such as those used by Schlesinger et al., Challenor et al. and the ones analysed by Rahmstorf) put the probability much higher.
    The article states that the complex GCMs cannot account for all the complexities and feedbacks involved (due to the unfeasible computational time it would take) and that for some purposes the intermediate models can capture things better. With this in mind I have been unable to resolve the dilemma (despite a lot of reading around!):
    How can we be confident in the complex models' THC predictions if they cannot fully account for the processes involved?
    Are the intermediate models so inept that we should really disregard them - or do they have merit enough to challenge the THC findings of the complex models?
    Is there middle ground between the two - should we perhaps be more cautious than the complex models suggest, if they can't fully evaluate such things yet?

    Thanks

     
  • At April 02, 2006 1:35 PM, Blogger coby said…

    For a good RealClimate discussion of the THC/Gulf Stream/European Ice Age issue, see here.

    My impressions are that it takes a complex model to try to predict that because it depends on polar amplification of GHG warming, the behaviour of the Greenland Ice Sheet, and some complicated interplay of ocean currents. The current understanding seems to be this is unlikely to happen in at least the next 100 years because it would take some really massive amounts of fresh water injected into the Arctic ocean waters and in just the right spots.

    I also think it is cautiously acknowledged that these kinds of "state changes" or "tipping points" are highly uncertain. The models are built with and tuned to the current climate and once out of the current climate in a significant way are less reliable. As you will see on Real Climate the climatologists think this won't happen any time soon. I personally, would not rule it out. I would be very surprised if we are not in for some big surprises and this may be one of them.

    FWIW, there was a recent paper on a successful hindcast of the THC shutdown that occurred at the 8200 year event.

     
  • At April 04, 2006 4:06 PM, Anonymous Anonymous said…

    But...the hypothesized upper tropospheric leading has not been observed like it should have been. There could be all kinds of (good) reasons for this, but to say that's been confirmed is misleading, and doesn't help the discussion. Advocates of the AGW theory need to acknowledge this and come clean about it--and suggest factors that might help to explain it. Otherwise the debate will remain political, i.e. completely useless.

     
  • At April 04, 2006 4:58 PM, Blogger coby said…

    Can you elaborate? And I am not sure what you mean by "leading", I at least was only referring to warming.

     
  • At April 05, 2006 8:31 AM, Anonymous Anonymous said…

    I just meant that the upper troposphere is supposed to warm faster than the lower troposphere, and the evidence seems to be pointing in the other direction. I'm not saying there isn't a good explanation for this, I'm just saying that from what I know, so far the models can't explain this.

     
  • At April 05, 2006 11:14 AM, Anonymous Anonymous said…

    Re: Upper troposphere/lower troposphere "discrepancy": CCSP product 1 addresses some of this issue, with the conclusion (to the best of my understanding) being that within the bounds of experimental error there is no significant discrepancy between models and observation (mostly because corrections to observations have been moving them in the direction of the models)

     
  • At April 05, 2006 11:15 AM, Blogger coby said…

    Oh, okay. Yes, I think that is correct with some caveats. First, there are several satellite analyses and they show slightly different trends. Second, I believe the model predictions and all satellite observations are within each other's error bounds, so while the agreement is not perfect, it is not necessarily an indication that either is significantly wrong. Third, the satellite record is very short, I think we need a few more decades before a robust trend can be nailed down.

    And of course given the history, would more errors being uncovered be a surprise? It is an amazingly complicated analysis.

    Have a read at Real Climate for the messy details.

     
  • At April 05, 2006 7:53 PM, Anonymous Anonymous said…

    Yeah, I was wondering about the error bounds. Will have a look at that. I certainly agree that a few more decades would give us a better analysis--but isn't that a point in favor of so-called "skeptics"? I mean, if we still can't be sure that the model works yet...doesn't that leave legitimate room for doubt? I'm just asking.

     
  • At April 05, 2006 9:42 PM, Blogger coby said…

    Well, I think this depends very much on one's personal approach to risk. Do you wait until you are 100% sure that you are in grave danger before taking action? Or do you avoid something that has even a 20% chance of being very painful if not crippling? Or all kinds of degrees in between.

    Plus, what is it we need to be sure of? It is clear we are warming quickly, how much does it matter if the troposphere is or isn't warming faster? Of course, I think it would be a big deal for climate science and would demand new theory but we do live at the surface after all.

     
  • At October 23, 2006 3:54 AM, Anonymous Anonymous said…

    In response to Anonymous' "Yeah, I was wondering about the error bounds", coby writes:

    > Well, I think this depends very much
    > on one's personal approach to risk.

    One's preferred interpretations and one's debating PoV depend on that, yes. But Science does not depend on it.

    In Science, you determine error bounds for any uncertainties in your premises, you calculate how those uncertainties permeate through your analysis or computation, and you then present your predicted results along with their associated error bounds.

    That's Science.

    It doesn't matter how fantastic your modelling programs are or on which sexy supercomputers they run or how many high-profile researchers run them. If there is no corresponding uncertainty analysis and presentation, it's not Science, it's wishful thinking.

    The Science of climatology is very interesting, and important. The rest isn't.

     
  • At October 23, 2006 9:53 AM, Blogger coby said…

    Sure, I have no disagreement with your comment. My remark about one's personal approach to risk was only meant to apply to what one does when faced with a given amount of uncertainty about likely danger.

     
  • At October 27, 2006 7:53 AM, Anonymous Anonymous said…

    Gavin (if Schmidt) should go and answer the many questions asked of him on the Climate Sceptics mailing list. He seems to have been avoiding answering those questions (hard to answer, I know) on the basis of "rudeness" shown by Hans Jelbring.

    And Gavin (if Schmidt) has not yet responded to the paper by Mauas & Flamenco (2005) on "Solar Forcing of the Streamflow of a continental Scale South American River", showing a 99.99% confidence level for the correlation between solar activity (solar cycles) and the hydrological cycles of the Paraná river, the fifth in size in the world - which pairs with other studies on Lake Victoria in Africa and other parts of the world.

    BTW, hasn't it occurred to you that Arrhenius' prediction has no meaningful correlation with the CO2 increase, and his "success" was just a 50/50 hit, due to the Earth coming out of the Little Ice Age?

     
  • At October 27, 2006 10:06 AM, Blogger coby said…

    BTW, hasn't it occurred to you that Arrhenius' prediction has no meaningful correlation with the CO2 increase, and his "success" was just a 50/50 hit, due to the Earth coming out of the Little Ice Age?

    http://illconsidered.blogspot.com/2006/09/we-are-just-recovering-from-lia.html

     
  • At November 07, 2006 10:04 AM, Anonymous Anonymous said…


    How will people in 2100 judge our foresight when they will have the benefit of hindsight? Will history judge our leaders wise to have waited for proof of danger and agree that "they couldn't have known what was coming"? I don't think so.


    This is a kind of "When did you stop beating your wife" question, isn't it.

    After all, if your arguments are so good, why do you need such an emotional conclusion?

     
  • At November 15, 2006 10:29 PM, Blogger coby said…

    In order to get accurate hindcasts of the past you need complete information. If that information is absent you should not expect any model to reproduce the past. I am not sure exactly what you are talking about, but it is a reasonable thing to do model experiments adding a volcanic event or gamma ray burst or whathaveyou as proposed explanations. This does not have any bearing on the future or on hindcasts over the recent past where information is relatively complete.

     
  • At November 16, 2006 1:03 PM, Blogger coby said…

    At this point I would like to draw your attention to the difference between a projection and a prediction. No one, human genius or sophisticated computer model, can predict how human behaviour will change the atmosphere over the next 100 years. Therefore, no model run over the 21st century can be called a prediction. They are projections of what will happen IF emissions follow a given path and cover ranges of possible futures. Hindcasts are like running a model projection one hundred years ago given perfect knowledge of the "future". Because hindcasts of the 20th century perform very well, we should have confidence in projections, but still can not call them predictions.

    I don't understand how feeding required data to a model can be characterized as "spoon feeding".

     
  • At November 16, 2006 11:03 PM, Blogger coby said…

    A climate model is a mathematical construction that takes data about all the various significant factors that influence climate and outputs a representation of what the climate should do in those circumstances. If you do not give the model accurate and complete data about the forcings over the timeframe you are examining it can not give you a reliable output.

    Weather forecasting is an initial conditions problem, you program in the starting parameters and then run it forward to see what will happen. Climate models are not like that, initial conditions do not matter. What matters is the progression of the various forcings over time. "adding data from later events" is giving the model the correct information and thus getting a more reliable output. If the output does not match the observations, you know that something is wrong with your model. If the output conforms with the observations you know your model works for the conditions you have tested it with. The more varied the test conditions you can have the better the model has to be to pass them all.
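
    A minimal sketch of what "initial conditions do not matter" means here, using a zero-dimensional energy-balance toy rather than a real GCM (the heat capacity, feedback parameter and forcing ramp below are made-up but plausible-looking numbers, purely for illustration):

      import numpy as np

      # dT/dt = (F(t) - lam * T) / C : a toy forced climate, not a GCM.
      C = 8.0      # effective heat capacity, W yr m^-2 K^-1 (assumed)
      lam = 1.25   # feedback parameter, W m^-2 K^-1 (assumed, roughly 3 K per 2xCO2)
      years = np.arange(1900, 2101)
      forcing = 0.04 * np.clip(years - 1960, 0, None)   # crude GHG forcing ramp, W m^-2

      def run(T0):
          T, out = T0, []
          for F in forcing:
              T += (F - lam * T) / C        # one-year explicit step
              out.append(T)
          return np.array(out)

      for T0 in (-0.5, 0.0, 0.5):
          print(f"start at {T0:+.1f} K -> anomaly in 2100: {run(T0)[-1]:.2f} K")
      # Runs started a full degree apart converge within a few decades and end at
      # essentially the same warming: the answer is set by the forcing history,
      # not by the starting state.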

     
  • At November 18, 2006 5:38 PM, Blogger coby said…

    No problem, your comments, questions, objections, flames ;) are welcome here!

     
  • At January 09, 2007 9:45 PM, Anonymous Anonymous said…

    I've seen a ton of hindcasting and holdout / validation-set analyses of model performance, but are there any actual examples of validation of a GCM prediction made on date X for some future date, along with subsequent analysis of predictive accuracy at the forecast date?

    I think it's pretty easy to get around the whole 'projection vs. prediction' issue. I assume that to validate the predictions of a model, you would want to escrow a version of the model along with input assumptions and operational scripts at the time of the prediction, and then at the time of validation populate it with the actual values of all inputs, re-run the model and compare the prediction to actual. I don't think it really makes any sense to say "I predicted that the temperature would be X and it is Y", since embedded in my forecast of X were tons of updatable inputs, and the purpose of the validation exercise is to validate my relationship between GHGs and temperature (or whatever). Obviously here I mean 'validate' in sense of failure to falsify.

    I have been sent to the 1988 Hansen predictions, which in his 2005 PNAS paper he agrees can not yet be validated.

    Thanks in advance,
    Jim

     
  • At January 12, 2007 4:59 PM, Anonymous Anonymous said…

    >" Weather forecasting is an initial conditions problem, you program in the starting parameters and then run it forward to see what will happen. Climate models are not like that, initial conditions do not matter."

    Reading this literally, surely this can't be true. I think you mean to say, "minor details in the initial conditions do not matter". Imagine starting a 100-year climate model run much like any other, except with ten times as much CO2 in the atmosphere as there actually was in the year 1900. That should throw off the result by quite a bit.

    As I understand it, climate models start with certain conditions and are allowed to run their course unaltered. Data on the environment isn't updated mid-simulation to reflect the historical records. Am I right?

     
  • At January 12, 2007 6:29 PM, Blogger coby said…

    Hi Nickolas,

    I think you have the right idea, we are all being sloppy with defining our terms here! I believe one thing that is typically done with a climate model is to set all the relevant conditions, i.e. GHG levels, sea ice extent, aerosols etc, and run the model until it reaches an equilibrium state as a test of its viability. It is then run forward with the changing parameters, using historical data for hindcasting and projections of pollution levels for the forecasting. They also do ensemble runs, rerunning the same scenario many times and then taking the average trajectory. This way "natural" variability is removed (scare quotes because I mean the simulated natural variability).

     
  • At January 12, 2007 6:30 PM, Blogger coby said…

    Jim,

    I am trying to get Gavin from RC to answer your question as I think it requires a more technical answer than I can give you.

    Stay tuned.

     
  • At January 15, 2007 4:44 PM, Blogger coby said…

    Here is a reply to Jim's query a few comments up-thread that I received from Gavin Schmidt via email:

    QUOTE===========
    The first point I would make is technical. Climate change is a research topic, not an operational activity. The difference is that there is no 'National Climate Service' whose job it is to continually make forecasts and re-analyse their forecasts on a regular basis. This does happen for the seasonal prediction crowd (see the IRI website or PMEL ENSO forecasts for instance), but those endeavours are much more amenable to that kind of project (i.e. the forecasts can be compared to actuality often). But for the multi-decadal projections, there is no such facility - and it's kind of obvious why - by the time you validated something you would be much better off with the tools that had been developed since you made the first forecast (just go with the growth of computer power). It is therefore unlikely that any such formalised program will ever be set up (at least in such a simplistic way).

    The second point is that we are dealing with a forced response, not an initial value problem. Pulling out a clear forced response in the presence of weather noise takes time (or a really large forcing). Given the current growth of radiative forcing ~0.04 W/m2/year, the trend in global mean temperatures of ~0.2 deg C/dec and 'noise' in the global mean temperatures of 0.1 to 0.2 degrees, you need approx 2*sigma change to detect a trend, and therefore ten to twenty years simply to see if a trend emerges from the noise, and of course, much longer to have a good constraint on the magnitude of the trend.

    Given these two points, the best that can be hoped for is that projections made in the literature using assumptions that panned out (i.e. that were done with reasonable scenarios) will be compared with observations over the decades that have passed since. The first of those 'realistic' projections were done in the mid 1980's and so are now starting to be usefully compared. Hence the interest in Hansen et al 1989.

    I enclose a preliminary figure of the radiative forcing from the three H89 scenarios along with the forcings we are using today for the period over which H89 had to make a guess (1984-2005). As you can see, scenario B was right on the money with respect to the net forcing (I took out the volcanic part, and the three 'obs' lines are for just the well-mixed GHGs, everything except volc and solar, and everything except solar). However, the temperature trends over this period (deg/dec: A: 0.41+/-0.05, B: 0.23+/-0.06, C: 0.23+/-0.05, Met-Index: 0.23+/-0.04,LO-index: 0.20+/-0.03) can't be distinguished from each other in a significant way (Scenario A excepted). So even now we still can't formally validate a 1988 projection.

    You can do better with short term large perturbations - such as for Pinatubo, and indeed, the GISS team made forecasts of the effects on global mean temperature ahead of time that worked out very well (a maximum cooling of 0.5 deg C).
    http://pubs.giss.nasa.gov/abstracts/1992/Hansen_etal.html

    So I don't really know what to add. If you want climate forecasts validated like weather forecasts you will need to wait for a couple of decades. If you prefer instead to use the climate models as one additional piece of evidence that climate change is happening for the reasons we think it is, then you are already done.

    Gavin

    ==========END QUOTE
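
    Just to unpack the "ten to twenty years" arithmetic from the quoted email (a back-of-envelope reading of the numbers Gavin gives, nothing more):

      # Trend must emerge from interannual noise at roughly the 2-sigma level.
      trend = 0.2 / 10.0   # deg C per year (0.2 deg C per decade, from the email)
      sigma = 0.15         # deg C, middle of the 0.1 to 0.2 'noise' range quoted
      years_needed = 2 * sigma / trend
      print(f"approximate detection time: {years_needed:.0f} years")   # about 15 years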

     
  • At January 17, 2007 6:45 AM, Anonymous Anonymous said…

    Coby / Gavin:

    Thanks so much for taking the time to respond.

    I agree exactly with the point that the 1988 Hansen forecast can not yet be validated (as does Hansen). This is pretty intuitive when you recognize that the projected temperature increase for 1988 - 2005 for Scenario C (basically, radical reduction in carbon emissions) is GREATER than that projected for Scenario B (basically, business as usual). That is, we are still in a period of statistical noise.

    I think there is a lot of debate about the implications of the Mt. Pinatubo eruption "natural experiment" for estimating real-world temperature sensitivity to GHG concentrations, see for example Douglass and Knox's well-known recent paper in GRL:

    Douglass, D.H., and R. S. Knox, 2005. Climate forcing by volcanic eruption of Mount Pinatubo. Geophysical Research Letters, 32, doi:10.1029/2004GL022119.

    I get the point that validation of GCM predictions is inherently lacking, given that they are finding trends that require at least 25 years or more to measure (since we would care about changing behavior well before that if we believe in catastrophic warming). But this does mean that one of the two standard questions scientists ask about a simulation model in any field - how accurately does it predict the outcome of interest when presented with complete input data? - can not be realistically answered.

    The other key question is normally "Do the quantitative relationships embedded in the simulation represent a reasonably complete set of known physical laws?" The answer to this question is at best "partially" for GCMs. Today, scientists: (1) have not agreed on a complete list of feedback effects, (2) have not modeled the quantitative physics for some of the known feedback effects that are believed to be significant, and (3) have not modeled the quantitative physics for many of the hypothesized interactions between feedback effects. Note that massively net positive feedback is essential to achieve the UN IPCC level warming predictions, so it is central to the debate and not a marginal issue.

    So, given that I know that the GCM models are based on a radically incomplete set of relationships for central feedback effects and I can't validate them through prediction, the key question becomes: why should I rely on their predictions?

    Sorry for the long post, and thanks again for your time and attention.

    Jim

     
  • At February 20, 2007 5:04 PM, Anonymous Anonymous said…

    Objection:
    Why should we trust a bunch of contrived computer models that haven't ever made a confirmed prediction?

    ________________________

    Unless you have proof that any computer models have ever been successful in predicting, then shut up, neo-Communist propagandist.

     
  • At November 26, 2007 7:18 PM, Anonymous Anonymous said…

    Give me a break! Comparing a guy's predictions from 1890 to the predictions of today is absurd! If anything, you should look at the fact that he UNDERESTIMATED the effect as a hint that his proposition was incorrect.

    You then point to our good friend, Hansen (of global cooling fame, IIRC) and say that his 1988 "predictions were validated." The only thing that was "validated" is that it is warmer. The cause of the warming, however, has NOT been verified. It is entirely possible that he was just plain lucky.

    And then, most importantly, you leave out solar cycles. You know what looking at data concerning global temperatures and solar cycles tells us? A predicted period of high temperatures right abooooooouuuuuuut... now. Well, actually a period of rising temperatures around the start of the century, but you get the point. Thus, it raises the question of "was Hansen just a lucky guesser?"

    Ultimately, the point is this: your claim that the current models are validated by previous "correct predictions" is phony and fallacious. The two matters are entirely separate, and any scientist worth his salt (or a reasonably logically minded person for that matter) would quickly put you in your place.

     
