Watt about deterministic chaos?

There’s a recent paper in Monthly Weather Review (an American Meteorological Society journal) called An Evaluation of the Software System Dependency of a Global Atmospheric Model. The paper considers a global atmospheric numerical model run on 10 different computer systems with different central processing unit (CPU) architectures or compilers. The authors find that “there exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems.”

Now, as far as I can tell, this is probably – in some sense – simply an illustration of deterministic chaos. Rather predictably, Watts Up With That (WUWT) already has a post called another uncertainty for climate models – different results on different computers using the same code. The post actually says

It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.

I think this indicates a misunderstanding of deterministic chaos and of what this paper is actually illustrating. I’ve done some work on deterministic chaos, so I thought I would briefly discuss my understanding of what is meant by the term, although – bear in mind – it has been a long time since I’ve looked at this in any detail.

It’s my impression that many think chaos means random, or means that we don’t really understand the underlying physics or mathematics. In fact, deterministic chaos is a well-defined mathematical concept. It refers to a situation in which you can completely understand, and mathematically model, a system, but cannot determine its initial conditions with sufficient precision to be sure that any simulation of the system’s evolution will actually match what is observed.

Essentially, it is a system that is extremely sensitive to its initial conditions. There are, consequently, two related issues. One is that if you try to simulate such a system (with a computer programme, for example), you cannot possibly determine the initial conditions with sufficient precision to know whether your simulation results will actually match what you would observe. A classic example of a chaotic system is a double pendulum, whose motion is chaotic for certain energies. It seems nice and simple, but even though we can write down the exact equations describing the system, we would be unable to accurately model its evolution from a given initial position, because we couldn’t possibly measure the initial state with sufficient precision.

The other issue is that because such systems are so sensitive to initial conditions, even if you do know how one behaves for a given set of initial conditions, you can’t use that to estimate how it would behave if you changed those initial conditions very slightly. Hence the term the butterfly effect. There are numerous variants of this effect, but one example is that the flap of a butterfly’s wings in China can set off a storm in Arizona. This doesn’t really mean that a butterfly flapping its wings will actually do this, simply that the weather system is so sensitive to initial conditions that a small change in the system can lead to a large effect at a later time.
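
To make this concrete, here’s a minimal sketch in Python using the logistic map – a standard toy chaotic system, standing in for the double pendulum since it needs no numerical integrator. Two initial conditions differing by one part in ten billion become completely uncorrelated within a few dozen iterations.

```python
# Sensitivity to initial conditions, illustrated with the logistic map
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic for r = 4.

def trajectory(x0, r=4.0, steps=51):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)           # one initial condition
b = trajectory(0.2 + 1e-10)   # the "same" initial condition, off by 1e-10

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  a={a[n]:.6f}  b={b[n]:.6f}  |a-b|={abs(a[n]-b[n]):.1e}")
# The difference roughly doubles every step, so by n ~ 40 the two runs
# bear no resemblance to each other, despite identical equations and code.
```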

So, really, this paper is – in some sense – simply illustrating that climate models are inherently chaotic, and hence that even a small difference in how different architectures, or compilers, treat numbers can change the result of a simulation. Not that surprising, really. That’s not to say this isn’t an important thing to understand, simply that it doesn’t really indicate an error in the models.
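
As a toy illustration of the rounding-error point (my own sketch, not the model code from the paper): floating-point addition isn’t associative, so the order in which a compiler, optimisation level, or parallel library accumulates a sum can change the last few bits of the result, and a chaotic iteration then amplifies that last-bit difference into a completely different trajectory.

```python
# Floating-point addition is not associative, so different accumulation
# orders (as different compilers, optimisation levels, or parallel
# reductions may produce) typically disagree in the last bits of a sum.

vals = [1.0 / i for i in range(1, 1001)]
s_fwd = sum(vals)             # summed left to right
s_rev = sum(reversed(vals))   # identical numbers, opposite order
print(f"forward    = {s_fwd!r}")
print(f"reverse    = {s_rev!r}")
print(f"difference = {s_fwd - s_rev:.1e}")   # tiny, but usually not zero

# A chaotic iteration amplifies that last-bit disagreement:
def settle(x, r=4.0, steps=80):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(settle(s_fwd / 10.0))   # same data, same algorithm...
print(settle(s_rev / 10.0))   # ...a completely different end state
```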


8 Responses to Watt about deterministic chaos?

  1. Latimer Alder says:

    ‘it doesn’t really indicate an error in the models.’

    H’mmm

    Doesn’t do much to add to anyone’s confidence that they are worth anything though, does it?

    The list of things they are not good at gets longer by the minute:

    Predicting things
    Being consistent
    Getting better with more development

    Persuade me, please, that they are not just vast sinks of blood and treasure, and that they will eventually come up with something that is practically useful for the ‘climate change problem’. Because after 30 years, their credibility is fraying around the edges.

  2. Well, I disagree (of course, I guess). They’re fundamentally chaotic. There’s nothing that can be done about this. You cannot develop a global climate model that isn’t chaotic. Therefore, you cannot set the initial conditions so precisely that the model (however perfect it might be) could correctly predict the future evolution of the system. Instead, you need to use ensembles and statistics to estimate how the system will evolve in future.

    There is something that I haven’t addressed but will briefly mention here. Much is made of the mismatch between the model predictions for the last decade and the observations. However, one can make a perfectly reasonable argument as to why this is not surprising. Firstly, ENSO events are inherently unpredictable, so even though models include them, the precise timing of the events differs from model run to model run. Secondly, as this post addresses, the models are inherently chaotic. Both of these suggest that gaining understanding requires using ensembles of model runs, rather than focusing on individual runs (which will suffer both from uncertainty in the timing of ENSO events and from deterministic chaos).

    Here’s where you may have to suspend your beliefs 🙂 Consider the possibility that in reality what we’re undergoing is a long-term warming trend, on top of which are superimposed natural variations due to (for example) ENSO events. Given that the variability in the model runs will differ (partly because of the random timing of ENSO events and partly because of chaos), the tendency will be for the ensembles to average out the natural variability, and hence produce results that emphasise the underlying warming trend. Maybe this isn’t what’s happening, but presumably you could at least consider that this could explain the current supposed mismatch between the models and the observations (I say supposed because they’re still consistent at the 5 – 10% level).
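
    If it helps, here’s a toy version of that argument (a sketch, obviously, not a climate model): give every ensemble member the same slow trend plus its own chaotically-generated variability, and the individual runs wander all over the place while the ensemble mean recovers the trend.

    ```python
    import random

    # Toy ensemble: every member shares the same slow "warming" trend but
    # has its own chaotic variability (logistic-map wiggles, seeded per
    # member). A sketch of the argument above, not a climate model.

    TREND, STEPS, MEMBERS = 0.02, 100, 50

    def member(seed):
        rng = random.Random(seed)
        x = 0.1 + 0.8 * rng.random()   # each member starts in its own state
        series = []
        for t in range(STEPS):
            x = 4.0 * x * (1.0 - x)               # chaotic wiggle in (0, 1)
            series.append(TREND * t + (x - 0.5))  # trend + roughly zero-mean wiggle
        return series

    finals = [member(seed)[-1] for seed in range(MEMBERS)]
    print(f"trend alone at final step : {TREND * (STEPS - 1):.2f}")
    print(f"ensemble-mean final value : {sum(finals) / MEMBERS:.2f}")
    print(f"single-run final values   : {min(finals):.2f} .. {max(finals):.2f}")
    # Individual runs scatter widely, but averaging across members largely
    # cancels the variability and pulls out the underlying trend.
    ```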

  3. Latimer Alder, on Google I have found over six thousand pages with your name together with ‘climate’ or ‘global warming’. You seem to find climate change a reasonably important topic.

    May I sincerely ask why, in the last eight years (the first comment I found was from 2005), you never went to the trouble of following a first-year course in atmospheric science or reading an introduction to meteorology? There you would have found information on chaos theory, and it would have equipped you to discuss climate matters with much more authority. Wouldn’t that have been worthwhile?

    It is a sincere question, not just a rhetorical one. I just do not understand this. Is it important to you to have a well-founded opinion? Do you just like discussions?

  4. BBD says:

    Because after 30 years, their credibility is fraying around the edges.

    Argument by completely unsupported assertion. AKA “rubbish”.

  5. chris says:

    Interesting – I think one can highlight the WUWT misconception (and that of Latimer above) about this paper rather more simply.

    Simply put, the platform-dependence of the model described in the Hong et al. manuscript doesn’t matter very much, and isn’t really unexpected. One can conclude that it doesn’t matter very much because the data presented in Hong et al. show that it doesn’t matter very much. On the simulations most relevant to climate models (their seasonal simulation of the boreal summer of 1996 and comparison with the real climatology), the results of the ensemble arising from varying initial conditions (the initial-condition ensemble) and the ensemble arising from platform/software variation (the software-system ensemble) are pretty much equivalent. I’ve pasted a data description from the paper at the bottom of this comment that shows this [*]. Both ensembles give the same qualitative result (a tropical rainfall pattern comparable to observations) and similar quantitative results (see below [*]). In fact, a significant conclusion of the paper is that the platform/software-dependence of model-run variability is acceptable for the simulations studied.

    So what does this all mean? It means that in any computer simulation of the time dependence of computed system observables (e.g. regional and global temperature, rainfall etc.) using a set of equations describing the temporal evolution of parameters and their interactions according to specified forces (e.g. the energy within the system and/or a greenhouse forcing), the particular trajectory of the simulation depends on initial conditions. One might even run simulations with essentially identical initial conditions but send individual trajectories down unique pathways by coupling one or more variables to a random number seed that differs between different runs.
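
    A minimal sketch of that ensemble recipe (toy code under my own assumptions, not any actual GCM): start every member from essentially identical initial conditions, let a per-run random seed inject tiny nudges, and look at the envelope the trajectories trace out.

    ```python
    import random

    # Sketch of the ensemble recipe above: near-identical initial
    # conditions, with a per-member random seed injecting tiny nudges
    # each step. Toy chaotic map, not a GCM.

    def member(seed, x0=0.3, steps=50):
        rng = random.Random(seed)
        x, traj = x0, []
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)   # chaotic dynamics
            x = min(max(x + rng.uniform(-1e-12, 1e-12), 0.0), 1.0)  # seeded nudge
            traj.append(x)
        return traj

    members = [member(seed) for seed in range(20)]

    for t in (5, 15, 30, 49):   # the envelope spanned by the ensemble
        vals = [m[t] for m in members]
        print(f"t={t:2d}  envelope: {min(vals):.4f} .. {max(vals):.4f}")
    # Early on the members are indistinguishable; later they spread out,
    # and it is this envelope, not any single run, that characterises
    # the expected behaviour.
    ```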

    The aim is to generate an ensemble of temporal trajectories (e.g. the time evolution of surface temperature in a climate simulation) that describes an “envelope” of expected behaviour that is not biased by the vagaries inherent in a single run of the simulation. Both weather forecasts and climate simulations are nowadays assessed in ensemble runs in which multiple simulations provide an assessment of the range of expected behaviours inherent in the system and its parameterization.

    Simply put, Hong et al demonstrate that the platform/system-dependence of the simulation is not very problematic with respect to generating ensemble forecasts/hindcasts since the nature of the “envelopes” of trajectories is broadly independent of the platform/software, even if the results of single simulation runs with identical parametrization and initialization are platform/software-dependent….

    ————————-

    [*]

    As shown in Fig. 3, both ensemble results produce a tropical rainfall pattern comparable to the observation, with the main rain-belt along the intertropical convergence zone (ITCZ) (cf. Figs. 3a, b, and c). It confirms the resemblance that three-month averaged daily precipitations simulated from the 10-member initial condition ensemble runs are within a narrow range between 3.387 mm d⁻¹ and 3.395 mm d⁻¹. Those from the 10-member software system ensemble runs are within a similar range between 3.388 mm d⁻¹ and 3.397 mm d⁻¹.

  6. Marco says:

    Note that the Stoat also had something to say (and he’s had quite some experience with climate modeling):
    http://scienceblogs.com/stoat/2013/07/27/oh-dear-oh-dear-oh-dear-chaos-weather-and-climate-confuses-denialists/

  7. Very interesting. Thanks.

  8. Pingback: Watt about falsifiability? | Wotts Up With That Blog

Comments are closed.