Climate Models Can Never Work, Says Computer Modeller

cancel2 2022


If you cannot make a model to predict the outcome of the next draw from a lottery ball machine, you are unable to make a model to predict the future of the climate, suggests former computer modeller Greg Chapman, in a recent essay in Quadrant. Chapman holds a PhD in physics and notes that the climate system is chaotic, which means “any model will be a poor predictor of the future”. A lottery ball machine, he observes, “is a comparatively much simpler and smaller interacting system”.

Most climate models run hot, a polite term for endless failed predictions of runaway global warming. If this were a “real scientific process”, argues Chapman, the hottest two thirds of the models would be rejected by the Intergovernmental Panel on Climate Change (IPCC). If that happened, he continues, there would be outrage in the climate science community, especially from the rejected teams, “due to their subsequent loss of funding”. More importantly, he added, “the so-called 97% consensus would instantly evaporate”. Once the hottest models were rejected, the forecast temperature rise by 2100 would be about 1.5°C above pre-industrial levels, mostly due to natural warming. “There would be no panic, and the gravy train would end,” he said.

As COP27 enters its second week, the Roger Hallam-grade hysteria – the intelligence-insulting ‘highway to hell’ narrative – continues to be ramped up. Invariably behind all of these claims is a climate model or a corrupt, adjusted surface temperature database. In a recent essay also published in Quadrant, the geologist Professor Ian Plimer notes that COP27 is “the biggest public policy disaster in a lifetime”. In a blistering attack on climate extremism, he writes:

We are reaping the rewards of 50 years of dumbing down education, politicised poor science, a green public service, tampering with the primary temperature data record and the dismissal of common sense as extreme right-wing politics. There has been a deliberate attempt to frighten poorly-educated young people about a hypothetical climate emergency by the mainstream media, uncritically acting as stenographers for green activists.

In his detailed essay, Chapman explains that all the forecasts of global warming arise from the “black box” of climate models. If the amount of warming were calculated from the “simple, well known relationship between CO2 and solar energy spectrum absorption”, it would be only about 0.5°C for a doubling of the gas in the atmosphere, due to the logarithmic nature of the relationship.

This hypothesis around the ‘saturation’ of greenhouse gases is contentious, but it does provide a more credible explanation of the relationship between CO2 and temperatures observed throughout the past. Levels of CO2 have been 10-15 times higher in some geological periods, and the Earth has not turned into a fireball.

Chapman goes into detail about how climate models work, and a full explanation is available here. Put simply, the Earth is divided into a grid of cells from the bottom of the ocean to the top of the atmosphere. The first problem he identifies is that the cells are large, at around 100 km × 100 km. Within such a large area, component properties such as temperature, pressure, solids, liquids and vapour are assumed to be uniform, whereas there is considerable atmospheric variation over such distances. The resolution is constrained by super-computing power, so an “unavoidable error” is introduced, says Chapman, before the models even start to run.

Determining the component properties is the next minefield: the lack of data for most areas of the Earth, and very little for the oceans, “should be a major cause for concern”. Once running, some of the changes between cells can be calculated according to the laws of thermodynamics and fluid mechanics, but many processes, such as the impacts of clouds and aerosols, are simply assigned. Climate modellers have been known to describe this activity as an “art”. Most of these processes are poorly understood, and further error is introduced.

Another major problem arises from the non-linear and chaotic nature of the atmosphere. The model is stuffed full of assumptions and averaged guesses. Computer models in other fields typically begin in a static ‘steady state’ in preparation for start-up. However, Chapman notes: “There is never a steady state point in time for the climate, so it’s impossible to validate climate models on initialisation.” Finally, despite all the flaws, climate modellers try to ‘tune’ their results to match historical trends. Chapman gives this adjustment process short shrift: all the uncertainties mean there is no unique match, and there is an “almost infinite” number of ways to match history. The uncharitable might argue that it is a waste of time, but of course suitably scary figures are in demand to push the command-and-control Net Zero agenda.

It is for these reasons that the authors of the World Climate Declaration, which states that there is no climate emergency, said climate models “have many shortcomings and are not remotely plausible as global policy tools”. As Chapman explains, the models use super-computing power and the interrelationships between unmeasurable forces to amplify the small incremental CO2 heating. The model forecasts are then presented as ‘primary evidence’ of a climate crisis.

Climate models are also at the heart of so-called ‘attribution’ attempts to link one-off weather events to long-term changes in the climate. This pseudoscience industry has grown in recent years as global warming has gone off the boil and been largely replaced with attempts to catastrophise every unusual natural weather event or disaster. Again, put simply, the attribution is arrived at by comparing an imaginary climate with no human involvement against another set of guesses that assumes the burning of fossil fuels. These days, everyone from the eco loons holding up traffic on the M25 to the grandest fear-spreaders at COP27 is overdosing on event attribution stories.

In his recent best-selling book Unsettled, Steven Koonin, President Obama’s Under-Secretary for Science, dismissed attribution studies out of hand. As a physical scientist, he wrote, “I’m appalled that such studies are given credence, much less media coverage”. A hallmark of science is that conclusions get tested against observations, and that is virtually impossible for weather attribution studies. “It’s like a spiritual adviser who claims her influence helped you win the lottery – after you’ve already won it,” he added.

Chris Morrison is the Daily Sceptic’s Environment Editor.

https://t.co/XRbPeckiiH
 
Lies, Damn Lies and Climate Models.

Global extinction due to global warming has been predicted more times than the Labour Party has claimed it can cool the planet with a new tax. But where do these predictions come from? If you thought it was just calculated from the simple, well-known relationship between CO2 and solar energy absorption, you would expect to see only about a 0.5°C increase from pre-industrial temperatures as a result of CO2 doubling, due to the logarithmic nature of the relationship.

The runaway predictions of 3–6°C and higher temperature increases depend on coupled feedbacks from many other factors, including water vapour (the most important greenhouse gas), albedo (the proportion of energy reflected from the surface: more or less ice or cloud means more or less reflection) and aerosols, to mention just a few, which theoretically may amplify the small incremental CO2 heating effect.
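
To make the arithmetic concrete, here is a minimal sketch of that logarithmic relationship. The forcing coefficient is the commonly quoted value for CO2; the two sensitivity figures are illustrative assumptions chosen to reproduce the roughly 0.5°C no-feedback and 3°C amplified numbers discussed here, not outputs of any model.

```python
import math

# Standard logarithmic forcing approximation: dF = alpha * ln(C / C0), with
# alpha ~ 5.35 W/m^2 the commonly quoted coefficient for CO2.  The two
# sensitivity values below are illustrative assumptions chosen to reproduce
# the ~0.5 C and ~3 C figures discussed in the text, not model outputs.
ALPHA = 5.35  # W/m^2 per natural-log unit of the CO2 ratio

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Radiative forcing (W/m^2) from a change in CO2 concentration."""
    return ALPHA * math.log(c_ppm / c0_ppm)

def warming(c_ppm: float, sensitivity_c_per_wm2: float) -> float:
    """Equilibrium temperature response (deg C) for a given sensitivity."""
    return sensitivity_c_per_wm2 * co2_forcing(c_ppm)

doubled = 2 * 280.0
# Each doubling adds the same increment because the relationship is
# logarithmic: going from 560 to 1120 ppm adds no more than 280 to 560 did.
print(f"no-feedback estimate:   {warming(doubled, 0.14):.2f} C")
print(f"with assumed feedbacks: {warming(doubled, 0.8):.2f} C")
```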

“The world has less than a decade to change course to avoid irreversible ecological catastrophe, the UN warned today.” — The Guardian, Nov 28, 2007

“It’s tough to make predictions, especially about the future.” — Yogi Berra
______________________

Because of the complexity of these interrelationships, the only way to make predictions is with climate models. But are they fit for purpose? Before I answer that question, let’s have a look at how they work.

How do Climate Models Work?

In order to represent the earth in a computer model, a grid of cells is constructed from the bottom of the ocean to the top of the atmosphere. Within each cell, the component properties, such as temperature, pressure, solids, liquids and vapour, are assumed to be uniform.

The size of the cells varies between models and within models. Ideally, they should be as small as possible, as properties vary continuously in the real world, but the resolution is constrained by computing power. Typically, a cell is around 100 km × 100 km, even though there is considerable atmospheric variation over such distances, requiring all the cell properties to be averaged. This introduces an unavoidable error into the models even before they start to run.

The number of cells varies between models, but the order of magnitude is around 2 million.
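
That figure is easy to sanity-check with a back-of-the-envelope calculation; the 40 vertical levels assumed below are illustrative, not taken from any particular model.

```python
# Back-of-the-envelope check on the cell count.  The ~100 km x 100 km cell
# size comes from the text; the 40 vertical levels are an assumed figure.
EARTH_SURFACE_KM2 = 510e6        # total surface area of the Earth
CELL_AREA_KM2 = 100 * 100        # one horizontal grid cell
VERTICAL_LEVELS = 40             # assumed atmosphere + ocean levels combined

columns = EARTH_SURFACE_KM2 / CELL_AREA_KM2      # ~51,000 horizontal cells
total_cells = columns * VERTICAL_LEVELS          # ~2 million cells in all
print(f"{columns:,.0f} columns x {VERTICAL_LEVELS} levels = {total_cells:,.0f} cells")
```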

Once the grid has been constructed, the component properties of each of these cells must be determined. There aren’t, of course, two million data stations in the atmosphere and ocean. The current number of data points is around 10,000 (ground weather stations, balloons and ocean buoys), plus we’ve had satellite data since 1978, but historically the coverage is poor. As a result, when initialising a climate model starting 150 years ago, there is almost no data available for most of the land surface and oceans, and nothing above the surface or in the ocean depths. This should be understood to be a major concern.

Once initialised, the model goes through a series of timesteps. At each step, for each cell, the properties of the adjacent cells are compared. If one such cell is at a higher pressure, fluid will flow from that cell to the next. If it is at higher temperature, it warms the next cell (whilst cooling itself). This might cause ice to melt, but evaporation has a cooling effect. If ice melts, there is less energy reflected and that causes further heating. Aerosols in the cell can result in heating or cooling and an increase or decrease in precipitation, depending on the type.

Increased precipitation can increase plant growth, as does increased CO2. This will change the albedo (reflectivity) of the surface as well as the humidity. Higher temperatures cause greater evaporation from the oceans, which cools the oceans and increases cloud cover. Climate models can’t resolve clouds due to the low resolution of the grid, and whether clouds increase surface temperature or reduce it depends on the type of cloud.

Of course, this all happens in three dimensions and to every cell, resulting in lots of feedback to be calculated at each timestep. It’s complicated!
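
To make the loop concrete, here is a deliberately tiny caricature of that cell-to-cell exchange, written in Python: heat diffusing between neighbouring cells on a one-dimensional ring. It is nothing like a real general circulation model, but it shows the "compare adjacent cells and move energy each timestep" idea in miniature, with every number invented.

```python
import numpy as np

# A one-dimensional toy of the "compare adjacent cells each timestep" loop:
# heat simply diffuses between neighbouring cells arranged on a ring.  Real
# models do this in three dimensions with pressure, moisture, ice, clouds
# and aerosols as well; every number here is invented.
n_cells = 72
temps = 15 + 10 * np.cos(np.linspace(0, 2 * np.pi, n_cells, endpoint=False))

def timestep(t: np.ndarray, k: float = 0.1) -> np.ndarray:
    """One explicit step: each cell exchanges heat with its two neighbours."""
    left, right = np.roll(t, 1), np.roll(t, -1)
    return t + k * (left + right - 2 * t)

for _ in range(1000):
    temps = timestep(temps)

print(f"temperature spread after 1000 steps: {temps.max() - temps.min():.2f} C")
```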

The timesteps can be as short as half an hour. Remember, the terminator, the point at which day turns into night, travels across the earth’s surface at about 1700 km/hr at the equator, so even half hourly timesteps introduce further error into the calculation. Again, computing power is a constraint.

While the changes in temperatures and pressures between cells are calculated according to the laws of thermodynamics and fluid mechanics, many other changes aren’t calculated; they rely on parameterisation. For example, the albedo forcing varies from icecaps to Amazon jungle to Sahara desert to oceans to cloud cover and all the reflectivity types in between. These properties are simply assigned, and their impacts on other properties are determined from look-up tables, not calculated. Parameterisation is also used for cloud and aerosol impacts on temperature and precipitation. Any important factor that occurs on a subgrid scale, such as storms and ocean eddy currents, must also be parameterised, with an averaged impact used for the whole grid cell. Whilst the impacts of these factors are based on observations, the parameterisation is far more a qualitative than a quantitative process, often described by modelers themselves as an art, and it introduces further error. Direct measurement of these effects, and of how they are coupled to other factors, is extremely difficult.
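
Schematically, a look-up-table parameterisation is nothing more than assigning a property by category rather than calculating it. The albedo values below are illustrative textbook-style numbers, not any model's actual table.

```python
# Schematic of a look-up-table style parameterisation: properties are
# assigned per cell rather than computed from first principles.  The albedo
# values below are illustrative, not taken from any particular model.
ALBEDO_TABLE = {
    "ocean":      0.06,
    "forest":     0.15,
    "desert":     0.40,
    "sea_ice":    0.60,
    "fresh_snow": 0.85,
}

def absorbed_solar(surface_type: str, incoming_w_m2: float) -> float:
    """Shortwave energy absorbed by a cell, given its assigned surface type."""
    albedo = ALBEDO_TABLE[surface_type]
    return incoming_w_m2 * (1.0 - albedo)

# A cell flagged as sea ice absorbs far less than one flagged as open ocean,
# so the assigned category, not a calculation, drives the energy balance.
print(f"sea ice: {absorbed_solar('sea_ice', 340.0):6.1f} W/m^2")
print(f"ocean:   {absorbed_solar('ocean', 340.0):6.1f} W/m^2")
```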

Within the atmosphere in particular, there can be sharp boundary layers that cause the models to crash. These sharp variations have to be smoothed.

Energy transfers between atmosphere and ocean are also problematic. The most energetic heat transfers occur at subgrid scales that must be averaged over much larger areas.

Cloud formation depends on processes at the millimetre scale and is impossible to model directly. Clouds can warm as well as cool, by enough to completely offset the doubled-CO2 effect. Any warming increases evaporation (which cools the surface), resulting in an increase in cloud particles. All these effects must be averaged in the models.

When the grid approximations are combined with every timestep, further errors are introduced — and with half-hour timesteps over 150 years, that’s over 2.6 million timesteps! Unfortunately, these errors aren’t self-correcting, instead accumulating over the model run. But there is a technique that climate modelers use in their attempts to overcome this, which I will describe shortly. [4]
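
The timestep count is simple arithmetic, and a purely hypothetical per-step bias shows why accumulation matters; the bias used below is invented solely to illustrate the point.

```python
# The timestep count quoted above, plus a purely hypothetical per-step bias
# to illustrate how small errors compound.  The bias value is invented and
# is not an estimate of any real model's error.
STEPS_PER_DAY = 48                     # half-hour timesteps
YEARS = 150
n_steps = YEARS * 365.25 * STEPS_PER_DAY
print(f"timesteps over {YEARS} years: {n_steps:,.0f}")   # ~2.63 million

hypothetical_bias = 1e-7               # deg C of systematic drift per step (made up)
print(f"accumulated drift at that bias: {hypothetical_bias * n_steps:.2f} C")
```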

Model Initialisation

After the construction of any computer model, there is an initialisation process whereby the model is checked to see whether the starting values in each of the cells are physically consistent with one another. For example, if you are modelling a bridge to see whether the design will withstand high winds and earthquakes, you make sure that, before you impose any external forces onto the model structure, it meets all the expected stresses and strains of a static structure. After all, if the initial conditions of your model are incorrect, how can you rely on it to predict what will happen when external forces are imposed on the model?

Fortunately, for most computer models, the properties of the components are quite well known and the initial condition is static, the only external force being gravity. If your bridge doesn’t stay up on initialisation, there is something seriously wrong with either your model or design!

With climate models, we have two problems with initialisation. Firstly, as previously mentioned, we have very little data for time zero, whenever we choose that to be. Secondly, at time zero, the model is not in a static steady state, unlike pretty much every other computer model that has been developed. At time zero, there could be a blizzard in Siberia, a typhoon in Japan, a nice day in the UK, monsoons in Mumbai and a heatwave in southern Australia, not to mention the odd volcanic explosion, which could all be gone in a day or so.

There is never a steady state point in time for the climate, so it’s impossible to validate climate models on initialisation.

The best climate modelers can hope for is that their bright and shiny latest model doesn’t crash in the first few timesteps.

The climate system is chaotic, which essentially means any model will be a poor predictor of the future – you can’t even make a model of a lottery-ball machine (which is a comparatively much simpler and smaller interacting system) and use it to predict the outcome of the next draw.

So, if climate models are populated with little more than educated guesses instead of actual observational data at time zero, and errors accumulate with every timestep, how do climate modelers address this problem?

History Matching

If the system that’s being computer modelled has been in operation for some time, you can use that data to tune the model and then start the forecast before that period finishes to see how well it matches before making predictions. Unlike other computer modelers, climate modelers call this ‘hindcasting’ because it doesn’t sound like they are fudging the model to fit the data.

Even though climate model construction has many flaws, such as large grid sizes, patchy data of dubious quality in the early years, and poorly understood physical phenomena driving the climate that have had to be parameterised, the theory is that you can tune the model during hindcasting, within the parameter uncertainties, to overcome all these deficiencies.

But, while it’s true that you can tune the model to get a reasonable match with at least some components of history, the match isn’t unique. When computer models were first being used last century, the famous mathematician John von Neumann said:

with four parameters I can fit an elephant, with five I can make him wiggle his trunk

In climate models there are hundreds of parameters that can be tuned to match history. What this means is there is an almost infinite number of ways to achieve a match. Yes, many of these are non-physical and are discarded, but there is no unique solution as the uncertainty on many of the parameters is large and as long as you tune within the uncertainty limits, innumerable matches can still be found.
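
The non-uniqueness is easy to demonstrate with a contrived example. In the toy "model" below, only the sum of two tunable feedback parameters affects the output, so wildly different parameter pairs match a synthetic record equally well. Real model tuning is far less transparent, but the same trade-offs between uncertain parameters are what make innumerable matches possible.

```python
import numpy as np

# Contrived illustration of non-unique tuning: the toy "model" below responds
# only to the SUM of two feedback parameters, so very different pairs fit the
# (synthetic) record equally well.  Everything here is invented and bears no
# relation to real GCM tuning; it only makes the degeneracy argument concrete.
rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
observed = 0.008 * (years - 1900) + rng.normal(0, 0.05, years.size)  # fake record

def toy_model(yrs, feedback_a, feedback_b):
    forcing = 0.004 * (yrs - 1900)              # assumed forcing ramp
    return (feedback_a + feedback_b) * forcing  # only the sum matters

for a, b in [(0.5, 1.5), (1.9, 0.1)]:           # two very different "tunings"
    resid = observed - toy_model(years, a, b)
    print(f"a={a}, b={b}: RMS misfit = {np.sqrt(np.mean(resid ** 2)):.3f} C")
```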

An additional flaw in the history-matching process is the length of some of the natural cycles. For example, ocean circulation takes place over hundreds of years, and we don’t even have 100 years of data with which to match it.

So, can the history-matching constrain the accumulating errors that inevitably occur with each model’s timestep?

Forecasting

Consider a shotgun. When the trigger is pulled, the pellets from the cartridge travel down the barrel, but there is also lateral movement of the pellets. The purpose of the shotgun barrel is to dampen the lateral movements and narrow the spread when the pellets leave the barrel. It’s well known that shotguns have limited accuracy over long distances, with a shot pattern that grows with distance from the muzzle.

The history-match period for a climate model is like the barrel of the shotgun. So what happens when the model moves from matching to forecasting mode?[5]

Like the shotgun pellets leaving the barrel, numerical dispersion takes over in the forecasting phase. Each of the 73 models in Figure 5 has been history-matched, but outside the constraints of the matching period, they quickly diverge.
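
The fan-out can be mimicked with any chaotic system. In the sketch below, 73 copies of a simple chaotic map (a nod to the 73 models above, nothing more) are started from almost identical states: they track one another closely for a while and then spread across the whole available range. It is a caricature of divergence, not a climate simulation.

```python
import numpy as np

# A caricature of the fan-out: near-identical copies of a simple chaotic map
# stay tight for a while, then spread across the whole available range.
def logistic_step(x: np.ndarray, r: float = 3.9) -> np.ndarray:
    return r * x * (1.0 - x)

ensemble = 0.5 + np.linspace(-1e-6, 1e-6, 73)   # 73 near-identical members
for step in range(1, 61):
    ensemble = logistic_step(ensemble)
    if step % 20 == 0:
        print(f"step {step:2d}: spread = {ensemble.max() - ensemble.min():.4f}")
```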

At most one of these models can be correct, but more likely none of them are. If this were a real scientific process, the hottest two-thirds of the models would be rejected by the Intergovernmental Panel on Climate Change (IPCC), and further study would focus on the models closest to the observations. But they don’t do that, for a number of reasons.

Firstly, if they rejected most of the models, there would be outrage amongst the climate science community, especially from the rejected teams, due to their subsequent loss of funding. More importantly, the infamous, much vaunted and entirely spurious ‘97 per cent consensus’ would evaporate.

Secondly, once the hottest models were rejected, the forecast for 2100 would be about a 1.5°C increase (due predominantly to natural warming), there would be no panic, the climateers’ rivers of gold would cease to flow and the gravy train would derail. Climate modellers have mortgages too, you know.

So how should the IPCC reconcile this wide range of forecasts?

Imagine that you wanted to know the value of bitcoin 10 years hence so you can make an investment decision today. You could consult an economist, but we all know how useless their predictions are. So instead, you consult an astrologer, but you worry whether you should bet all your money on a single prediction. Just to be safe, you consult 100 astrologers, but they give you a very wide range of predictions. Well, what should you do? You could do what the IPCC does, and just average all the predictions.

Simply put, you can’t improve the accuracy of garbage by averaging it.

An Alternative Approach

Climate modelers claim that a history match isn’t possible without including CO2 forcing. This may be true using the approach described here, with its many approximations, and when tuning the model to a single benchmark (surface temperature) while ignoring deviations from others (such as tropospheric temperature). But analytic (as opposed to numerical) models have achieved matches without CO2 forcing. These are models based purely on historic climate cycles, which identify the harmonics using a mathematical technique of signal analysis that deconstructs long- and short-term natural cycles of different periods and amplitudes, without considering changes in CO2 concentration.

In Figure 6, a comparison is made between the IPCC predictions and a prediction from just one analytic harmonic model that doesn’t depend on CO2 warming. A match to history can be achieved through harmonic analysis and provides a much more conservative prediction that forecasts the current pause in temperature increase, unlike the IPCC models. The purpose of this example isn’t to claim that this model is more accurate — it is, after all, just another model — but to dispel the myth that there is no way history can be explained without anthropogenic CO2 forcing and to show that it’s possible to explain the changes in temperature with natural variation as the predominant driver.[6]
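
For the technically curious, this kind of "signal analysis" amounts to ordinary least-squares projection onto sinusoids of fixed period. The sketch below does exactly that on a synthetic series; the roughly 60-year and 20-year cycles are common choices in such analyses but are assumptions here, and the data is fabricated purely to show the mechanics rather than to reproduce the model in Figure 6.

```python
import numpy as np

# Harmonic-style fitting: least-squares projection onto sinusoids of fixed
# period.  The ~60-year and ~20-year periods are assumed, and the series is
# synthetic; the point is the mechanics, not any real temperature record.
rng = np.random.default_rng(1)
years = np.arange(1880, 2021)
series = (0.2 * np.sin(2 * np.pi * years / 60.0)
          + 0.05 * np.sin(2 * np.pi * years / 20.0)
          + rng.normal(0, 0.05, years.size))

columns = [np.ones_like(years, dtype=float)]
for period in (60.0, 20.0):
    columns += [np.sin(2 * np.pi * years / period),
                np.cos(2 * np.pi * years / period)]
design = np.column_stack(columns)

coefs, *_ = np.linalg.lstsq(design, series, rcond=None)
fitted = design @ coefs
print(f"residual RMS after harmonic fit: {np.sqrt(np.mean((series - fitted) ** 2)):.3f} C")
```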

In summary:

♦ Climate models can’t be validated on initialisation due to lack of data and a chaotic initial state.

♦ Model resolutions are too low to represent many climate factors.

♦ Many of the forcing factors are parameterised as they can’t be calculated by the models.

♦ Uncertainties in the parameterisation process mean that there is no unique solution to the history matching.

♦ Numerical dispersion beyond the history matching phase results in a large divergence in the models.

♦ The IPCC refuses to discard models that don’t match the observed data in the prediction phase – which is almost all of them.

The question now is, do you have the confidence to invest trillions of dollars and reduce standards of living for millions of people in a bid to stop global warming as predicted by climate modellers? Or should we just adapt to the natural changes as we always have?

Greg Chapman (PhD Physics) is a former (non-climate) computer modeler.

https://quadrant.org.au/opinion/doomed-planet/2022/10/garbage-in-climate-science-out/
 


This deserves to be bumped!
 

This hypothesis around the ‘saturation’ of greenhouse gases is contentious, but it does provide a more credible explanation of the relationship between CO2 and temperatures observed throughout the past. Levels of CO2 have been 10-15 times higher in some geological periods, and the Earth has not turned into a fireball.

https://t.co/XRbPeckiiH

Nicely explained, Cancel2.

Sadly white lib nazi scum are still unable to comprehend it.
 
Lies, Damn Lies and Climate Models.

Once the grid has been constructed, the component properties of each of these cells must be determined. There aren’t, of course, two million data stations in the atmosphere and ocean. The current number of data points is around 10,000 (ground weather stations, balloons and ocean buoys), plus we’ve had satellite data since 1978, but historically the coverage is poor. As a result, when initialising a climate model starting 150 years ago, there is almost no data available for most of the land surface and oceans, and nothing above the surface or in the ocean depths. This should be understood to be a major concern.

https://quadrant.org.au/opinion/doomed-planet/2022/10/garbage-in-climate-science-out/

Absolutely.
 
Because models are not like a lottery machine. It's an idiotic comparison.

Trust you to focus on that rather than the sheer excellence of the rest of that article.

[Attached image: wheel.jpg]
Trust you to focus on that rather than the sheer excellence of the rest of that article.

There is no need to read the rest because the first sentence invalidates the whole article.

"If you cannot make a model to predict the outcome of the next draw from a lottery ball machine, you are unable to make a model to predict the future of the climate, suggests former computer modeller Greg Chapman, in a recent essay in Quadrant."

1. Lottery numbers are 100% random.
2. What does a computer modeller know?
3. If it is useless, how are the meteorologists able to make predictions?
 
There is no need to read the rest because the first sentence invalidates the whole article.

"If you cannot make a model to predict the outcome of the next draw from a lottery ball machine, you are unable to make a model to predict the future of the climate, suggests former computer modeller Greg Chapman, in a recent essay in Quadrant."

1. Lottery numbers are 100% random.
2. What does a computer modeller know?
3. If it is useless, how are the meteorologists able to make predictions?

You're confusing short-term weather prediction models with long-term CMIP6 climate models, which are a totally different kettle of fish. In fact, weather predictions tend to work by comparing the present with past weather patterns. I really shouldn't have to tell you this, but you do have a habit of fixating on the inconsequential.
 
Every asshole has an opinion! Thank you for sharing yours!

You've turned into a clichebot; time to order up some new ones. I haven't forgotten how you declared, with all the authority you could muster, that the Kansas oil spill would take months or even years to clean up. In fact it took only three weeks before the pipeline was returned to service; I still can't stop laughing about that.
 
You're confusing short-term weather prediction models with long-term CMIP6 climate models, which are a totally different kettle of fish. In fact, weather predictions tend to work by comparing the present with past weather patterns. I really shouldn't have to tell you this, but you do have a habit of fixating on the inconsequential.

There are plenty of data sets. It is clear you have no clue how computer modelling works.
 