Climate Change, Chapter 9


Chapter 9 – Computer Model Predictions

By Michael Belsick

Everything previously mentioned leads us to a discussion of computers, computer programming, and using computers to make climate predictions.  To start, I must say that many people give computers far more credit than they deserve.  That is mainly because most people don’t understand computers and how they do what they do for us.

Computers are not “all knowing”.  They only know what they have been programmed to know.  As such, Artificial Intelligence is still in its infancy.  While computers have beaten chess masters, it could be said that the computers “cheated”.  By that I mean that for every move the chess master makes, the computer can almost instantaneously calculate the master’s next logical move by running thousands of simulations in seconds.  If you remember the 1983 movie “WarGames” with Matthew Broderick, the teenager saves the world by having the computer that controls all US nuclear missiles play tic-tac-toe against itself millions of times, eventually “learning” that tic-tac-toe is a game that can never be won.  The point I am making is that the computer’s advantage over man is not yet intelligence.  It is the ability to perform repetitive calculations over and over again in an instant.
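
To make the point concrete, here is a minimal sketch of that kind of brute-force repetition: a program that “solves” tic-tac-toe not by understanding the game, but by mechanically enumerating every possible continuation. The board encoding and function names are my own illustration, not from any particular chess or film program.

```python
# Brute-force search of the full tic-tac-toe game tree.  The computer
# "learns" nothing; it simply tries every continuation and reports the
# outcome under perfect play by both sides.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, square in enumerate(board) if square == ' ']
    if not moves:
        return 0  # board full, nobody won: a draw
    results = [value(board[:i] + player + board[i + 1:],
                     'O' if player == 'X' else 'X') for i in moves]
    return max(results) if player == 'X' else min(results)

# Searching every possible game shows perfect play always ends in a draw.
print(value(' ' * 9, 'X'))  # 0: neither side can force a win
```

The result is not insight; it is exhaustive repetition, which is exactly what computers are good at.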

When I type a word that is misspelled, the computer instantly makes suggestions based upon how I misspelled the word.  The computer does not really know what word I meant to use.  How many times has the computer accepted “form” instead of replacing it with “from”?  When anyone types a question for Google to answer, the computer identifies key words in that question and looks for the closest reference that human programmers entered in its database.  In a way, a computer is like a librarian.  When you walk into a library wanting a book, but you are not sure of the title or author, the librarian will search the card catalog (old school) for something close to what you described.  Computers do the exact same thing, making the closest suggestions that match what programmers entered.  For another example, if you typed a question, such as Hank Aaron’s batting average, Google will reply instantly because human programmers entered that data.  In fact, not only do you get the answer to what you asked, but Google also offers other options that you may want.  However, if you meant the Hank Aaron who is your barber but was a high school varsity baseball player, Google will not provide the information that you seek.
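
This “closest suggestion” behavior can be sketched in a few lines using Python’s standard library. The tiny word list here stands in for the programmer-built database; the computer only ranks candidates by string similarity to what was typed, and has no idea which word was actually meant.

```python
# A toy spell-suggestion routine: rank dictionary words by similarity
# to the (misspelled) input.  The word list is a stand-in for the
# database that human programmers would have entered.
import difflib

dictionary = ["from", "form", "forum", "farm", "firm"]

typed = "fomr"
suggestions = difflib.get_close_matches(typed, dictionary, n=3)
print(suggestions)  # both "form" and "from" rank as close matches
```

Note that “form” and “from” score almost identically here, which is exactly why the computer so often guesses the wrong one.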

One disadvantage of computers is that the computer’s response is based upon the knowledge and bias of the programmer.  During my research for this climate change series of articles, I used Google periodically.  Generally, the first responses supported the notion of man-made climate change.  The answers I was interested in appeared after page 6 or so.  In some cases, Google circled back to the responses on page 1.  The computer itself had no bias, but the programmer did.

When man-made climate change believers tell you that computers are predicting a climate change catastrophe in 12 years, be very skeptical, because those computers base that prediction only on what programmers input.  In college, when I learned and used programming, there was a “golden rule”: “garbage in equals garbage out”.  Just to provide a whimsical example, I could use a computer to prove that all Democrats are lying.  I would have a computer monitor the mouth motions of a Democratic politician.  If I programmed the computer so that every time a mouth opened, a lie was counted, then the computer would calculate that the Democratic politician lied repeatedly.  So, any computer prediction is only as valid as the programming behind it.
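
The whimsical example above fits in a few lines of code, and that is the point: the “result” is dictated entirely by the programmer’s built-in assumption, not by anything in the data. All the numbers and names here are invented for the illustration.

```python
# "Garbage in equals garbage out": the conclusion is baked into the
# program, not discovered by it.  The event count is a made-up input.

mouth_open_events = 57  # suppose a sensor counted 57 mouth openings

def count_lies(events, assume_every_opening_is_a_lie=True):
    # The biased assumption IS the program; the computer just counts.
    return events if assume_every_opening_is_a_lie else 0

print(count_lies(mouth_open_events))  # 57 "lies" -- purely by construction
```

Change the assumption and the same data yields zero lies; the computer never weighed any evidence either way.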

Climate prediction programs are extremely complex.  Although computers can calculate many different scenarios quickly, there are still practical limits on run time and model size.  When mechanical engineers who specialize in stress analysis want to analyze what happens when a mechanical part is subjected to external mechanical loads, they build a computer model of the part.  Engineers build a 3-dimensional model of that part with perhaps a thousand or more “node” points.  Each node can be pictured as a center block with sticks connecting it to other center blocks, as in a 3-dimensional Tinkertoy.  Each node is linked mechanically to all adjacent nodes (up, down, and to each side).

Once the model is built, it is programmed with how these nodes will interact with each other, based on what the programmer thinks will occur.  Another programming complexity is deciding which data and algorithms should be used.  It is not possible to provide every possible interaction, so programmers make assumptions and simplifications.  Most of the time programmers try to make valid assumptions and not show bias, but as knowledge is gained, assumptions may need to be changed.  At one time all of mankind “knew” that the sun revolved around the Earth.  Algorithms are often simplified because the full calculation would either take too long to compute or we haven’t figured out exactly what happens.
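
The node structure described above can be sketched simply: nodes on a 3-D lattice, each linked to its face neighbors. This is only an illustration of the bookkeeping, with arbitrary dimensions, not any real stress-analysis code.

```python
# A minimal finite-element-style node grid: each node on an
# nx * ny * nz lattice is linked to its (up to six) face neighbors,
# like a 3-D Tinkertoy of blocks and sticks.

def build_mesh(nx, ny, nz):
    """Return {node: [neighbor nodes]} for the lattice."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    neighbors = {}
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                links = []
                for dx, dy, dz in offsets:
                    px, py, pz = x + dx, y + dy, z + dz
                    if 0 <= px < nx and 0 <= py < ny and 0 <= pz < nz:
                        links.append((px, py, pz))
                neighbors[(x, y, z)] = links
    return neighbors

mesh = build_mesh(10, 10, 10)     # a thousand nodes, as in the text
print(len(mesh))                  # 1000
print(len(mesh[(0, 0, 0)]))       # 3: a corner node has only 3 links
print(len(mesh[(5, 5, 5)]))       # 6: an interior node has all 6 links
```

Everything interesting then happens in how the links are programmed to behave, which is exactly where the programmer’s assumptions enter.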

Take, for example, the landing gear of a jumbo jet in a simulated “hard” landing.  An impact load comparable to a “hard” landing is applied to the model.  That impact force on the wheel is transferred to the landing gear, travels to each node within the landing gear model, and finally exits the landing gear at the attachment point to the wing or fuselage.  If the shared impact load at any of these node points exceeds the mechanical properties of the material, then the part fails.  Design engineers, like me, would then need to redesign the landing gear to use a stronger metal alloy, or to add more metal to the failed area so that more node points share the impact load.  The computer model is remade, and the program is run again until the model does not fail.  If the interaction of the nodes is incorrectly assumed, or the wrong material properties (e.g., aluminum) are input into the model, the “behavior” of the modeled landing gear will not represent the actual landing gear.
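
The failure check at the heart of that loop is conceptually simple. Here is a highly simplified sketch of it; the load, node area, and allowable-stress numbers are invented round figures, not real landing-gear data.

```python
# A toy version of the stress check: an impact load shared across the
# nodes of a cross-section, compared against the material's allowable
# stress.  All numbers are illustrative placeholders.

def section_fails(load_lbf, node_area_in2, n_nodes, allowable_psi):
    """True if the shared load over-stresses the cross-section."""
    stress = load_lbf / (node_area_in2 * n_nodes)  # psi across all nodes
    return stress > allowable_psi

load = 500_000.0           # lbf, assumed hard-landing impact load
area = 0.01                # in^2 of cross-section per node
allowable = 200_000        # psi, roughly a high-strength alloy (assumed)

# With 200 nodes sharing the load: 500000 / (0.01 * 200) = 250,000 psi
print(section_fails(load, area, n_nodes=200, allowable_psi=allowable))  # True: fails
# Add material (more nodes sharing the load) and re-run:
print(section_fails(load, area, n_nodes=300, allowable_psi=allowable))  # False: passes
```

This mirrors the redesign loop in the text: if the check fails, add material or a stronger alloy, rebuild the model, and run again.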

So, how is anything that I just said relevant to a climate change computer model?  Imagine the number of nodes and types of interactions required to represent the entire Earth.  There are many unknowns about climate parameters and interactions, so assumptions must be made and algorithms simplified.

My landing gear was a relatively small part, maybe 4 feet long, with a thousand or more node points, and its analysis might take an hour to run.  How would you analyze the entire Earth?  Your options would be to wait a couple of years while the computer runs the analysis, or to drastically increase the distance between node points.  That is exactly what climate modelers have done.  Where my landing gear node may have been a cube 0.1 inch on a side, a climate model node likely has to be a cube 10 to 100 miles on a side.  So, to build a model that current computers can run in a reasonable amount of time, climate programmers must make the nodes so huge that a significant thunderstorm could pass through a single node without registering any weather change at all.  That is the failure of climate computer models: a model that coarse is incapable of registering minute weather or temperature changes.  There is a significant limitation to any computer program trying to analyze something as large as the Earth.
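
Some back-of-the-envelope arithmetic, using the sizes quoted above, shows why the node spacing must be so coarse. The Earth surface area is approximate, and this counts only a single horizontal layer of cells, ignoring the vertical dimension entirely.

```python
# Rough cell counts for tiling Earth's surface at different node
# spacings.  Surface area is approximate; this is arithmetic, not a
# climate model.

EARTH_SURFACE_MI2 = 197_000_000   # ~ Earth's surface area in square miles
INCHES_PER_MILE = 63_360

def surface_cells(cell_miles):
    """Horizontal grid cells needed to tile Earth's surface."""
    return EARTH_SURFACE_MI2 / cell_miles ** 2

print(f"{surface_cells(100):,.0f} cells at 100-mile spacing")   # 19,700
print(f"{surface_cells(10):,.0f} cells at 10-mile spacing")     # 1,970,000
# At landing-gear resolution (0.1-inch nodes), the count explodes:
print(f"{surface_cells(0.1 / INCHES_PER_MILE):.1e} cells at 0.1-inch spacing")
```

Even at 100-mile spacing the surface alone needs tens of thousands of cells, each with vertical layers and many interacting variables; at landing-gear resolution the count is on the order of 10^19 cells, which is why no computer can run the Earth at fine resolution.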

In Chapter 6, I mentioned that clouds and their aerosol components play a major role in climate study.  Yet cloud behavior is usually ignored in climate models, which focus only on the CO2 concentration because that is easier.

For a final comment on computer modeling: the standard practice in all programming is to run the model on a situation whose outcome is known, to see if the model properly predicts it.  This is called validating the model.  One writes the program and inputs the various values for the computer to calculate the answers.  Then you compare the answers provided by the computer to the known results.  If the predicted values match the known values, your model has been validated.  This is an important “proof” that the model will accurately predict outcomes.  To date, no climate model has ever been successfully validated.  This means that these computer programs cannot even predict values that are already known.  If that is the case, then how can the results of these climate predictions be considered definitive?  As I previously said, “garbage in equals garbage out”.
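
The validation step described above can be sketched as a simple comparison loop. The “model”, the historical record, and the tolerance here are all invented placeholders; the point is only the shape of the procedure: predict known cases, compare, and pass or fail.

```python
# A sketch of model validation: run the model on inputs whose outcomes
# are already known, and check every prediction against the record.

def validate(model, known_inputs, known_outputs, tolerance):
    """True only if every prediction matches its known result."""
    for x, expected in zip(known_inputs, known_outputs):
        if abs(model(x) - expected) > tolerance:
            return False
    return True

# A toy "model" and a toy historical record (all values made up):
toy_model = lambda year: 14.0 + 0.01 * (year - 1950)   # deg C
history = {1950: 14.0, 1980: 14.3, 2000: 14.5}

ok = validate(toy_model, history.keys(), history.values(), tolerance=0.1)
print("validated" if ok else "not validated")  # validated
```

A single prediction outside tolerance is enough to fail the whole validation, which is what makes a passed validation meaningful.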

In all the previous chapters, we have learned that many different natural forces can affect the average Earth temperature.  Orbital mechanics can greatly affect how much solar energy reaches Earth by changing the distance that the energy has to travel.  While this can create significant differences, these events normally unfold over thousands of years.  The exception is nutation, the brief rocking and swaying of the tilt angle that happens about every 18.6 years.  We are safe for now.  On a much smaller time scale, we learned that El Niño can affect the weather over just a few years.  However, climate scientists seemingly want to focus only on the amount of carbon dioxide in the atmosphere.  There are many other natural factors with greater potential for change than man-made carbon dioxide.  If climate predictions are based only upon atmospheric CO2, which is increasing, and the assumption that increased CO2 equates to increased temperature, then all these computer programs will certainly predict warmer Earth temperatures.  Additionally, if the programmers purposely omitted some information, the predicted results would be biased.  There really is another side to this debate.  In fact, many climate scientists do not believe that a small change in CO2 has the power to cause global warming.

The most important question to address now is why these man-made climate change scientists are so convinced, and so vocal, that any climate change is due to man-made CO2.  With that, I would like to end this chapter, and go into the next, with a quote from the Climatic Research Unit (CRU), which will be referenced again in the next chapter.  The quote below also summarizes what this chapter taught us.

“GCMs (Global Climate Models) are complex, three-dimensional computer-based models of the atmospheric circulation.  Uncertainties in our understanding of climate processes, the natural variability of the climate, and limitations of the GCMs mean that their results are not definite predictions of climate.”
