Thursday, February 19, 2009

Global Warming and Computer Models

I have no expertise in the science of climatology, so I really can't do more than regurgitate my favorite facts and figures. There is already enough regurgitation on this topic.

I do have over a decade of experience with scientific computer models. I'll share some of my observations and let you draw your own conclusions. A key to reading this is that if you come across technical mumbo-jumbo, just read it as "bla bla bla". I'm only including the technical descriptions to provide some legitimacy.

What is a scientific computer model?


A scientific computer model is different from Hollywood special effects or video games in that it needs to do more than just look cool. It needs to take inputs and provide outputs that can be verified by real-world measurements.

Validation


My first experience with modeling was at the Center for Laser Studies at the University of Southern California. We were trying to develop optical switches and were experimenting with semiconductor quantum well structures (remember, you can read this as "bla bla bla"). I created a software model of the quantum wells using Airy functions. Hughes Malibu Research Labs fabricated thin films of alternating layers of GaAs/AlGaAs to create the quantum wells. We used one laser for the signal and another as the switch. We took data and compared the results. I felt good about the agreement between theory and measurement, even though it wasn't perfect (since the model couldn't possibly include all variables).
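
For flavor, here is the kind of calculation that model performed. This is a minimal Python sketch, not the original code: the field strength and effective mass below are textbook-style values I picked purely for illustration. It uses the zeros of the Airy function to get the bound-state energies of an electron in an idealized triangular quantum well.

```python
# A minimal sketch (not the original USC code) of an Airy-function quantum
# well calculation: bound-state energies of an electron in a triangular
# potential V(x) = e*F*x with a hard wall at x = 0. The field strength F and
# effective mass are illustrative, assumed values.
import numpy as np
from scipy.special import ai_zeros

hbar = 1.054571817e-34      # J*s
m_e = 9.1093837015e-31      # kg
m_eff = 0.067 * m_e         # GaAs electron effective mass (textbook value)
e = 1.602176634e-19         # C
F = 1e7                     # V/m, assumed field across the well

# Energy levels: E_n = -a_n * (hbar^2 * (e*F)^2 / (2*m))^(1/3),
# where a_n are the (negative) zeros of the Airy function Ai.
a_n, _, _, _ = ai_zeros(3)  # first three zeros of Ai
E = -a_n * (hbar**2 * (e * F)**2 / (2 * m_eff))**(1.0 / 3.0)

for n, energy in enumerate(E, start=1):
    print(f"E_{n} = {energy / e * 1000:.1f} meV")
```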

Professional Courtesy


Many professions apply a different standard when judging their own. Police don't give other police officers tickets. I worked in a physics lab in graduate school, and my adviser said that all scientists should support the Superconducting Super Collider (even if you think it's a waste of money, as I do), since "money for science is hard to come by and we should support any money spent on science." How many scientists support the theory of Global Warming? More importantly, why?

Management Substitutes Money for Brains


There's a joke about the junior scientist assigned to use a computer model. When the lead scientist asks him how the model is, he calls it:
"A bunch of crap!!!"
The lead scientist realizes that he can't say "crap" to his boss, so he calls it
"Cow manure."
His boss realizes he can't say "manure" to the director, so he calls it
"Fertilizer."
The director can't say "fertilizer" to the board of directors, so he says,
"The new computer model promotes growth."


I started my first real job in the Infrared/Visual/Acoustics group, where we were modeling the infrared or "heat" signature of the Stealth Bomber. My first task was to learn and use the CAVITY computer model. The tailpipe on the airplane was designed so that you couldn't see the hot turbine engine; however, some of the radiating heat would reflect out. We weren't confident in the model, so we tried geometries with 500, 1000, and 1500 polygons and got different results every time. Our management said, "Get a bigger computer. Try 5,000 and 10,000 polygons to make it work." In other words, throw money at it. My mentor, with his PhD in Physics, had a better idea: a box made of 5 polygons, small enough that we could perform hand calculations and compare the results. I did this and discovered a bug in CAVITY. Basically, they had the rows and columns mixed up in a matrix operation. When we fixed it, the model worked great. We ran it with 500, 1000, and 1500 polygons, and they all basically agreed.
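
To show why the 5-polygon box was such a good idea, here is a toy Python sketch of that class of bug. This is not the actual CAVITY code, and the 3-surface "exchange matrix" is hypothetical. With a geometry small enough to check by hand, a rows-for-columns mix-up is obvious; buried in a 1500-polygon run, it just looks like noise.

```python
# A toy illustration (not the actual CAVITY code) of a row/column mix-up in a
# matrix operation, caught with a hand-checkable geometry.
import numpy as np

# Hypothetical 3-surface exchange matrix: K[i, j] = fraction of surface j's
# emission that reaches surface i. Deliberately non-symmetric.
K = np.array([[0.0, 0.3, 0.1],
              [0.5, 0.0, 0.2],
              [0.2, 0.4, 0.0]])
emission = np.array([100.0, 0.0, 0.0])  # only surface 1 emits

correct = K @ emission        # right: rows index the receiving surface
buggy = K.T @ emission        # wrong: rows and columns swapped

# Hand calculation for this tiny "box": surface 2 should receive
# 0.5 * 100 = 50, and surface 3 should receive 0.2 * 100 = 20.
print("correct:", correct)    # [ 0. 50. 20.]
print("buggy:  ", buggy)      # [ 0. 30. 10.] -- plausible-looking but wrong
```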

PhDs are always right, even when they're wrong


My next assignment was to develop the Infrared Workstation to model and analyze flight test data. Part of our contract was to validate our models against flight test data so that they could be used for difficult-to-test points (e.g., Subarctic Winter Night). Many methods were used to collect carefully calibrated data. I determined the conditions of the flight test and used our models to generate synthetic images to compare to the real data. For the most part there was good agreement. The problems were at grazing angles (think of putting your eye against a wall or table: as you look along it, the surface appears more reflective than when you look straight at it). The reflectance provided by our lab had been measured at the wrong angles. I pointed this out to the PhD lab director (one of the smartest people I have ever met). He insisted that his results were 99.99% accurate. I replied that they were wrong for what we were modeling. He replied, "Of course, but they are still 99.99% right." This PhD had to be right.
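
The grazing-angle behavior is just the Fresnel equations at work. Here is a quick Python sketch (the refractive index of 1.5 is an assumed, illustrative value) showing reflectance climbing toward 100% as the viewing angle approaches grazing:

```python
# Unpolarized Fresnel reflectance of a dielectric surface versus incidence
# angle. The index n = 1.5 is assumed here purely for illustration.
import numpy as np

def fresnel_unpolarized(theta_deg, n1=1.0, n2=1.5):
    """Average of s- and p-polarized Fresnel reflectance."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.clip(n1 * np.sin(ti) / n2, -1.0, 1.0))  # Snell's law
    rs = ((n1 * np.cos(ti) - n2 * np.cos(tt)) /
          (n1 * np.cos(ti) + n2 * np.cos(tt))) ** 2
    rp = ((n1 * np.cos(tt) - n2 * np.cos(ti)) /
          (n1 * np.cos(tt) + n2 * np.cos(ti))) ** 2
    return 0.5 * (rs + rp)

for angle in (0, 45, 75, 85, 89):
    print(f"{angle:2d} deg: R = {fresnel_unpolarized(angle):.3f}")
# Near normal incidence R is about 0.04; near grazing it approaches 1.0.
# Reflectance measured only near normal incidence tells you little here.
```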


Garbage In, Garbage Out


I traveled all over the U.S. attending infrared modeling conferences: Austin, Texas; Colorado Springs and Boulder, Colorado; Tennessee; Rome, New York; Massachusetts; Dayton, Ohio; and so on. State-of-the-art modeling techniques were presented at these conferences. At one meeting, an unconventional presentation was given: the Air Force wanted to see if non-experts could run the models and get the same results. The results were discouraging. Two people with the same training got different results.

Pick Your Inputs, Pick Your Results


Later, I was assigned to a new program. Each program had a specification: numbers you had to meet under certain conditions. For example, "The airplane shall have an MWIR signature of less than 45 Watts from a front-looking aspect at a Mid-Latitude Summer Day standard atmospheric condition."

We struggled with the design for weeks to get into spec, with no luck. Then the Air Force changed the spec to use the "1976 Standard Atmosphere" instead of "Mid-Latitude Summer Day." We reran the model and everything was in spec!!! Good news for us, unless they decided to arbitrarily change the spec back. You can really influence the outputs through your assumptions about the inputs. Unfortunately, we didn't have control of these inputs.
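
Here is a toy Python sketch of the mechanism. Every number in it is made up: the transmittance values are invented purely to reproduce the flip we experienced, not real atmospheric data.

```python
# A toy sketch, with entirely made-up numbers, of how swapping an input
# assumption flips a pass/fail result. The "atmospheres" below are invented
# band-averaged transmittances, not real MODTRAN-style profiles.
SPEC_LIMIT_W = 45.0            # hypothetical MWIR spec limit, watts
source_intensity = 80.0        # hypothetical signature at the source, watts

atmospheres = {
    "Mid-Latitude Summer Day": 0.60,   # invented value
    "1976 Standard Atmosphere": 0.52,  # invented value
}

for name, transmittance in atmospheres.items():
    apparent = source_intensity * transmittance
    verdict = "PASS" if apparent < SPEC_LIMIT_W else "FAIL"
    print(f"{name:26s}: {apparent:5.1f} W -> {verdict}")
# Same airplane, same model, different input assumption: FAIL becomes PASS.
```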

Modeling Instability


My mentor told me about the famous "Butterfly Effect." Basically, someone was running a long simulation and something happened that made them want to repeat the results. They looked back through the reams of printout and used the numbers from an earlier time to restart the model. At first, both runs agreed. Then they started deviating drastically. After scratching their heads for a while, they figured out that the numbers on the printout were slightly different from what was in the computer. They surmised that this was the equivalent of a butterfly flapping its wings in the U.S. resulting in a monsoon in China months later.
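
The "someone" in that story is usually identified as meteorologist Edward Lorenz, and his three-variable convection model is easy to rerun. Here is a short Python sketch: two runs whose starting values differ in the fourth decimal place, as if one had been re-typed from a rounded printout.

```python
# Lorenz's convection model, run twice from nearly identical starting points.
# The runs agree at first, then deviate drastically.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 25.0, 2501)
run1 = solve_ivp(lorenz, (0.0, 25.0), [1.000127, 1.0, 1.0],
                 t_eval=t_eval, rtol=1e-10, atol=1e-12)
run2 = solve_ivp(lorenz, (0.0, 25.0), [1.000, 1.0, 1.0],  # "printout" rounding
                 t_eval=t_eval, rtol=1e-10, atol=1e-12)

for t_check in (1.0, 5.0, 10.0, 20.0):
    i = int(t_check * 100)  # t_eval is spaced 0.01 apart
    print(f"t = {t_check:4.1f}: |x1 - x2| = {abs(run1.y[0, i] - run2.y[0, i]):.6f}")
# A difference in the fourth decimal place of one input grows until the two
# "identical" runs bear no resemblance to each other.
```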

Most recently, I worked with flight control engineers who were designing a new simulation. Some of the modeled flight paths were physically impossible. When we finally fixed the flight code and simulation, the aircraft behaved as expected. Running a simulation over a long period greatly magnifies any errors. The "Butterfly Effect" is NOT a tool for a brilliant mad scientist to take over the world with highly trained butterflies. Instead, the "Butterfly Effect" reveals instabilities in complex models.

This was my experience. I'm sure none of these things have happened while using models to prove Global Warming ;-)
