The Center for Space Environment Modeling (CSEM) is an interdisciplinary research organization of the College of Engineering at the University of Michigan. CSEM comprises a tightly integrated group of faculty, students, and staff from the Departments of Aerospace Engineering, Climate and Space Sciences and Engineering, and Electrical Engineering and Computer Science.
The overall goal of CSEM is to develop high-performance, first-principles-based computational models to describe and predict hazardous conditions in the near-Earth space environment, extending from the Sun to the ionosphere, collectively known as space weather. To achieve predictive capability, the models must run considerably faster than real time on mid-size parallel computers.
CSEM is administered through the Space Physics Research Laboratory of the Department of Climate and Space Sciences and Engineering.
The Director of CSEM is Professor Tamas Gombosi of the Department of Climate and Space Sciences and Engineering. He is assisted by co-directors Kenneth Powell of the Department of Aerospace Engineering and Quentin Stout of the Department of Electrical Engineering and Computer Science.
In the late 1980s there were three independent research directions that formed the foundations of the Center for Space Environment Modeling (CSEM). In the College of Engineering there was an interdepartmental effort to utilize parallel computer architectures for scientific computing. One of the leaders of this effort was Quentin Stout of the Electrical Engineering and Computer Science (EECS) department who was mainly interested in broadening the use of parallel computers. In 1986 the Aerospace Engineering (AE) department hired Bram van Leer, one of the founding fathers of modern computational fluid dynamics (CFD), who made seminal contributions to the development of high-resolution finite volume methods that combine the use of high-order reconstruction with limiters and Riemann solvers. These methods turned out to be particularly robust and accurate for high-speed aerodynamics. The third research effort was carried out by Tamas Gombosi who was interested in numerical simulations of the terrestrial polar wind (high speed plasma outflow from the high-latitude ionosphere) and cometary atmospheres where neutral gas, dust and plasma flows form a highly dynamic, complex coma. Dr. Gombosi used first-order Godunov schemes to model the high-speed complex flows in the ionosphere and in the vicinity of comets. These Godunov schemes form the basis of the modern high-resolution methods Dr. van Leer pioneered.
Gombosi and van Leer met at a party sometime in 1987 and struck up a polite conversation about their research interests. When Gombosi mentioned that he was using Godunov schemes to solve complex transonic flow problems, van Leer smiled and simply stated: “I am the greatest expert in Godunov schemes.” This is how the collaboration between the two of them started. It expanded significantly with the recruitment of Philip Roe (1990) and the inclusion of Kenneth Powell. Prof. Roe is one of the most original thinkers in CFD who, among other things, invented the so-called “Roe solver,” an innovative Riemann solver that minimizes dissipation. Dr. Powell joined the Aerospace Engineering department in 1987, right after graduating from MIT. His particular research area was adaptive mesh refinement (AMR), a major ingredient of modern large-scale simulation methods.
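To make these ingredients concrete, here is a minimal, purely illustrative sketch of a high-resolution finite volume step that combines MUSCL-type reconstruction with a minmod limiter and a Roe-type Riemann solver. It is written in Python for the 1D inviscid Burgers equation; it is not the CSEM or BATS-R-US code (which is Fortran), and the function names, test profile, and forward-Euler time stepping are choices made here for brevity.

```python
# Illustrative sketch (not CSEM code): one high-resolution finite-volume update for
# the 1D inviscid Burgers equation u_t + (u^2/2)_x = 0, combining MUSCL-style
# reconstruction with a minmod limiter and a Roe-type upwind flux.
import numpy as np

def minmod(a, b):
    """Slope limiter: pick the smaller-magnitude slope, or zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def roe_flux(uL, uR):
    """Roe-type upwind flux for f(u) = u^2/2; the Roe speed is (uL + uR)/2."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    a = 0.5 * (uL + uR)                      # Roe average of df/du = u
    return 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)

def muscl_step(u, dx, dt):
    """Advance cell averages u by one step (forward Euler in time for brevity;
    production codes use TVD Runge-Kutta or Hancock predictor steps)."""
    up = np.pad(u, 2, mode='wrap')           # periodic ghost cells
    slope = minmod(up[1:-1] - up[:-2], up[2:] - up[1:-1])
    uL = up[1:-2] + 0.5 * slope[:-1]         # left state at each interface
    uR = up[2:-1] - 0.5 * slope[1:]          # right state at each interface
    F = roe_flux(uL, uR)                     # flux at every cell interface
    return u - dt / dx * (F[1:] - F[:-1])

# Example: a smooth profile steepening into a shock on a periodic domain.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.5
dx = x[1] - x[0]
dt = 0.4 * dx / 2.5                          # CFL-limited time step
for _ in range(100):
    u = muscl_step(u, dx, dt)
```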
The first joint project between Gombosi and van Leer was the implementation of higher-order numerical methods in dusty gas flow simulations and, subsequently, in simulations of the polar wind. The collaboration broadened significantly around 1990, when Roe, Powell and Stout joined the group. There were two major developments at this early stage that determined the further direction of the collaboration.
Encouraged by the initial success of the collaboration, in 1992 the team submitted proposals to the High Performance Computing and Communications (HPCC) initiatives at NASA and NSF. The abstract of the proposals was the following:
“An interdisciplinary effort is proposed to use the power of massively parallel computers to develop the first multi-scale model of the heliosphere-magnetosphere-ionosphere system. This model will represent a dramatic step forward in Geospace Environment Modeling (GEM). The main features of the proposed effort are:
In 1992 this idea was extremely ambitious and it was completely trashed by the reviewers. Here are some of the most salient reviews:
The proposals were rejected.
There was a second HPCC competition announced by NSF in 1993. Undeterred by the negative reaction to our first attempt, we revised and resubmitted the proposal. By this time we were showing preliminary simulation results at meetings, and the proposal received a much better set of reviews than the first one. As it turned out, ours was the highest-rated proposal that was not funded. Fortunately, however, the NSF Program Director for magnetospheric physics, Dr. Tim Eastman, recognized the potential of the proposed model and solicited support from the NASA HPCC and Air Force space science programs. In early 1994 we received an interagency seed grant jointly funded by the NSF Magnetosphere Program, NASA’s HPCC program and the AFOSR Space Science Program. This enabled us to hire the first full-time scientist for the new AMR MHD project: Dr. Darren De Zeeuw, a fresh graduate of the Aerospace Engineering Department at Michigan. His thesis was the development of an unstructured quadtree 2D AMR grid for the Euler equations. His thesis committee was chaired by Prof. Powell, and its members were Profs. Gombosi, Roe and van Leer.
In less than a year we published the first paper applying our newly developed 2D AMR MHD code to cometary plasmas. Predictably, the reaction to the emergence of a new global MHD simulation group was mixed. The competition was highly critical of the new numerical approach and predicted that the method would never work in 3D, and particularly not for strongly magnetized objects (like the Earth or the Sun). Needless to say, they turned out to be wrong. The first 3D cometary plasma simulation was published in 1996, followed shortly by the first simulations of the terrestrial magnetosphere.
The first 3D MHD code was still written for serial computers. It used an unstructured octree AMR grid with cut cells to account for arbitrary central bodies. It represented a great step forward from the state of the art at the time, but it also had limitations. The main limitation was that the data structure was not appropriate for massively parallel architectures. Nevertheless, our global AMR MHD technology was ready for the next step. This came with the second NASA HPCC competition in 1996.
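For readers unfamiliar with octree AMR, the sketch below shows the basic idea of a cell-based octree in which any leaf cell can be split into eight children, concentrating resolution where a refinement criterion demands it. This is a conceptual Python illustration only; the original code was serial Fortran, and the cut cells and the later block-adaptive, parallel reorganization are not represented here. The class and criterion names are invented for the example.

```python
# Conceptual sketch only: a cell-based octree where each leaf may be refined
# into eight children of half the edge length.
class OctreeCell:
    def __init__(self, center, size, level=0):
        self.center = center          # (x, y, z) of the cell center
        self.size = size              # edge length of the cubic cell
        self.level = level            # refinement level (0 = root)
        self.children = None          # None while the cell is a leaf
        self.data = 0.0               # placeholder for a cell-averaged state

    def refine(self):
        """Split this leaf into 8 children with half the edge length."""
        h = self.size / 4.0
        offsets = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
        self.children = [
            OctreeCell(tuple(c + s * h for c, s in zip(self.center, o)),
                       self.size / 2.0, self.level + 1)
            for o in offsets]

    def adapt(self, needs_refinement, max_level):
        """Recursively refine leaves wherever the criterion flags them."""
        if self.children is None:
            if self.level < max_level and needs_refinement(self):
                self.refine()
            else:
                return
        for child in self.children:
            child.adapt(needs_refinement, max_level)

# Hypothetical example: refine toward the origin, e.g. around a central body.
root = OctreeCell(center=(0.0, 0.0, 0.0), size=32.0)
near_body = lambda cell: sum(c * c for c in cell.center) ** 0.5 < 2.0 * cell.size
root.adapt(near_body, max_level=4)
```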
In 1995 the NASA HPCC program issued a solicitation for Round 2 proposals. The goal of this round was to create the first generation of physics models for parallel computers. The solicitation explicitly stated that the codes must meet annual performance goals of 10 GFlops, 50 GFlops and 100 GFlops on NASA-provided machines. While writing our proposal we made several strategic decisions that largely determined the direction of the Michigan code development effort for a long time:
The Michigan proposal was highly ranked by the review panel, but there was strong internal opposition at NASA’s Goddard Space Flight Center, which was responsible for the management of the HPCC project. Here are some examples of the internal comments:
In spite of the opposition our proposal was selected for funding. The resulting code was called the “Block-Adaptive Tree Solar-wind Roe-type Upwind Scheme,” or BATS-R-US. The project was a spectacular success. By the end of the first year the code achieved 13 GFlops on the 512-processor Cray T3D machine purchased by the NASA HPCC project. By the end of the three-year funding period BATS-R-US performed at 345 GFlops on a 512-processor Cray T3E computer. BATS-R-US was a success, and it was here to stay.
In 2000 two major funding opportunities were announced, by the Department of Defense (DoD) and by the NASA HPCC program. DoD’s Multidisciplinary University Research Initiative (MURI) program called for proposals “to develop predictive models of explosive solar activity and its effect on the near-Earth space environment.” At that time the MURI program provided $5 million over five years, so this was a great opportunity to expand the applications of BATS-R-US from comets and the terrestrial magnetosphere to the heliosphere and the Sun.
The NASA HPCC program called for proposals to simultaneously increase the interoperability and performance of grand challenge applications through common software interfaces shared by broad communities of Earth, space, life, and microgravity science researchers. Getting codes to interoperate in common frameworks while simultaneously achieving high performance was viewed as a step toward the next generation of production-ready software for many scientific communities. Specifically, the call asked for the development of high-performance, scalable parallel codes from Grand Challenge investigations that achieve code interoperation and high performance at the same time.
In response to these solicitations we proposed the development of the Space Weather Modeling Framework (SWMF) and basic physics domain models extending from the Sun to the Earth. The NASA HPCC proposal focused on the framework, while the DoD MURI proposal focused on the development of high-performance physics models. The proposed SWMF was modular, with individual spatial regions and physical processes described by exchangeable, interoperating modules. In this approach each adaptive physics domain has its own grid structure, set of dependent variables, and set of governing conservation laws. The adaptive physics domains can overlap each other, or they can interface through a boundary surface.
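The sketch below illustrates this framework pattern at a conceptual level: each physics domain is a self-contained module with its own grid and state, and domains exchange information only through couplers driven by the framework. It is a hedged Python sketch of the design idea described above, not the actual SWMF interface (the SWMF is written in Fortran 90); all class and method names here are invented for illustration.

```python
# Conceptual sketch of a component-coupling framework (not the real SWMF API).
from abc import ABC, abstractmethod

class DomainModel(ABC):
    """One physics domain with its own grid, variables, and solver."""
    @abstractmethod
    def init_grid(self): ...
    @abstractmethod
    def advance(self, t_end: float): ...
    @abstractmethod
    def get_interface_state(self) -> dict: ...
    @abstractmethod
    def set_boundary_state(self, state: dict): ...

class Coupler:
    """Passes interface data from a source domain to a target domain."""
    def __init__(self, source: DomainModel, target: DomainModel):
        self.source, self.target = source, target

    def couple(self):
        # A real framework would also interpolate between the two domains'
        # (possibly very different) grids at this step.
        self.target.set_boundary_state(self.source.get_interface_state())

class Framework:
    """Advances all registered domains and triggers the couplings."""
    def __init__(self):
        self.domains, self.couplers = [], []

    def register(self, domain: DomainModel):
        domain.init_grid()
        self.domains.append(domain)

    def add_coupler(self, coupler: Coupler):
        self.couplers.append(coupler)

    def run(self, t_end: float, dt: float):
        t = 0.0
        while t < t_end:
            t += dt
            for d in self.domains:
                d.advance(t)          # each domain advances with its own solver
            for c in self.couplers:
                c.couple()            # exchange interface data between domains
```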
We were delighted when both our MURI and NASA HPCC proposals were selected for funding. This gave us the opportunity to develop the SWMF and its basic physics models simultaneously. By 2006 the main elements of the SWMF and a highly improved BATS-R-US were in place.
The SWMF was designed in 2001 and has been developed to integrate and couple several independently developed space physics codes into a flexible and efficient model. Figure 1 shows the schematics of the SWMF and its components. The main design goals of the SWMF were to minimize changes to the original models, to provide a general, flexible and efficient method to execute the coupled models on massively parallel, distributed-memory machines, and to make it easy to add new components and new physics models for existing components. The SWMF is written in Fortran 90 and Perl and has been designed in an object-oriented manner from the beginning. Data naming conventions, version control with CVS, and rigorous testing procedures were also adopted from the start. Most of the implemented modules contain unit tests. The components are tested individually, and the functionality of the whole SWMF is tested by system tests exercising multiple coupled components. The SWMF and its components pass this comprehensive, hierarchical test suite every night on several computer and compiler platforms. The SWMF source code is well documented: the reference manual is generated from the source code using Perl scripts, and all input parameters are documented in XML files, which are converted into the user manual. We have also developed a graphical user interface (SWMF GUI) to improve the usability of this complex software.
We regularly extend the SWMF with new components and new component versions. It typically takes two developers about two weeks of work to add a new component to the SWMF. Most of this time is spent on adapting the physics model to the SWMF and on writing and testing the couplers between the components. New components that do not use Cartesian grids can take longer if they require high-performance grid rendezvous algorithms. The SWMF currently covers 10 physics domains, many of them represented by multiple models. The SWMF is capable of simulating the Sun-Earth system from the solar corona to the upper atmosphere of the Earth faster than real time on hundreds of processors.
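At its simplest, the coupler work mentioned above amounts to mapping data from one component's grid onto another's. The sketch below shows that essence in one dimension using linear interpolation; it is only an illustration of the idea, not the SWMF's parallel, multidimensional grid rendezvous algorithms, and the grids and variable names are hypothetical.

```python
# Illustrative only: mapping interface data from a source component's grid
# onto a target component's (different) grid, the essence of a simple coupler.
import numpy as np

def couple_1d(x_source, values_source, x_target):
    """Linearly interpolate values defined on x_source onto x_target;
    a stand-in for the grid rendezvous step described above."""
    return np.interp(x_target, x_source, values_source)

# Hypothetical example: a coarse source grid feeding a finer target grid.
x_src = np.linspace(0.0, 1.0, 17)           # source component's interface grid
rho_src = 1.0 + 0.1 * np.sin(2 * np.pi * x_src)
x_tgt = np.linspace(0.0, 1.0, 101)          # target component's interface grid
rho_tgt = couple_1d(x_src, rho_src, x_tgt)  # data handed to the target model
```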