The Center for Space Environment Modeling (CSEM) is an interdisciplinary research organization of the College of Engineering at the University of Michigan. CSEM comprises a tightly integrated group of faculty, students, and staff from the Departments of Aerospace Engineering, Climate and Space Sciences and Engineering, and Electrical Engineering and Computer Science.

The overall goal of CSEM is to develop high-performance, first-principles-based computational models to describe and predict hazardous conditions in the near-Earth space environment, extending from the Sun to the ionosphere, collectively known as space weather. To achieve predictive capability, the models must run considerably faster than real time on mid-size parallel computers.

CSEM is located in the College of Engineering of the University of Michigan. It is administered through the Space Physics Research Laboratory of the Department of Climate and Space Sciences and Engineering.

The Director of CSEM is Professor Tamas Gombosi of the Department of Climate and Space Sciences and Engineering. He is assisted by co-directors Kenneth Powell of the Department of Aerospace Engineering and Quentin Stout of the Department of Electrical Engineering and Computer Science.

In the late 1980s there were three independent research directions that formed the foundations of the Center for Space Environment Modeling (CSEM). In the College of Engineering there was an interdepartmental effort to utilize parallel computer architectures for scientific computing. One of the leaders of this effort was Quentin Stout of the Electrical Engineering and Computer Science (EECS) department who was mainly interested in broadening the use of parallel computers. In 1986 the Aerospace Engineering (AE) department hired Bram van Leer, one of the founding fathers of modern computational fluid dynamics (CFD), who made seminal contributions to the development of high-resolution finite volume methods that combine the use of high-order reconstruction with limiters and Riemann solvers. These methods turned out to be particularly robust and accurate for high-speed aerodynamics. The third research effort was carried out by Tamas Gombosi who was interested in numerical simulations of the terrestrial polar wind (high speed plasma outflow from the high-latitude ionosphere) and cometary atmospheres where neutral gas, dust and plasma flows form a highly dynamic, complex coma. Dr. Gombosi used first-order Godunov schemes to model the high-speed complex flows in the ionosphere and in the vicinity of comets. These Godunov schemes form the basis of the modern high-resolution methods Dr. van Leer pioneered.

Gombosi and van Leer met at a party sometime in 1987 and struck up a polite conversation about their research interests. When Gombosi mentioned that he was using Godunov schemes to solve complex transonic flow problems, van Leer smiled and simply stated: “I am the greatest expert in Godunov schemes.” This is how the collaboration between the two of them started. The collaboration was significantly expanded by the recruitment of Philip Roe (1990) and the inclusion of Kenneth Powell. Prof. Roe is one of the most original thinkers in CFD who, among other things, invented the so-called “Roe solver,” an innovative Riemann solver that minimizes dissipation. Dr. Powell joined the Aerospace Engineering department in 1987, right after graduating from MIT. His particular research area was adaptive mesh refinement (AMR), a major ingredient of modern large-scale simulation methods.

The first joint project between Gombosi and van Leer was the implementation of higher-order numerical methods in dusty gas flow simulations and, subsequently, in simulations of the polar wind. The collaboration broadened significantly around 1990, when Roe, Powell and Stout joined the group. Two major developments at this early stage determined the further direction of the collaboration.

  1. Prof. Powell’s research area was the application of adaptive grids to high-speed aerodynamic problems. He applied this new technology to comet modeling and realized the potential power of AMR in space physics simulations.
  2. Powell realized that adding source terms proportional to div B made multi-dimensional MHD schemes much more robust; this approach became known as the “8-wave” method. The same idea had already been put forward by Godunov in 1972 on theoretical grounds.
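
The source-term idea in item 2 can be written compactly. The following is a sketch of the symmetrizable form of the ideal-MHD equations with the Powell source terms, as presented in the Powell et al. (1999) paper listed in the references; here the state vector holds density, momentum, magnetic field and total energy, and F denotes the usual ideal-MHD flux tensor:

```latex
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho\,\mathbf{u} \\ \mathbf{B} \\ E \end{pmatrix}
+ \nabla \cdot \mathbf{F}
= -\,(\nabla \cdot \mathbf{B})
\begin{pmatrix} 0 \\ \mathbf{B} \\ \mathbf{u} \\ \mathbf{u}\cdot\mathbf{B} \end{pmatrix}
```

With these terms the numerically generated divergence of B is advected with the flow rather than accumulating locally, which is what makes the multi-dimensional scheme robust.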

Encouraged by the initial success of the collaboration, in 1992 the team submitted proposals to the high-performance computing and communications (HPCC) initiatives at NASA and NSF. The abstract of the proposals read as follows:

“An interdisciplinary effort is proposed to use the power of massively parallel computers to develop the first multi-scale model of the heliosphere-magnetosphere-ionosphere system. This model will represent a dramatic step forward in Geospace Environment Modeling (GEM). The main features of the proposed effort are:

  • Generalized magnetohydrodynamic (MHD) equations (which use higher order velocity moments of the Boltzmann equation) will be solved on a 3D adaptively-refined unstructured mesh for the first time. Microscopic plasma processes, which cannot be described by moment equations, will be studied by kinetic and stochastic methods.
  • Finite volume Godunov-type schemes will be used because they are robust and accurate, minimize numerical dissipation and dispersion, and faithfully represent shocks and other discontinuities. In addition, boundary conditions can be applied in a systematic and physically appealing manner. Resolution will be increased substantially, by employing adaptive meshes.
  • The development of 3D adaptive-grid methodology, including an effective strategy for load balancing on parallel computers, will have a major impact on the solution of problems of continuous media. Examples arise in direct turbulence simulation, non-linear acoustics, elastic waves, electromagnetics, automotive design, power generation, oil recovery, and environmental modeling.
  • The new model will be developed for a variety of parallel architectures with special attention to visualization, scalability and to optimizing the codes for MIMD machines.”

In 1992 this idea was extremely ambitious, and the reviewers completely trashed it. Here are some of the most salient review comments:

  • “…I deem that what they are proposing is a nearly intractable problem. MFD with variable diffusion terms, particularly in 3D, is not easy to deal with; MHD with PPM is non-trivial as a development exercise; and, MHD on unstructured meshes ON A PARALLEL SYSTEM is probably not possible.”
  • “[They] try to cover everything, no outstanding innovation in any part.”
  • “Record of accomplishments marginal.”
  • “…what they propose is unreasonable.”

The proposals were rejected.

There was a second HPCC competition announced by NSF in 1993. Undeterred by the negative reaction to our first attempt, we revised and resubmitted the proposal. By this time we were showing preliminary simulation results at meetings, and the proposal received a much better set of reviews than the first one. As it turned out, ours was the highest-rated non-funded proposal. Fortunately, however, the NSF Program Director for magnetospheric physics, Dr. Tim Eastman, recognized the potential of the proposed model and solicited support from the NASA HPCC and Air Force space science programs. In early 1994 we received an interagency seed grant jointly funded by the NSF Magnetosphere Program, NASA’s HPCC program, and the AFOSR Space Science Program. This enabled us to hire the first full-time scientist for the new AMR MHD project: Dr. Darren De Zeeuw, a fresh graduate of the Aerospace Engineering Department at Michigan. His thesis developed an unstructured quadtree 2D AMR grid for the Euler equations. His thesis committee was chaired by Prof. Powell; the other members were Profs. Gombosi, Roe and van Leer.

In less than a year we published the first paper applying our newly developed 2D AMR MHD code to cometary plasmas. Predictably, the reaction to the emergence of a new global MHD simulation group was mixed. The competition was highly critical of the new numerical approach and predicted that the method would never work in 3D, and particularly not for strongly magnetized objects (like the Earth or the Sun). Needless to say, they turned out to be wrong. The first 3D cometary plasma simulation was published in 1996, followed shortly by the first simulations of the terrestrial magnetosphere.

The first 3D MHD code was still written for serial computers. It used an unstructured octree AMR grid with cut cells to account for arbitrary central bodies. It represented a great step forward from the then state of the art, but it also had limitations. The main one was that the data structure was not appropriate for massively parallel architectures. Nevertheless, our global AMR MHD technology was ready for the next step. This came with the second NASA HPCC competition in 1996.

In 1995 the NASA HPCC program issued a solicitation for Round 2 proposals. The goal of this round was to create the first generation of physics models for parallel computers. The solicitation explicitly stated that the codes must meet annual performance goals of 10 GFlops, 50 GFlops and 100 GFlops on NASA-provided machines. While writing our proposal we made several strategic decisions that largely determined the direction of the Michigan code development effort for a long time:

  1. For high parallel performance we decided to abandon the old cell-based octree structure, because it was not suitable for efficient parallelization. The proposed new data structure was a block-adaptive tree that retained many features of the cell-based octree but was relatively “easy” to parallelize. In this approach the governing equations are integrated to obtain volume-averaged solution quantities within computational cells. The computational cells are embedded in regular structured blocks of equal-sized cells. The blocks are all self-similar and consist of Nx × Ny × Nz cells (typically, blocks may have 10 × 10 × 10 = 1000 cells). In this way, the solution data associated with each block can be stored in i,j,k-indexed arrays, a data structure conducive to good computational performance, since it permits even strides through memory and helps avoid indirect addressing. Although each block occupies the same amount of space in memory, the blocks may occupy different-sized volumes in physical space. The computational grids of the AMR solver are then composed of many self-similar blocks. The mesh is adapted to the solution by dividing and coarsening the appropriate blocks. In regions that require increased cell resolution, a block is refined by dividing it into eight identical octants; each octant becomes a block with the same number of cells as the original. In regions that are deemed over-resolved, the refinement process is reversed and eight blocks are coarsened and coalesced into a single block. Multiple physics-based refinement criteria direct the coarsening and division of blocks, with changes in cell resolution between adjacent blocks limited to a factor of two.
A hierarchical tree-like data structure with multiple “roots,” multiple “trees,” and additional interconnects between the “leaves” of the trees is used to keep track of mesh refinement and the connectivity between solution blocks.
  2. For most computational models that involve the solution of PDEs, domain decomposition is a natural and, in many cases, the most practical approach to parallelization. However, serial codes originally designed for single-processor computers can have inherent limits on the scalability achievable with domain decomposition. In many instances, solver speed-up is achieved for 16, 32, or 64 processors; with added processors, however, not only does the method fail to scale as expected, but the performance of the algorithm may actually diminish as the number of processors increases. To avoid many of these limitations, we proposed a parallel block-based AMR solver designed from the ground up to achieve very high performance on massively parallel architectures. The underlying upwind finite-volume solution algorithm has a very compact stencil and is therefore highly local in nature. This results in low inter-processor communication overhead and permits more efficient use of memory and cache. The hierarchical data structure and self-similar blocks make domain decomposition of the problem almost trivial and readily enable good load balancing, a crucial element for truly scalable computing. Natural load balancing is accomplished by simply distributing the blocks equally amongst the processors: with 10 blocks per processor the load imbalance is less than 10% (and less than 1% with 100 blocks per node). The self-similar nature of the solution blocks also means that serial performance enhancements apply to all blocks and that fine-grain parallelization of the algorithm is possible.
  3. We proposed to carry the parallel implementation so far that even the grid adaptation is performed in parallel. Other features of the parallel implementation include the use of Fortran 90 as the programming language and the Message Passing Interface (MPI) library for message passing. Use of these standards greatly enhances the portability of the code and leads to very good serial and parallel performance. The message passing is performed in an asynchronous fashion, with gathered wait states and message consolidation, such that it typically accounts for less than 3-5% of processor time. The serial performance of the algorithm, and hence the overall parallel performance of the method, was also greatly enhanced.
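
The block-adaptive tree of item 1 and the block-wise load balancing of item 2 can be illustrated in a few lines. This is a minimal Python sketch with hypothetical names, not the actual implementation (which is Fortran 90):

```python
# Minimal sketch of a block-adaptive tree: self-similar blocks of
# Nx x Ny x Nz cells, refined by splitting into eight octants, with
# blocks dealt out evenly to processors for load balancing.
import numpy as np

NX = NY = NZ = 10  # cells per block in each direction (as in the text)

class Block:
    """One self-similar block: identical memory footprint regardless of
    its physical size, so data live in regular i,j,k-indexed arrays."""
    def __init__(self, level, origin, size, n_vars=8):
        self.level = level        # refinement level in the tree
        self.origin = origin      # physical corner of the block
        self.size = size          # physical edge lengths
        self.children = []        # eight octants after refinement
        # volume-averaged state, e.g. 8 MHD variables per cell
        self.state = np.zeros((n_vars, NX, NY, NZ))

    def refine(self):
        """Split into eight identical octants, each with the same number
        of cells as the parent, so resolution doubles in each direction."""
        half = self.size / 2
        for i in range(2):
            for j in range(2):
                for k in range(2):
                    child_origin = self.origin + half * np.array([i, j, k])
                    self.children.append(
                        Block(self.level + 1, child_origin, half))

    def leaves(self):
        """Leaf blocks are the ones actually advanced by the solver."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def load_balance(blocks, n_proc):
    """Natural load balancing: deal the equal-cost blocks out evenly."""
    return {p: blocks[p::n_proc] for p in range(n_proc)}

root = Block(level=0, origin=np.zeros(3), size=np.ones(3))
root.refine()               # 8 leaf blocks
root.children[0].refine()   # refine one octant further -> 15 leaves
leaves = root.leaves()
assignment = load_balance(leaves, n_proc=4)
```

Because every block carries the same work, dealing B blocks over P processors leaves a worst-case imbalance of roughly P/B, which is the source of the "less than 10% for 10 blocks per processor" figure quoted above.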

The Michigan proposal was highly ranked by the review panel, but there was strong internal opposition at NASA’s Goddard Space Flight Center, which was responsible for managing the HPCC project. Here are some examples of the internal comments:

  • “We would do them a favor by declining the proposal, so that they do not get embarrassed.”
  • “I do not believe they can do this.”

In spite of the opposition, our proposal was selected for funding. The resulting code was called the “Block-Adaptive Tree Solar-wind Roe-type Upwind Scheme,” or BATS-R-US. The project was a spectacular success. By the end of the first year the code achieved 13 GFlops on the 512-processor Cray T3D machine purchased by the NASA HPCC project. By the end of the three-year funding period BATS-R-US performed at 345 GFlops on a 512-processor Cray T3E computer. BATS-R-US was a success, and it was here to stay.

In 2000 two major funding opportunities were announced, by the Department of Defense (DoD) and by the NASA HPCC program. The DoD Multidisciplinary University Research Initiative (MURI) program called for proposals “to develop predictive models of explosive solar activity and its effect on the near-Earth space environment.” At this time the MURI program provided 5 million dollars over five years, so this was a great opportunity to expand the applications of BATS-R-US from comets and the terrestrial magnetosphere to the heliosphere and the Sun.

The NASA HPCC program called for proposals to simultaneously increase the interoperability and performance of grand-challenge applications through the use of common software interfaces by broad communities of Earth, space, life, and microgravity science researchers. Getting these codes to interoperate in common frameworks while simultaneously achieving high performance was viewed as a step toward the next generation of production-ready software for many scientific communities. Specifically, the call asked for the development of high-performance, scalable parallel codes from Grand Challenge investigations that achieve code interoperation and high performance simultaneously.

In response to these solicitations we proposed the development of a Space Weather Modeling Framework (SWMF) and basic physics domain models extending from the Sun to the Earth. The NASA HPCC proposal focused on the framework, while the DoD MURI proposal focused on the development of high-performance physics models. The proposed SWMF was modular with individual spatial regions and physical processes described by exchangeable interoperating modules. In this approach each adaptive physics domain has its own grid structure, set of dependent variables and set of governing conservation laws. The adaptive physics domains can overlap each other, or they can interface through a boundary surface.
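
The modular design just described, in which exchangeable domain modules with their own grids and variables interact only through explicit interfaces, can be sketched roughly as follows. All names here are hypothetical, for illustration only; the real SWMF components are Fortran 90 codes:

```python
# Hypothetical sketch of the framework idea: each physics domain is an
# exchangeable module with its own grid and dependent variables, and
# domains interact only through explicit couplers.

class Component:
    """One physics domain with its own state and governing equations."""
    def __init__(self, name):
        self.name = name
        self.state = {}          # domain-specific dependent variables

    def init_session(self):
        """Set up the domain (grid, parameters) at session start."""
        self.state.clear()

    def run(self, t_stop):
        """Advance this domain's equations to time t_stop."""
        self.state["time"] = t_stop

class Coupler:
    """Transfers boundary data from a source to a target component."""
    def __init__(self, source, target):
        self.source, self.target = source, target

    def couple(self):
        # In a real framework this step would interpolate between the
        # two components' (possibly very different) grids.
        self.target.state[f"bc_from_{self.source.name}"] = dict(self.source.state)

def run_framework(components, couplers, t_end, dt):
    """Advance all components in lockstep, coupling after each step."""
    for c in components:
        c.init_session()
    t = 0.0
    while t < t_end:
        t += dt
        for c in components:
            c.run(t)
        for cpl in couplers:
            cpl.couple()

sc = Component("SC")   # e.g. a solar corona domain
gm = Component("GM")   # e.g. a global magnetosphere domain
run_framework([sc, gm], [Coupler(sc, gm)], t_end=30.0, dt=10.0)
```

The key design point this mirrors is that a component never reaches into another component's data directly: replacing one physics model with another only requires rewriting its coupler, not its neighbors.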

We were delighted when both our MURI and NASA HPCC proposals were selected for funding. This gave us the opportunity to develop the SWMF framework and its basic physics models simultaneously. By 2006 the main elements of SWMF and a highly improved BATS-R-US were in place.

The SWMF was designed in 2001 and has been developed to integrate and couple several independently developed space physics codes into a flexible and efficient model. Figure 1 shows the schematics of the SWMF and its components. The main design goals of the SWMF were to minimize changes to the original models, to provide a general, flexible and efficient method for executing the coupled models on massively parallel distributed-memory machines, and to allow new components, and new physics models for existing components, to be added with ease.

The SWMF is written in Fortran 90 and Perl and has been designed in an object-oriented manner from the beginning. Data naming conventions and the use of CVS for version control were adopted early on, as were rigorous testing procedures. Most of the implemented modules contain unit tests. The components are tested individually, and the functionality of the whole SWMF is tested by system tests exercising multiple coupled components. The SWMF and its components pass this comprehensive hierarchical test suite every night on several computer and compiler platforms. The SWMF source code is well documented: the reference manual is generated from the source code using Perl scripts, and all input parameters are documented in XML files, which are converted into the user manual. We have also developed a graphical user interface (SWMF GUI) to improve the usability of this complex software.

We regularly extend the SWMF with new components and new component versions. It typically takes two weeks of work from two developers to add a new component to the SWMF. Most of this time is spent on adapting the physics model to the SWMF and on writing and testing the couplers between the components. New components that do not use Cartesian grids can take longer if they require high-performance grid rendezvous algorithms. The SWMF currently covers 10 physics domains, many of them represented by multiple models. The SWMF is capable of simulating the Sun-Earth system from the solar corona to the upper atmosphere of the Earth faster than real time on hundreds of processors.


The following references were used in the CSEM history above.

  1. K.G. Powell, A Riemann Solver for Ideal MHD That Works in More Than One Dimension, ICASE Report 94-24, 1994.
  2. T.I. Gombosi, K.G. Powell and D.L. De Zeeuw, Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results, J. Geophys. Res., 99, 21,525-21,539, 1994.
  3. T.I. Gombosi, D.L. De Zeeuw, R. Häberli, and K.G. Powell, A 3D multiscale MHD model of cometary plasma environments, J. Geophys. Res., 101, 15,233-15,253, 1996.
  4. T.I. Gombosi, D.L. De Zeeuw, C.P.T. Groth, K.G. Powell, and P. Song, The length of the magnetotail for northward IMF: Results of 3D MHD simulations, Phys. Space Plasmas (1998), 15, 121-128, 1998.
  5. K.G. Powell, P.L. Roe, T.J. Linde, T.I. Gombosi, and D.L. De Zeeuw, A Solution-Adaptive Upwind Scheme for Ideal Magnetohydrodynamics, J. Computational Phys., 154, 284-309, 1999.