src/Tips

Parallel runs with MPI

To compile with MPI parallelism enabled, you need to use something like:

CC99='mpicc -std=c99' qcc -Wall -O2 -D_MPI=1 example.c -o example -lm

where mpicc is the MPI compiler wrapper on your system. The resulting executable can then be run in parallel using something like

mpirun -np 8 ./example

The details may vary according to how the MPI compiler is set up on your system.

Using Makefiles

The “manual” way above is automated if you use the standard Makefiles provided by Basilisk. You can then compile and run the example above on eight processes using:

CC='mpicc -D_MPI=8' make example.tst

This assumes that mpicc and mpirun are available on your system.
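
If the directory does not already contain one, a minimal Makefile along the following lines is usually enough. This is only a sketch, assuming the BASILISK environment variable points to the Basilisk src/ directory (as set during installation):

CFLAGS += -O2
include $(BASILISK)/Makefile.defs

With this in place, the make example.tst command above compiles example.c and runs the resulting test.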

Running on supercomputers

A simple way to run Basilisk code on a supercomputer is to first generate portable (ISO C99) source code on a machine where qcc is installed, i.e.

%localmachine: qcc -source -D_MPI=1 example.c

Then copy the portable source code _example.c (don’t forget the underscore!) to the supercomputer and compile it:

%localmachine: scp _example.c login@supercomputer.org:
%localmachine: ssh login@supercomputer.org
%supercomputer.org: mpicc -Wall -std=c99 -O2 -D_MPI=1 _example.c -o example -lm

where the -std=c99 option sets the version of the language to C99. Note that this option may change depending on the compiler (the options shown above are valid for gcc or icc, the Intel compiler).

You will then need to use the job submission system of the supercomputer to set the number of processes and run the executable.
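
As a rough illustration only (this is not part of Basilisk; it assumes a Slurm-based scheduler, and directive names and the MPI launcher vary between sites), a batch script could look something like:

#!/bin/bash
# request 64 MPI processes for one hour (adjust to your needs)
#SBATCH --ntasks=64
#SBATCH --time=01:00:00
# launch the MPI executable; some sites use 'mpirun -np 64 ./example' instead
srun ./example

The script would then be submitted with sbatch. Check the documentation of your site for the exact directives.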

Non-cubic domains

For the moment, the only way to combine non-cubic domains and MPI parallelism is to use multigrids rather than tree grids (because masking does not work together with MPI yet). This also means that MPI-parallel, non-cubic and adaptive simulations are not possible yet.

On multigrids, MPI subdomains are set up using Cartesian topologies, i.e. the processes are arranged on a line (in 1D), a rectangle (in 2D) or a cuboid (in 3D). The total number of processes np thus verifies the relation np = nx × ny × nz, where nx, ny and nz are the numbers of processes along each axis. Controlling the number of processes along each axis allows one to change the aspect ratio of the (global) domain.
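
For example, anticipating the 24-process case below: with np = 24 and nx = 6, the remaining dimensions are ny = nz = 2, so the global domain has aspect ratio 6:2:2, i.e. 3:1:1.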

If nx, ny and nz are not specified by the user, Basilisk sets them automatically based on the value of np (as set by the -np option of mpirun). To do so, it calls the MPI_Dims_create() MPI function. Some particular cases are:

  • np is a square number (in 2D) or a cubic number (in 3D): in this case nx = ny = np^(1/2) (in 2D) or nx = ny = nz = np^(1/3) (in 3D), i.e. the domain is a square or a cube.
  • np is a prime number: nx = np, ny = nz = 1, i.e. the domain is a long channel.
  • np is the product of two prime numbers: a 3D decomposition is not possible and an MPI error is returned.
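
To illustrate the first two cases: np = 64 = 4^3 yields a cubic 4 × 4 × 4 decomposition, while np = 7 (a prime) yields a 7 × 1 × 1 channel.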

To explicitly control the dimensions along each axis, one can call the dimensions() function. Any dimension which is not set is computed automatically.

All this is also compatible with global periodic boundaries (but not periodic boundaries for individual fields).

Physical dimensions and spatial resolution

As usual, the physical dimension of the domain is set by calling the size() function. What is set is the size along the x-axis of the entire domain (not of the individual subdomains). This allows one to vary the number of processes while keeping the physical size constant (i.e. do the same simulation on a different number of processes).
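
For instance, in the channel examples below the domain size along x stays equal to 1 whether it is split over nx = 3 or nx = 6 processes; only the width of each subdomain (1/3 or 1/6) changes.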

The spatial resolution (N) is handled in a similar way, with the added constraint that it must be a multiple of nx × 2^n with n an integer.
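
For instance, N = 96 satisfies this constraint both for nx = 3 (96 = 3 × 2^5) and for nx = 6 (96 = 6 × 2^4), as in the channel examples below.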

Examples

A [0:1]×[0:1/3]×[0:1/3] channel, periodic along the x-axis (with slip walls), on 3 MPI processes, with 96 points along the x-axis:

#include "grid/multigrid3D.h"
...
int main() {
  periodic (right);
  init_grid (96);
  ...
}

The same simulation but on 24 processes:

#include "grid/multigrid3D.h"
...
int main() {
  dimensions (nx = 6);
  periodic (right);
  init_grid (96);
  ...
}

A cube centered on the origin, of size 1.5, all-periodic, with 512 points along each axis, on 8 = 2^3, 64 = 4^3, 512 = 8^3, 4096 = 16^3, etc. processes:

#include "grid/multigrid3D.h"
...
int main() {
  size (1.5);
  origin (-0.75, -0.75, -0.75);
  foreach_dimension()
    periodic (right);
  init_grid (512);
  ...
}