Forced isotropic turbulence in a triply-periodic box

We compute the evolution of forced isotropic turbulence (see Rosales & Meneveau, 2005) and compare the solution to that of the hit3d pseudo-spectral code. The initial condition is an unstable solution to the incompressible Euler equations. Numerical noise in the solution eventually leads to the destabilisation of the base solution into a fully turbulent flow where turbulent dissipation balances the linear input of energy.
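As a reminder (standard incompressible Navier–Stokes, not code from this file), the equations being solved are the momentum and continuity equations with the forcing term a defined below:

```latex
\partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} =
-\nabla p + \nu\nabla^2\mathbf{u} + \mathbf{a},
\qquad \nabla\cdot\mathbf{u} = 0
```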

#include "grid/multigrid3D.h"
#include "navier-stokes/centered.h"

We use the λ2 criterion and Basilisk View for visualisation of vortices.

#include "lambda2.h"
#include "view.h"
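For reference, λ2 is the vortex-identification criterion of Jeong & Hussain (a standard definition, not specific to this code): with S and Ω the symmetric and antisymmetric parts of the velocity gradient tensor, a vortex core is the region where the second eigenvalue of S² + Ω² is negative:

```latex
\lambda_2\left(\mathbf{S}^2 + \boldsymbol{\Omega}^2\right) < 0,
\qquad
\mathbf{S} = \tfrac{1}{2}\left(\nabla\mathbf{u} + \nabla\mathbf{u}^T\right),
\quad
\boldsymbol{\Omega} = \tfrac{1}{2}\left(\nabla\mathbf{u} - \nabla\mathbf{u}^T\right)
```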

We monitor performance statistics and control the maximum runtime.

#include "navier-stokes/perfs.h"
#include "maxruntime.h"

#define MU 0.01

We need to store the variable forcing term.

face vector av[];

The code takes the level of refinement as optional command-line argument (as well as an optional maximum runtime).

int maxlevel = 7;

int main (int argc, char * argv[]) {
  maxruntime (&argc, argv);
  if (argc > 1)
    maxlevel = atoi(argv[1]);

The domain is (2π)³ and triply-periodic.

  L0 = 2.*pi;
  foreach_dimension()
    periodic (right);

The viscosity is constant. The acceleration is defined below. The level of refinement is maxlevel.

  const face vector muc[] = {MU,MU,MU};
  mu = muc;
  a = av;
  N = 1 << maxlevel;
  run();
}

Initial conditions

The initial condition is “ABC” flow. This is a laminar base flow that is easy to implement in both Basilisk and a spectral code.

event init (i = 0) {
  if (!restore (file = "restart"))
    foreach() {
      u.x[] = cos(y) + sin(z);
      u.y[] = sin(x) + cos(z);
      u.z[] = cos(x) + sin(y);
    }
}
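For the record (a standard result, not stated in the code), this is the Arnold–Beltrami–Childress flow with A = B = C = 1. It is a Beltrami field, ∇ × u = u, and therefore a steady solution of the incompressible Euler equations:

```latex
\mathbf{u} = (A\sin z + C\cos y,\; B\sin x + A\cos z,\; C\sin y + B\cos x),
\qquad A = B = C = 1
```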

Linear forcing term

We compute the average velocity and add the corresponding linear forcing term.

event acceleration (i++) {
  coord ubar;
  foreach_dimension() {
    stats s = statsf(u.x);
    ubar.x = s.sum/s.volume;
  }
  foreach_face(x)
    av.x[] += 0.1*((u.x[] + u.x[-1])/2. - ubar.x);
}
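The forcing of Rosales & Meneveau is linear in the velocity fluctuation. A sketch of the resulting energy balance (assuming statistical homogeneity; not part of the code): with kinetic energy K, the input rate is 2AK, so in the statistically stationary state dissipation satisfies ε ≈ 2AK with A = 0.1 here:

```latex
\mathbf{a} = A\,(\mathbf{u} - \langle\mathbf{u}\rangle), \qquad A = 0.1,
\qquad
\frac{dK}{dt} = 2AK - \varepsilon
\;\;\Rightarrow\;\;
\varepsilon_{\mathrm{stat}} = 2AK_{\mathrm{stat}}
```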


We log the evolution of viscous dissipation, kinetic energy, and microscale Reynolds number.

event logfile (i++; t <= 300) {
  coord ubar;
  foreach_dimension() {
    stats s = statsf(u.x);
    ubar.x = s.sum/s.volume;
  }

  double ke = 0., vd = 0., vol = 0.;
  foreach(reduction(+:ke) reduction(+:vd) reduction(+:vol)) {
    vol += dv();
    foreach_dimension() {
      // mean fluctuating kinetic energy
      ke += dv()*sq(u.x[] - ubar.x);
      // viscous dissipation
      vd += dv()*(sq(u.x[1] - u.x[-1]) +
                  sq(u.x[0,1] - u.x[0,-1]) +
                  sq(u.x[0,0,1] - u.x[0,0,-1]))/sq(2.*Delta);
    }
  }
  ke /= 2.*vol;
  vd *= MU/vol;

  if (i == 0)
    fprintf (ferr, "t dissipation energy Reynolds\n");
  fprintf (ferr, "%g %g %g %g\n",
           t, vd, ke, 2./3.*ke/MU*sqrt(15.*MU/vd));
}

We generate a movie of the vortices.

Animation of the λ2 isosurface (a way to characterise vortices) and cross-sections of velocity and vorticity.

event movie (t += 0.25; t <= 150)
{
  view (fov = 44, camera = "iso", ty = .2,
        width = 600, height = 600, bg = {1,1,1}, samples = 4);
  squares ("u.y", linear = true);
  squares ("u.x", linear = true, n = {1,0,0});
  scalar omega[];
  vorticity (u, omega);
  squares ("omega", linear = true, n = {0,1,0});
  scalar l2[];
  lambda2 (u, l2);
  isosurface ("l2", -1);
  save ("movie.mp4");
}

We can optionally try adaptivity.

#if TREE
event adapt (i++) {
  double uemax = 0.2*normf(u.x).avg;
  adapt_wavelet ((scalar *){u}, (double[]){uemax,uemax,uemax}, maxlevel);
}
#endif

Running with MPI on occigen

On the local machine

%local qcc -source -D_MPI=1 isotropic.c
%local scp _isotropic.c

On occigen (using 512 cores)

module purge
module load openmpi
module load intel
mpicc -Wall -O2 -std=c99 _isotropic.c -o isotropic \
        -I$HOME -L$HOME/gl -lglutils -lfb_osmesa -lGLU -lOSMesa -lm
sed 's/WALLTIME/10:00/g' run.sh | sbatch

Note that this assumes that the Basilisk gl libraries have been installed in $HOME/gl.

With the script

#SBATCH -J basilisk
#SBATCH --nodes=32
#SBATCH --constraint=HSW24
#SBATCH --ntasks-per-node=16
#SBATCH --threads-per-core=1
#SBATCH --output basilisk.output
#SBATCH --exclusive


module purge
module load openmpi
module load intel

srun --mpi=pmi2 -K1 --resv-ports -n $SLURM_NTASKS \
     ./isotropic -m WALLTIME $LEVEL

Running with MPI on mesu

On the local machine

%local qcc -source -D_MPI=1 isotropic.c
%local scp _isotropic.c

On mesu (using 512 cores)

module load mpt
mpicc -Wall -O2 -std=c99 _isotropic.c -o isotropic -I$HOME -L$HOME/gl -lglutils \
        -lfb_osmesa -lGLU -lOSMesa -lm
sed 's/WALLTIME/10:00/g' run.sh | qsub

with the script

#PBS -l select=22:ncpus=24:mpiprocs=24
#PBS -l walltime=WALLTIME
#PBS -N isotropic
#PBS -j oe  
# load modules 
module load mpt
# change to the directory where program job_script_file is located 
# mpirun -np 64 !!!! does not work !!!!
mpiexec_mpt -n $NP ./isotropic -m WALLTIME 2>> log.$NP >> out.$NP


The two codes agree closely at early times, i.e. until the solution transitions to a turbulent state. After the transition to turbulence, the statistics produced by the two codes remain in good agreement.

Evolution of kinetic energy

Evolution of microscale Reynolds number

Evolution of dissipation

The computational speed is respectable (for a relatively small 128³ problem on 512 cores). Note that these were obtained when switching off movie outputs.

Computational speed in points.timesteps/sec/core

See also