Note that the syntax
~~~literatec
Point point = locate (...);
~~~
is deprecated and will be removed in the future. See [`foreach_point()`](#foreach_point) and [`foreach_region()`](#foreach_region) for (better) alternatives.

## More field iterators
We have already seen the basic field iterator `foreach()`. While many
numerical schemes can be implemented using only this iterator, more
efficient and elegant implementations can often be designed using
other iterators provided by Basilisk.
### `foreach_dimension()`
As an example, consider the code required to compute
$$g=\nabla a$$
Using a centered scheme, this could be written:
~~~literatec
scalar a[];
vector g[];
...
foreach() {
  g.x[] = (a[1,0] - a[-1,0])/(2.*Delta);
  g.y[] = (a[0,1] - a[0,-1])/(2.*Delta);
}
~~~
It is clear that computing `g.y` is almost identical to computing
`g.x`: all that is required is a permutation of the indices. Most
multi-dimensional schemes involve similar permutations. Writing these
permutations manually, as we did above, is error-prone and leads to
code duplication. While this may be acceptable for the simple example
above, it can become very cumbersome for more complicated schemes. To
avoid this, Basilisk provides the `foreach_dimension()` iterator,
which can be used like this:
~~~literatec
...
foreach()
  foreach_dimension()
    g.x[] = (a[1,0] - a[-1,0])/(2.*Delta);
~~~
At compilation, the code block associated with `foreach_dimension()`
(a single line in this example) will be automatically duplicated for
each dimension of the problem with the appropriate coordinate
permutations: `.x` will become `.y`, `.y` will become `.x` (or `.z` in
3D) and so on, and the field indices will be rotated correspondingly.
Any variable or function identifier ending with `_x`, `_y` or `_z`
will also be rotated.
The `right`, `left`, `top`, `bottom`, `front` and `back` direction
indicators will also be rotated.
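As a sketch of the identifier rotation (the helper functions `flux_x`
and `flux_y` below are hypothetical and used purely for illustration):
a function whose name ends with `_x` is rotated together with the
components and field indices.
~~~literatec
scalar a[];
vector g[];

// hypothetical helpers: identifiers ending in _x, _y are rotated
double flux_x (Point point, scalar s) {
  return (s[] + s[-1,0])/2.;
}
double flux_y (Point point, scalar s) {
  return (s[] + s[0,-1])/2.;
}
...
foreach()
  foreach_dimension()
    // at compilation this line is duplicated for each dimension:
    // flux_x becomes flux_y and g.x becomes g.y
    g.x[] = flux_x (point, a);
~~~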
### `foreach_face()`
The `foreach()` operator can be seen as iterating over the *cells* of
the grid. When writing *flux-based* schemes it is often useful to
consider iterations over the *faces* of each cell (because fluxes are
by definition associated with cell faces).
Implementing an iteration over cell faces using only the `foreach()`
operator is not easy, even for a one-dimensional problem, due to
boundary conditions. Things also become more complex when considering
multi-dimensional and adaptive grids. The `foreach_face()` field
iterator solves these problems by providing a consistent traversal of
cell faces in all cases.
For example, computing the components of $\nabla a$ normal to each face
can be written:
~~~literatec
scalar a[];
face vector g[];
...
foreach_face()
  g.x[] = (a[] - a[-1,0])/Delta;
~~~
Note that a face is defined by convention as separating a cell from
its *left* neighbour (hence `a[-1,0]` and not `a[1,0]`, see the [Face
and vertex fields section](Basilisk C#face-and-vertex-fields)
above). Note also that the duplication/permutation performed by
`foreach_dimension()` is implicitly included in `foreach_face()`.
In some cases, it can be necessary to apply different operations to
each component of a face vector field. For example let's assume we
need to initialise a face vector field with the components
`(y,x)`. This could be done using
~~~literatec
face vector u[];
...
foreach_face(x)
  u.x[] = y;
foreach_face(y)
  u.y[] = x;
~~~
Note that the coordinates `x` and `y` correspond to the center of the
face.
### `foreach_vertex()`
When initialising [vertex fields](Basilisk C#face-and-vertex-fields),
it is necessary to traverse each vertex of the grid. For example,
computing vorticity using the components of velocity defined on faces
could be written
~~~literatec
face vector u[];
vertex scalar omega[];
...
foreach_vertex()
  omega[] = (u.y[] - u.y[-1,0] - u.x[] + u.x[0,-1])/Delta;
~~~
Consistently with the other iterators, the `x` and `y` declared
implicitly within `foreach_vertex()` loops are the coordinates of the
vertex.
### `foreach_boundary()`
It is sometimes useful to be able to traverse only the cells which are
touching a given boundary, for example to compute surface fluxes and
other diagnostics. While using a combination of `foreach()` and
conditions on cell coordinates would be possible, it would also be
inefficient and could be difficult when using adaptive meshes. For
this reason, Basilisk provides the `foreach_boundary()` iterator which
can be used like this for example:
~~~literatec
scalar a[];
...
double mu = 1., flux = 0.;
foreach_boundary (top)
  flux += sq(Delta)*mu*(a[ghost] - a[])/Delta;
~~~
where *top* indicates which boundary we want to traverse (the keywords
are the same as used for [boundary conditions](#boundary-conditions))
and *ghost* is the index of the corresponding ghost cell: for this
example `a[ghost] == a[0,1]`.
This example computes the (diffusive) flux of `a` through the top
boundary. Note that for this to work in parallel you would also need
to add a [reduction clause](#parallel-programming) on `flux` in
`foreach_boundary()`.
`foreach_boundary()` iterates over the cells touching the given
boundary, but the coordinates `x,y,z` are those of the center of the
corresponding boundary face, not those of the cell center.
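The parallel version of the flux computation above could then be
sketched as follows (assuming that, as with `foreach()`, the reduction
clause is passed as an additional argument):
~~~literatec
scalar a[];
...
double mu = 1., flux = 0.;
foreach_boundary (top, reduction(+:flux))
  flux += sq(Delta)*mu*(a[ghost] - a[])/Delta;
~~~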
### `foreach_neighbor()`
This iterator is local to a cell, so it must be included within a
global field iterator (e.g. `foreach()` etc.). It will iterate over
all the cells belonging to the local stencil i.e. the 5x5 stencil by
default (in 2D). This can optionally be reduced to a 3x3 stencil
(i.e. only the nearest neighbors) using the syntax
`foreach_neighbor(1)`.
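As an illustrative sketch (the field names are hypothetical),
`foreach_neighbor()` can be used to compute, for each cell, the
maximum of a field over its nearest neighbours:
~~~literatec
scalar a[], amax[];
...
foreach() {
  double max = - HUGE;
  // traverse the 3x3 stencil centered on the current cell;
  // inside this loop a[] refers to the neighbouring cell
  foreach_neighbor(1)
    if (a[] > max)
      max = a[];
  amax[] = max;
}
~~~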
### `foreach_point()`
This iterator will only traverse the cell containing the "point" with the given coordinates. For example
~~~literatec
int main() {
  init_grid (N);
  scalar s[];
  foreach_point (0.4, 0.6)
    s[] = sqrt(sq(x) + sq(y));
}
~~~
is a valid (MPI) Basilisk program, where the scalar field `s` is only
initialized in a single cell.
### `foreach_region()`
This is similar to `foreach_point()` but will traverse the cells containing all the points sampled within a rectangular region. For example
~~~literatec
coord p;
coord box[2] = {{0, 0}, {1, 2}};
coord n = {10, 20};
foreach_region (p, box, n) {
  ...
}
~~~
will sample 10 x 20 points (with coordinates given by `p`) regularly distributed within [0:1] x [0:2].
## Event inheritance
In contrast with C functions, multiple events can share the same
name. This is used to implement
[inheritance](https://en.wikipedia.org/wiki/Inheritance_%28object-oriented_programming%29)
of existing events and makes it possible to modify and extend the
functionality of existing solvers.
When two (or more) events have the same name, the order of execution
of this group is modified. The following rules are applied:
* all events in the group are executed "together",
* the first event in the group (in order of appearance in the source
  files) sets the point where the group is executed,
* within a group, the order of execution is the *inverse* of the order
of appearance in the source files.
Although this may seem complicated, these rules work well in practice
and lead to a reasonably simple inheritance mechanism.
A trace of the order of execution of events can be obtained using the
compilation option:
~~~bash
% qcc -events ...
~~~
The resulting code will then output (on standard error) a trace looking like:
~~~bash
...
events (i = 4, t = 0.00191154)
  logfile                  rising.c:131
  set_dtmax                src/navier-stokes/centered.h:163
  stability                src/tension.h:59
  stability                src/vof.h:44
  stability                src/navier-stokes/centered.h:165
  vof                      src/vof.h:174
  vof                      src/navier-stokes/centered.h:175
  tracer_advection         src/navier-stokes/centered.h:176
...
~~~
There are two event groups in this example: *stability* and
*vof*. They correspond to extensions of the initial solver
[/src/navier-stokes/centered.h](): adding the stability conditions for
surface tension (at line 59 of [/src/tension.h]()) and for VOF (at
line 44 of [/src/vof.h]()) and plugging in VOF advection (at line 174
of [/src/vof.h]()) at the correct location (line 175 of
[/src/navier-stokes/centered.h]()).
## Grid allocation and deallocation
Hierarchy of grids
### Multigrids
### Adaptive grids
## Metric
## Diagonalization
The `diagonalize(s)` operator replaces the central stencil value of
the scalar field `s` (i.e. `s[]`) with one and its other values with
zero, in all operations within the following C statement. This is
typically used to automatically construct Jacobi relaxation operators
for multigrid solutions of linear systems.
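To illustrate these semantics, here is a hedged sketch (not taken from
an actual solver) of a Jacobi relaxation step for the Poisson equation
$\nabla^2 a = b$, where `diagonalize()` is used to extract the
diagonal coefficient of the discrete operator:
~~~literatec
scalar a[], b[];
...
foreach() {
  double diag;
  // within the following statement a[] evaluates to 1 and the
  // neighbour values to 0, so diag gets the diagonal coefficient
  // -4/Delta^2 of the discrete Laplacian
  diagonalize (a)
    diag = (a[1,0] + a[-1,0] + a[0,1] + a[0,-1] - 4.*a[])/sq(Delta);
  // Jacobi update written in residual form
  a[] += (b[] - (a[1,0] + a[-1,0] + a[0,1] + a[0,-1] - 4.*a[])/sq(Delta))/diag;
}
~~~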
## Einstein summation
The `einstein_sum()` operator can be used to perform automatic [Einstein summation](https://en.wikipedia.org/wiki/Einstein_notation) on the components of tensors and vectors. See the documentation in [einstein_sum.h](/src/ast/einstein_sum.h).
## Dimensional Analysis
The Basilisk C preprocessor [qcc](/src/qcc.c) performs sophisticated [Dimensional Analysis](https://en.wikipedia.org/wiki/Dimensional_analysis). Within this framework, the dimensions of numerical constants are given using "standard" C array indexing. For example the following syntax:
~~~literatec
double g = 9.81 [1,-2];
~~~
indicates that the constant `9.81` has dimension `[1,-2]` (typically the dimension of an acceleration).
Other useful special functions include `show_dimension()` and `dimensional()`.
Note that these extensions are only used symbolically during preprocessing and do not affect in any way the actual compilation or execution of the program.
See also the [Dimensional Analysis Tutorial](/Tutorial.dimensions) for detailed explanations.
## Named/optional arguments in function calls
C++ allows named and optional arguments in function calls, for example:
~~~c
void example (float f = 0, double d = 0);
...
example (f = 2); // d = 0
example (d = 4, f = 1.2);
~~~
Unfortunately this is not allowed in C99. Basilisk C includes an
extension which provides this functionality.
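For instance, the following sketch (the `gaussian()` function is
hypothetical and only serves as an illustration) uses the same syntax
in Basilisk C:
~~~literatec
// hypothetical function with optional arguments
double gaussian (double x, double sigma = 1., double mean = 0.) {
  return exp (- sq(x - mean)/(2.*sq(sigma)))/(sigma*sqrt(2.*pi));
}
...
gaussian (0.5);             // sigma = 1, mean = 0
gaussian (0.5, mean = 1.);  // named argument, sigma = 1
~~~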
## Parallel programming
Basilisk can automatically parallelise field iterations
(i.e. `foreach()` etc...) on systems supporting
[OpenMP](https://en.wikipedia.org/wiki/OpenMP) or
[MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). If the
*write* operations performed on stencils are purely local
(i.e. concurrent accesses by several threads are only possible for
*read* operations), then nothing special needs to be done to
parallelise the corresponding field iteration.
A classical example where this condition is not verified are
*reduction operations*. Consider for example the function:
~~~literatec
double maximum (scalar a) {
  double maxi = - HUGE;
  foreach()
    if (fabs(a[]) > maxi)
      maxi = fabs(a[]);
  return maxi;
}
~~~
In this example, if the `foreach()` iteration is performed in
parallel, several threads may try to set the `maxi` variable
simultaneously (this is a concurrent *write* access). If this happens
the result returned by the function will be incorrect.
Similarly, when using MPI parallelism, each process will only hold a local
`maxi` value, whereas a global value is required.
To cover this common pattern, OpenMP provides [*reduction
operators*](https://www.openmp.org/spec-html/5.0/openmpsu107.html)
which can be used in Basilisk like this:
~~~literatec
double maximum (scalar a) {
  double maxi = - 1e100;
  foreach (reduction(max:maxi))
    if (fabs(a[]) > maxi)
      maxi = fabs(a[]);
  return maxi;
}
~~~
It is also possible to reduce an array by indicating its length. For example, one could count the number of leaf cells at each level by writing:
~~~literatec
int len = grid->maxdepth + 1, cells[len] = {0};
foreach (reduction(+:cells[:len]))
  cells[level]++;
~~~
Note that Basilisk generalises reduction operators so that they work
also with MPI parallelism.
Beyond reduction operations, care must be taken to avoid concurrent
write accesses when writing Basilisk code. Consider the following code,
which makes use of an auxiliary variable to store an intermediate
result:
~~~literatec
double norm;
foreach() {
  norm = sqrt(sq(u.x[]) + sq(u.y[]));
  a[] = norm;
  b[] = norm/2.;
}
~~~
The code will work fine in serial (or with MPI); however, it will fail
when using shared-memory parallelism (i.e. OpenMP), because `norm`
will be shared amongst threads and written to concurrently. The
solution is simple: just make `norm` a variable local to the loop body,
i.e.
~~~literatec
foreach() {
  double norm = sqrt(sq(u.x[]) + sq(u.y[]));
  a[] = norm;
  b[] = norm/2.;
}
~~~
Each thread then has its own private `norm` variable and concurrent
accesses are avoided. Note that this is also a cleaner/clearer way of
writing code, even for the serial version. Generally speaking,
variables should be declared/allocated as closely as possible to where
they are used.
By default Basilisk will parallelise all iterators. This can be a
problem, for example when writing data to a file. Serial iterations
for individual loops can be forced like this:
~~~literatec
foreach (serial)
  printf ("%g %g %g\n", x, y, a[]);
~~~
## Tracing and profiling
The `trace` keyword is used to indicate that the function immediately
following should be instrumented for [built-in
profiling](src/README.trace) or [profiling with
Paraver](src/README.paraver). For example one could write
~~~literatec
trace
double maximum (scalar a) {
  ...
~~~