the goal is to make the test suite run faster on computers with many
cores: The easiest way to do this is to ensure that the
longest-running tests do not take too much time and that they all need
about the same amount of time. Thus this patch contains the following
changes, which limit the CPU time taken by each test to about two
minutes in debug mode on my machine:
- the water-air problem using the non-isothermal primary variable
switching model now uses a 16x16 instead of a 32x32 grid. to
compensate, it now runs for a year instead of 5000 seconds, and
global grid refinement is now tested.
- the end time of the lens problem ctests is now 3000 instead of
30000 seconds. The binary itself does not change at all.
- the tests in CMakeLists.txt are now sorted roughly by their
required run time. (this prevents ctest from having to wait for too
long on long-running tests which were started late.)
it was used for debugging intersection mappers. to make this include
work, dune-cornerpoint must be available and the intersection mapper
PR must be merged.
this means that all code which could potentially throw an exception is
moved to this method. (In particular, FluidSystem::init() proved
troublesome in the past.) Besides avoiding the segmentation faults
which stem from exceptions thrown in constructors, this also has the
advantage that simulations which spend a noticeable amount of time on
initialization stop at the "correct" place, i.e., after the "Finish
init of the problem" message has been printed by the simulator...
this regressed after the time step index of the initial solution was
changed from 0 (which, actually, was also the index of the first time
step...) to -1 in b30af664.
this method checks that the difference in the storage terms before and
after a time step is the same as the accumulated fluxes over the
domain boundary plus the source terms.
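
in formula form, with S the storage term, \vec F the flux over the
domain boundary, \vec n the outer unit normal and q the source term,
the check verifies (the sign convention below is an assumption, the
commit message does not spell it out):

    \int_\Omega S(t^{n+1})\,dV - \int_\Omega S(t^n)\,dV
        = \Delta t \left( -\oint_{\partial\Omega} \vec F \cdot \vec n \,dA
                          + \int_\Omega q \,dV \right)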
this is required so that the element-centered finite volume method
does not handle each partition of the domain separately (i.e., so that
fluxes across faces on the process boundaries are considered). mea culpa!
The fix for this is to also include these entries in the matrix which
uses domestic indices. This required some rather extensive changes to
the blacklisting mechanism, because it must now be possible to
translate the index of a blacklisted entity (i.e., an entity in a
ghost or an overlap cell) to a domestic index (i.e., the corresponding
index in the algebraic overlap).
Note that the code for algebraic overlaps is *fun* and the person who
wrote it should be tarred and feathered. (*ouch* ;)) Seriously: better
approaches than "let's-throw-this-away-and-use-grid-overlaps" are
deeply appreciated. (The grid overlap is not really useful in Dune
because only "Mickey Mouse grids" like Dune::YaspGrid support it.)
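
a hypothetical sketch of the kind of translation the blacklisting
mechanism now has to support (the class and method names are made up
for illustration; they are not the actual eWoms API):

    #include <cassert>
    #include <map>

    // maps the native index of a blacklisted entity (i.e., an entity in
    // a ghost or an overlap cell) to its domestic index (i.e., the
    // corresponding index in the algebraic overlap)
    class BlackList
    {
        std::map<int, int> nativeToDomestic_;

    public:
        void registerIndex(int nativeIdx, int domesticIdx)
        { nativeToDomestic_[nativeIdx] = domesticIdx; }

        int domesticIndexOf(int nativeIdx) const
        {
            auto it = nativeToDomestic_.find(nativeIdx);
            assert(it != nativeToDomestic_.end());
            return it->second;
        }
    };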
... and actually use it for the lens problems. This seems to have been
disabled for debugging, and turning it back on was later forgotten.
This led to some minor bit-rot in that code...
The reason for this is to be able to adapt the tolerance to the grid
size: The NewtonTolerance parameter has been renamed to
NewtonRawTolerance and, for the porous media models, is divided by the
square root of the volume of the smallest finite volume in the grid to
get the final tolerance for the Newton method. This is necessary
because very large grids need to achieve a higher volumetric accuracy
in the residual than very small ones...
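
a sketch of the scaling in code form (the function and variable names
are illustrative; only the parameter semantics are from this patch):

    #include <cmath>

    // derive the effective Newton tolerance from the raw tolerance as
    // described above; minFvVolume is the volume of the smallest finite
    // volume in the grid
    double effectiveNewtonTolerance(double rawTolerance, double minFvVolume)
    {
        return rawTolerance / std::sqrt(minFvVolume);
    }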
basically, the init() method was split into a finishInit() method,
which fills the data structures allocated in the constructor with
meaningful data, and an applyInitialSolution() method, which does just
that (and no more!)
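
a minimal sketch of the resulting calling sequence (the class is
illustrative; the method names are the ones introduced here):

    class Model
    {
    public:
        Model() { /* allocate the data structures, nothing else */ }

        // fill the data structures allocated in the constructor with
        // meaningful data
        void finishInit() { /* ... */ }

        // apply the initial solution -- and no more!
        void applyInitialSolution() { /* ... */ }
    };

    // what used to be a single model.init() call now becomes:
    //     Model model;
    //     model.finishInit();
    //     model.applyInitialSolution();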
"intensive" means that the value of these quantities at a given
spatial location does not depend on any value of the neighboring
intensive quantities. In contrast, "extensive" quantities depend in
the intensive quantities of the environment of the spatial location.
this change is necessary because the previous nomenclature was very
specific to finite volume discretizations, while the models themselves
were already rather generic. (i.e., "volume variables" are the
intensive quantities of finite volume methods and "flux variables"
are the extensive ones.)
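
an illustrative sketch of the dependency difference (the class names
mirror the new nomenclature; the signatures are made up):

    struct PrimaryVariables { /* ... */ };

    struct IntensiveQuantities
    {
        // computable from the values at a single spatial location,
        // without looking at any neighboring locations
        void update(const PrimaryVariables& priVars) { /* ... */ }
    };

    struct ExtensiveQuantities
    {
        // requires the intensive quantities of the environment of the
        // spatial location, e.g. of both sides of a sub-control volume face
        void update(const IntensiveQuantities& inside,
                    const IntensiveQuantities& outside) { /* ... */ }
    };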
For some reason, the result changed with the transition to the primary
variable switching approach. Because I trust the new result more than
the old one, let's make it the new reference solution.