it uses ebos for linearization of the mass balance equations and the
current flow code from opm-simulators for all the rest. currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not optimal: on SPE9, converting from and to the legacy
data structures takes about a third of the time of the actual mass
balance assembly. nevertheless, `flow_ebos` is almost as fast as plain
`flow` for SPE9. (for Norne, `flow_ebos` is about 15% slower even
though the results match quite closely; the reason seems to be that it
requires more iterations.)
this broke with 94006531. I had actually fixed the reservoir problem
yesterday before pushing 94006531, but forgot to include the fix in my
local branch before pushing. Stupid me!
This commit adds sequential solvers, including a simulator variant
using them (flow_sequential.cpp) with an integration test (running
SPE1, same as for fully implicit).
The sequential code is capable of running several (but not all) test
cases without tuning or special parameters, but reducing ds_max a bit
(from the default 0.2 to, say, 0.1) helps with transport solver
convergence. The Norne model runs fine (especially with a little
tuning). A parameter iterate_to_fully_implicit (defaults to false) is
available; when set, the simulator will iterate with alternating
pressure and transport solves towards the fully implicit
solution. Although that takes a lot of extra time, it serves as a
correctness check; see the example below.
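For illustration, an invocation exercising both knobs might look like
this (the deck path is a placeholder; the parameter names are the ones
described above):
```
./bin/flow_sequential deck_filename=SPE1CASE1.DATA ds_max=0.1 iterate_to_fully_implicit=true
```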
Performance is not competitive with fully implicit at this point:
essentially both the pressure and transport models inherit the fully
implicit model and do a lot of double (or triple) work. The point has
been to establish a proof of concept and baseline for further
experiments, without disturbing the base model too much (or at all, if
possible).
Changes to existing code have been minimized by merging most such
changes as smaller PRs already; the only remaining change is to
NewtonIterationBlackoilInterleaved. Admittedly, that code (to solve
the pressure system with AMG) is not ideal, because it duplicates
similar code in CPRPreconditioner.hpp and is not parallel. I propose
to address this later by refactoring the "solve elliptic system" code
from CPRPreconditioner into a separate class that can also be used
from here.
On my system I got
```c++
error: variable ‘std::ofstream file’ has initializer but incomplete type
std::ofstream file(fname.str().c_str());
```
This is fixed with this commit by including fstream. Previously, this
include might have happened implicitly.
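The change itself is just the missing include:
```c++
#include <fstream> // for std::ofstream; previously pulled in only transitively
```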
now, the dune-alugrid module is required if these tests are to be
run. (note that because the OPM build system has not been detecting
the legacy alugrid library for a while, the practical implications of
this patch should be small to non-existent.)
this is necessary to allow non-trivial ParameterCache objects with
Local-AD evaluations. So far, the only fluid system in opm-material
which needs this is the Spe5 fluid system (which is unused by eWoms),
but sooner or later this change would have been required anyway.
Note that this patch may be erroneous if Evaluation != Scalar for a
fluid system that uses a non-trivial ParameterCache object, but such
errors should be relatively easy to fix...
Have removed the SimulatorState base class and replaced it with the
SimulationDataContainer class from opm-common. The SimulatorState
objects were typically created with a default constructor and then
explicitly initialized with a SimulatorState::init() method. For the
SimulationDataContainer, RAII is employed; the init() method has been
removed, and there is no default constructor.
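A minimal sketch of the before/after pattern (the include path and
constructor arguments are assumptions for illustration, not the exact
opm-common API):
```c++
#include <opm/common/data/SimulationDataContainer.hpp>
#include <cstddef>

void makeState(std::size_t num_cells, std::size_t num_faces,
               std::size_t num_phases)
{
    // before: default-construct, then two-phase initialization
    //   SimulatorState state;
    //   state.init(num_cells, num_faces, num_phases);

    // after: RAII -- the object is fully initialized by its
    // constructor, so a half-initialized state can no longer exist
    Opm::SimulationDataContainer state(num_cells, num_faces, num_phases);
}
```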
There were loops (over all timesteps) both in the
main() function and the simulator class.
Note:
This simulator cannot properly handle changing
well configurations, and will now use only the
initial configuration (first report step), instead
of possibly crashing later.
this basically means using Opm::EclipseState instead of the raw deck
for these keywords.
with this, property modifiers like ADD, MULT, COPY and friends are
supported for at least the PERM* keywords. If additional keywords are
required, these can be added relatively easily as well.
no ctest regressions have been observed with this patch on my machine.
opm/core/utility/thresholdPressures.hpp
tests/test_thresholdpressure.cpp
opm/core/simulator/SimulatorCompressibleTwophase.hpp
opm/core/simulator/SimulatorCompressibleTwophase.cpp
opm/core/simulator/SimulatorIncompTwophase.hpp
opm/core/simulator/SimulatorIncompTwophase.cpp
examples/sim_2p_comp_reorder.cpp
the files in opm/core have been moved to opm/simulators
i.e., the simulation of the CO2 injection problem, which uses the
flash solver to handle its thermodynamics and the element-centered
finite volume method as the spatial discretization. The intention is
to ensure that opm-material's NcpFlash constraint solver works with
non-primitive types as Scalars (or rather, that it will be quickly
detected if it breaks in that case).
the in-file lists of authors have been removed in favor of a global
list of authors in the LICENSE file. this is done because (a)
maintaining a list of authors at the beginning of a file is a major
pain in the a**, (b) the list of authors was not accurate in about 85%
of all cases where more than one person was involved, and (c) such a
list is not legally binding in any way (the copyright stays with the
person who authored a given change; if these lists had any legal
relevance, one could "acquire" the copyright of the module by forking
it and removing the lists...)
the only exception to this is the eWoms fork of dune-istl's solvers.hh
file. This is beneficial because the authors of that file do not
appear in the global list. Further, carrying the fork of that file is
required because we would like to use a reasonable convergence
criterion for the linear solver. (the solvers from dune-istl neither
support user-defined convergence criteria nor do the developers want
to support them; my patch was rejected a few years ago.)
opm-parser#677 changes the return types for the Deck family of classes.
This patch fixes all broken code from that patch set.
https://github.com/OPM/opm-parser/pull/677
the changes enable the storage cache and the intensive quantity cache
for all simulators of the lens problem, and automatic differentiation
for the one which uses the ECFV discretization.
while the performance improvements are not worthwhile for the problem
in its default incarnation (using automatic differentiation even
slightly degrades performance), it speeds up linearization by about
30% if the grid exhibits 16 times as many elements (e.g., by passing
the --grid-global-refinements=2 parameter).
Several files stopped compiling because they relied on opm-parser
headers transitively including other headers. Since opm-parser PR-656
(https://github.com/OPM/opm-parser/pull/656) this assumption is no
longer valid.
at least, they compile as far as eWoms is concerned. Some external
libraries (in particular everything which uses SuperLU) still have
issues.
Also, there seem to be issues with the precision that is achievable
by the Newton method when using float.
this is because the reference solution changes for newer versions of
dune-alugrid, and one of the main purposes of the lens problem is to
allow comparison with Dumux relatively easily. (Dumux uses YaspGrid
for its version of the lens problem.)
- start with an initial "do nothing" episode of 100 days to get
hydrostatic conditions.
- after that, produce oil and inject water for 900 days. (thereafter
the reservoir will be empty.)
- make the problem work with element centered FV discretizations. this
requires the injection/production areas to be at least one cell
wide. This is achieved by using the new "WellWidth" property, which
specifies the width of wells as a fraction of the total domain width.
- make the problem work with fully compositional models. This required
calculating the full composition for the fluid states which specify
the initial condition and the thermodynamic state at the wells.
- add tests and reference solutions for any combination of the {ECFV,
VCFV} discretizations and the {black-oil, NCP} models.
- the residual now does not consider constraints anymore
- instead, the central place for constraints is the linearizer:
- it gets a constraintsMap() method which is analogous to residual()
but stores (DOF index, constraints vector) pairs because typically
only very few DOFs need to be constrained (see the sketch below).
- the newton method consults the linearizer's constraint map to update
the error and the current iterative solution. the primary variables
for constrained degrees of freedom are now directly copied from the
'Constraints' object to correctly handle pseudo primary variables.
- the ability to specify partial constraints is removed, i.e., it is
no longer possible to constrain some equations/primary variables of
a degree of freedom without specifying all of them. The reason is
that, AFAICS, with partially constrained DOFs it is impossible to
specify the pseudo primary variables for models which require them
(PVS, black-oil).
because of this, the reference solution for the Navier-Stokes test
is updated. the test still oscillates like hell, but fixing this
would require implementing spatial discretizations that are either
better in general (e.g., DG methods) or adapted to Navier-Stokes
problems (e.g., staggered grid FV methods). since both of these are
currently quite low on my list of priorities, let's just accept the
oscillations for now.
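to illustrate the constraint map idea, here is a minimal sketch (the
class layout and member names are assumptions, not the actual eWoms
code):
```c++
#include <map>

// the linearizer stores constraints separately from the residual; a
// map is used because typically only very few DOFs are constrained
template <class Constraints>
class Linearizer
{
public:
    std::map<unsigned, Constraints>& constraintsMap()
    { return constraintsMap_; }

private:
    std::map<unsigned, Constraints> constraintsMap_; // DOF index -> constraints
};

// the Newton method copies the primary variables of constrained DOFs
// directly from the Constraints objects; this also transfers any
// pseudo primary variables
template <class SolutionVector, class Constraints>
void applyConstraints(SolutionVector& nextSolution,
                      const std::map<unsigned, Constraints>& constraintsMap)
{
    for (const auto& dofIdxAndConstraints : constraintsMap)
        nextSolution[dofIdxAndConstraints.first] =
            dofIdxAndConstraints.second; // assumes assignable primary variables
}
```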
This reduces the difference between flow and flow_mpi. For builds
without MPI, the fake helper from Dune is instantiated, which has
the same interface.
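A minimal sketch of the pattern (the header path is the one used by
recent Dune versions):
```c++
#include <dune/common/parallel/mpihelper.hh>
#include <iostream>

int main(int argc, char** argv)
{
    // initializes MPI when compiled with it; for builds without MPI,
    // Dune instantiates a fake helper with the same interface
    const Dune::MPIHelper& mpiHelper = Dune::MPIHelper::instance(argc, argv);
    std::cout << "rank " << mpiHelper.rank()
              << " of " << mpiHelper.size() << std::endl;
    return 0;
}
```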
i.e. it now supports stuff like MULTFLT in the schedule
section. Possibly, the MPI-parallel code paths need some fixes. (but
if the geology is not changed during the simulation, the parallel code
will do the same as before.)
the most fundamental change of this patch is that the
reference/pointer to the DerivedGeology object is made
non-constant. IMO that's okay, though, because the geology can no
longer be assumed to be constant over the whole simulation run.
to be able to determine the threshold pressure from the initial
condition, the code needs access to that condition, i.e., the initial
simulator state, the material properties object and the gravity
constant need to be passed to the thresholdPressures() function.
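roughly, the extended call could look like this (a sketch only; the
parameter list is an assumption based on the description above, not
the exact signature):
```c++
// the call now takes the initial state, the material properties and
// gravity in addition to what it needed before:
//
//   std::vector<double> thresholds =
//       Opm::thresholdPressures(eclipseState, grid,
//                               initialState,  // initial simulator state
//                               props,         // material properties
//                               gravity);      // gravity constant
```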
Since the refactoring to use opm-material, a material law manager
based on the global grid was used. This meant that the properties used
for elements of the local grid were wrong. With this commit we set up
a manager that is based on the local grid only.
1. Added a new parameter group string, "solver_approach", which can
take the values {direct, cpr, interleaved}.
2. Hierarchy:
i. If a value is set in the parameter group, that takes absolute
precedence.
ii. If the Eclipse input deck asks for CPR, you get CPR.
iii. Otherwise you get the flow default - currently interleaved.
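For example, forcing CPR from the command line might look like this
(the deck name is a placeholder):
```
./bin/flow deck_filename=CASE.DATA solver_approach=cpr
```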
Function Opm::thresholdPressures() gained a new ParseMode parameter
in commit OPM/opm-core@09aa2b7 (PR OPM/opm-core#857). This chases the
updated call interface.
assumes:
- solvent is immiscible in the oil phase
- gas pvt and relperms are used for the solvent
- no initial solvent in the model
Solvent is injected using the WSOLVENT keyword.
TODO: Make it possible to change WSOLVENT.
* github.com:OPM/ewoms:
adaptation works, needs revision.
[dune-fem] using discrete function works.
some further work on grid adaptivity
dune.module: add dune-fem as an optional dependency
Conflicts:
ewoms/common/start.hh
ewoms/io/basegridmanager.hh
ewoms/parallel/mpihelper.hh
this is not needed anymore because the grid manager is no longer a
singleton, and the grid is thus always destructed before
MPI_Finalize() is called.
these are mostly stylistic: the function bodies of most new methods
have been moved to the _impl.hpp file and the Simulator classes are
now templated on the grid type, so it should not be too hard to switch
them to Dune::CpGrid.
since SimulatorFullyImplicitCompressiblePolymer is now a template, the
opaque pointer stuff is also removed and the contents of the .cpp file
basically became _impl.hpp. for SimulatorFullyImplicitBlackoilPolymer,
the opaque pointer stuff was removed for the same reasons as for
SimulatorFullyImplicitBlackoil.
the actual unification of code is not yet done, but this patch points
out the direction in which this will go.
Also note that some synchronization with the ordinary blackoil
simulator (FLOW) was necessary to make it compile.
Finally, the parser currently likes to throw an exception (also for
the opm-polymer master) when eating the opm-data polymer test
case. This prevented me from properly testing this patch:
```
and@heuristix:~/src/opm-polymer|simplify_simulator > ./bin/flow_polymer deck_filename=/home/and/src/opm-data/polymer_test_suit/simple2D/2D_THREEPHASE_POLY_HETER.DATA
================ Test program for fully implicit three-phase black-oil flow ===============
--------------- Reading parameters ---------------
deck_filename found at /, value is /home/and/src/opm-data/polymer_test_suit/simple2D/2D_THREEPHASE_POLY_HETER.DATA
output not found. Using default value 'true'.
output_dir not found. Using default value 'output'.
Program threw an exception: IOConfig: Reading GRIDFILE keyword from GRID section: Output of GRID file is not supported
terminate called after throwing an instance of 'std::runtime_error'
what(): IOConfig: Reading GRIDFILE keyword from GRID section: Output of GRID file is not supported
Aborted
```
i.e., removing redundant namespace openings and closings now that the
property system resides in the 'Ewoms' namespace instead of 'Opm', and
making the headercheck work for all headers.
for the Richards model we can't use the CO2 injection problem because
this problem cannot be simulated by the Richards model. (Well,
strictly speaking the Richards model *can* simulate it, but it would
only produce garbage because the assumptions of the Richards model are
violated by that problem.)
this works by introducing a splice called "LocalLinearizerSplice".
The current local linearizer (which is based on the finite difference
method) is the default and can be set explicitly by setting the splice
to "FiniteDifferenceLocalLinearizer"; the new linearizer using
automatic differentiation can be selected by setting the splice to
"AutoDiffLocalLinearizer".
As it turns out, initializing the Geology on a distributed grid
results in wrong values for e.g. saturation. Therefore, with this
commit we resort to initializing the global geology and distributing
it using communication.
Previously we used the size of the communicator within CpGrid to check
whether we are running in parallel and need to redistribute the grid.
Unfortunately, this is MPI_COMM_SELF until we actually load balance
and redistribute. Therefore we now use the size of the MPIHelper
communicator (i.e. MPI_COMM_WORLD), which gives us the number of all
available processes.
Note that the wrong behaviour was provoked with 656e5de331. Before
that we redistributed in any case, which luckily included runs with
more than 1 process.
Any argument that is not handled by the parameter parser will
be assumed to be a deck filename. Only one is accepted, and if
given, it will override any deck_filename=<something> on the
command line or in parameter files.
FYI:
The parameter parser handles arguments of the following types:
key=value (note no space around = or in strings)
parameterfile.xml
parameterfile.param
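So, for example, the following two invocations should now be
equivalent (paths are placeholders):
```
./bin/flow mycase.DATA output_dir=out
./bin/flow deck_filename=mycase.DATA output_dir=out
```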
This makes some API changes to AutoDiffBlock.
- Add overload for the constant() constructor taking rvalue ref.
- Add overload for the variable() constructor taking rvalue ref.
- Make the function() constructor *require* rvalue refs.
- Add a swap() function.
The remaining changes in this commit are follow-ups especially
to the third change (adding std::move in many places), and
some removal of unnecessary block pattern arguments from calls to
the constant() static method.
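A sketch of what calling code looks like after this change (the
typedefs and include path follow common opm-autodiff usage; the exact
signatures are assumed from the description above):
```c++
#include <opm/autodiff/AutoDiffBlock.hpp>
#include <utility>
#include <vector>

typedef Opm::AutoDiffBlock<double> ADB;

void example(ADB::V val, std::vector<ADB::M> jac)
{
    // function() now requires rvalue refs, so callers must move their
    // data in; this avoids copying potentially large Eigen objects
    ADB x = ADB::function(std::move(val), std::move(jac));
    (void)x;
}
```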
now the generic part of the update of the solution vector is done in
the base class, and the derived classes can choose to only do the
update of the primary variables of the individual DOFs.
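the resulting structure might be sketched like this (the class and
method names here are assumptions, not the actual ones):
```c++
#include <cstddef>

// base class: owns the generic loop over all degrees of freedom
template <class Implementation, class SolutionVector>
struct NewtonMethodBase
{
    void updateSolution(SolutionVector& nextSolution,
                        const SolutionVector& currentSolution,
                        const SolutionVector& delta)
    {
        for (std::size_t dofIdx = 0; dofIdx < currentSolution.size(); ++dofIdx)
            // derived classes only customize the per-DOF update
            asImp_().updatePrimaryVariables(nextSolution[dofIdx],
                                            currentSolution[dofIdx],
                                            delta[dofIdx]);
    }

private:
    Implementation& asImp_()
    { return static_cast<Implementation&>(*this); }
};
```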
With the now generic implementation of initStateEquil in opm-core, we
added the necessary grid helper functionality for CpGrid and activated
the processing if the EQUIL keyword is present.
Previously, BlackoilPropsDataHandle held grids for sending and
receiving that were either not used or whose usage we could
prevent. Therefore this commit removes them from the class and queries
all needed information from the property objects.
1) swatinit() is changed to setSwatInitScaling() to make it obvious
that we are modifying the props.
2) the descriptions of saturation and pc now make more sense.
3) the method is removed from the sibling class and the interface, and
the type of new_props is changed from BlackoilPropsAdInterface to
BlackoilPropsAdFromDeck.
4) the same modification is added to sim_fibo_ad_cp.
The capillary pressure function in new_props is scaled to match the
capillary pressure function in props. This is a temporary workaround
while the simulator uses two different property objects.
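The renamed call then reads roughly like this (the argument names are
assumptions based on the description; not the exact signature):
```c++
// the name makes the mutation explicit: scale the capillary pressure
// function of new_props to match the one in props
new_props.setSwatInitScaling(saturation, pc);
```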
Previously, we had to use two layers of overlap cells such that the
innermost layer automatically contains the right values (as it is
surrounded by internal edges). Now we use communication to get the
correct values in the whole overlap region, and one layer suffices, as
it should.
With this commit we add the possibility to start with a global
representation of the simulation that is read on each process;
afterwards this representation is redistributed among the processors,
together with the properties and state data needed to initialize the
simulation.
There still is no parallel well handling and no parallel output. But
with the equilibrium example of @dr-robertk and deactivated output we
can already perform parallel runs.
As with opm-core, we use boost::any to provide additional information
about a parallel run. It is used to set a ParallelISTLInformation
object and fill it with the information obtained from a parallel
CpGrid.
Note that the simulator currently compiles successfully. Still, we
have to test the runs and do debugging.
This reverts commit c6c271f3ee. After a more thorough investigation,
the canonical name of these quantities turned out to be "* formation
volume factor"...
this also fixes the SuperLU backend with __float128 on Dune 2.4. The
problem is that, due to some hacks within dune-istl, the AMG solver
can't be used, because it calls the direct solver directly without an
option to disable this. (This could be fixed in a similar fashion as
the SuperLU backend, by copying everything into data structures which
use 'double' before calling into ISTL, but that is a thing for another
time.)
... and use the parallel AMG solver for the CO2 injection problem.
this makes performance comparisons with Dumux much easier, as the
solver performance should be more similar.