i.e., it now supports keywords like MULTFLT in the SCHEDULE section.
Possibly, the MPI-parallel code paths need some fixes, but if the
geology does not change during the simulation, the parallel code will
do the same as before.
The most fundamental change of this patch is that the
reference/pointer to the DerivedGeology object is made
non-constant. IMO that's okay, though, because the geology can no
longer be assumed to be constant over the whole simulation run.
To be able to determine the threshold pressure from the initial
condition, the code needs access to that condition, i.e., the initial
simulator state, the material properties object and the gravity
constant need to be passed to the thresholdPressure() function.
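A minimal sketch of what the extended interface amounts to; the names and
the signature below are illustrative, not the actual opm-core API:
```
// Hypothetical sketch: the threshold-pressure computation now receives the
// initial simulator state, the material properties object and the gravity
// constant, so it can evaluate the equilibrated initial condition.
#include <vector>

struct SimulatorState { std::vector<double> pressure, saturation; };
struct MaterialProps  { /* fluid and rock property accessors */ };

std::vector<double> thresholdPressure(const SimulatorState& initialState,
                                      const MaterialProps&  props,
                                      double                gravity)
{
    std::vector<double> thpres;
    // ... derive a threshold pressure per fault face from the pressure jump
    //     implied by the initial condition ...
    (void)initialState; (void)props; (void)gravity;
    return thpres;
}
```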
Since the refactoring to opm-material, a material law manager built
for the global grid was used. This meant that the properties used for
elements of the local grid were wrong. With this commit we set up a
manager that is based on the local grid only.
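A rough sketch of the idea, with hypothetical helper names (the exact
opm-material entry point and its signature may differ between versions):
```
// Hypothetical sketch: build the material law manager from the cells of the
// local (distributed) grid instead of reusing the one built for the global grid.
#include <memory>
#include <vector>

template <class MaterialLawManager, class Deck, class EclState, class Grid>
std::shared_ptr<MaterialLawManager>
createLocalMaterialLawManager(const Deck& deck, const EclState& eclState,
                              const Grid& localGrid)
{
    // map each compressed local cell index to its global Cartesian index
    std::vector<int> compressedToCartesian(localGrid.numCells());
    for (int c = 0; c < localGrid.numCells(); ++c)
        compressedToCartesian[c] = localGrid.globalCell()[c];

    auto manager = std::make_shared<MaterialLawManager>();
    manager->initFromDeck(deck, eclState, compressedToCartesian);
    return manager;
}
```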
1. Added a new parameter group string, "solver_approach", which can take
   the values {direct, cpr, interleaved}.
2. Hierarchy (see the sketch below):
   i.   If a value is set in the parameter group, that takes absolute
        precedence.
   ii.  If the Eclipse input deck asks for CPR, you get CPR.
   iii. Otherwise you get the flow default, currently interleaved.
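A minimal sketch of that precedence, purely illustrative (the real selection
happens in the simulator setup code):
```
// Illustrative only: an explicit "solver_approach" parameter wins, otherwise a
// CPR request in the deck is honoured, otherwise the flow default is used.
enum class SolverApproach { Direct, Cpr, Interleaved };

SolverApproach chooseSolverApproach(bool paramSet, SolverApproach paramValue,
                                    bool deckRequestsCpr)
{
    if (paramSet)
        return paramValue;              // (i)   absolute precedence
    if (deckRequestsCpr)
        return SolverApproach::Cpr;     // (ii)  deck asks for CPR
    return SolverApproach::Interleaved; // (iii) flow default
}
```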
Function Opm::thresholdPressures() gained a new ParseMode parameter
in commit OPM/opm-core@09aa2b7 (PR OPM/opm-core#857). Chase the updated
call interface.
Assumes:
- the solvent is immiscible in the oil phase
- gas PVT and relperms are used for the solvent
- there is no initial solvent in the model

Solvent is injected using the WSOLVENT keyword.
TODO: make it possible to change WSOLVENT.
These are mostly stylistic: the function bodies of most new methods
have been moved to the _impl.hpp file, and the Simulator classes are
now templated on the grid type, so it should not be too hard to switch
them to Dune::CpGrid.
Since SimulatorFullyImplicitCompressiblePolymer is now a template, the
opaque pointer machinery is also removed and the contents of the .cpp file
have essentially become the _impl.hpp. For SimulatorFullyImplicitBlackoilPolymer,
the opaque pointer machinery was removed for the same reasons as for
SimulatorFullyImplicitBlackoil.
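A condensed sketch of the resulting pattern (the member list is abbreviated
and purely illustrative):
```
// SimulatorFullyImplicitCompressiblePolymer.hpp: the class is now a template
// on the grid type, so no opaque pointer to a grid-specific implementation is
// needed any more.
template <class GridT>
class SimulatorFullyImplicitCompressiblePolymer
{
public:
    explicit SimulatorFullyImplicitCompressiblePolymer(const GridT& grid);
    void run();
private:
    const GridT& grid_;
};

// SimulatorFullyImplicitCompressiblePolymer_impl.hpp: what used to live in
// the .cpp file, included at the bottom of the header so that any grid type
// (e.g. Dune::CpGrid) can be instantiated.
template <class GridT>
SimulatorFullyImplicitCompressiblePolymer<GridT>::
SimulatorFullyImplicitCompressiblePolymer(const GridT& grid)
    : grid_(grid)
{}

template <class GridT>
void SimulatorFullyImplicitCompressiblePolymer<GridT>::run()
{
    // the time stepping loop goes here
}
```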
The actual unification of the code is not yet done, but this patch points
out the direction in which this will go.
Also note that some synchronization with the ordinary blackoil
simulator (FLOW) was necessary to make it compile.
Finally, the parser currently throws an exception (also on the
opm-polymer master) when reading the opm-data polymer test
case. This prevented me from properly testing this patch:
```
and@heuristix:~/src/opm-polymer|simplify_simulator > ./bin/flow_polymer deck_filename=/home/and/src/opm-data/polymer_test_suit/simple2D/2D_THREEPHASE_POLY_HETER.DATA
================ Test program for fully implicit three-phase black-oil flow ===============
--------------- Reading parameters ---------------
deck_filename found at /, value is /home/and/src/opm-data/polymer_test_suit/simple2D/2D_THREEPHASE_POLY_HETER.DATA
output not found. Using default value 'true'.
output_dir not found. Using default value 'output'.
Program threw an exception: IOConfig: Reading GRIDFILE keyword from GRID section: Output of GRID file is not supported
terminate called after throwing an instance of 'std::runtime_error'
what(): IOConfig: Reading GRIDFILE keyword from GRID section: Output of GRID file is not supported
Aborted
```
As it turns out, initializing the Geology on a distributed grid
results in wrong values for e.g. the saturations. Therefore, with this
commit we resort to initializing the geology on the global grid and
distributing it using communication.
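A much simplified sketch of the idea using plain MPI (the names are
hypothetical; the real code uses the grid's communication interface and
scatters only the locally owned cells):
```
// Hypothetical sketch: compute a geology quantity once on the full grid on
// rank 0, then hand a copy to every rank instead of recomputing it on the
// already distributed grid.
#include <mpi.h>
#include <vector>

void distributeGeology(std::vector<double>& poreVolume, int globalNumCells)
{
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank != 0)
        poreVolume.resize(globalNumCells);

    // rank 0 holds the globally computed values; broadcast them to all ranks
    MPI_Bcast(poreVolume.data(), globalNumCells, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
```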
Previously we used the size of the communicator within the CpGrid to check
whether we are running in parallel and need to redistribute the grid.
Unfortunately, that communicator is MPI_COMM_SELF until we actually load
balance and redistribute. Therefore we now use the size of the MPIHelper
(i.e. of MPI_COMM_WORLD), which gives us the number of all available processes.
Note that the wrong behaviour was provoked by 656e5de331; before that commit we
redistributed in any case, which luckily covered runs with more than one process.
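A minimal sketch of the changed check, assuming Dune's MPIHelper is available
(illustrative, not the exact code):
```
// Decide whether to load balance based on the total number of ranks known to
// the MPI helper (MPI_COMM_WORLD), not on the grid's communicator, which is
// still MPI_COMM_SELF before loadBalance() has been called.
#include <dune/common/parallel/mpihelper.hh>

bool mustRedistribute(const Dune::MPIHelper& mpiHelper)
{
    return mpiHelper.size() > 1;
}
```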