No templates are involved, so there is no reason to keep this in a header. It also makes the build more robust, since HAVE_MPI is now only checked in the .cpp file, after config.h has been included.
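the pattern in question, as a minimal sketch (the helper function is illustrative, not the actual code):
```cpp
// in the .cpp file only: config.h defines HAVE_MPI, so it must be
// included before the macro is evaluated anywhere
#include <config.h>

#if HAVE_MPI
#include <mpi.h>
#endif

// illustrative helper, not the actual opm-simulators code
void initParallelEnvironment(int& argc, char**& argv)
{
#if HAVE_MPI
    MPI_Init(&argc, &argv);
#endif
}
```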
so far, the actual specializations of the simulator were compiled into the `libopmsimulators` library, and the build of the glue code (`flow.cpp`) thus had to be deferred until the library was fully built. since compiling the glue code requires the full property hierarchy for handling command line parameters, this arrangement significantly increases the build time on systems with a sufficient number of parallel build processes. ("sufficient" here means 8 or more threads, i.e., a quad-core system with hyperthreading is enough provided that it has enough main memory.)
the new approach is to not include these objects in `libopmsimulators`, but to deal with them directly in the `flow` binary. this allows all of them and the glue code to be compiled in parallel.
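roughly, the structure now looks like this (entry-point names are paraphrased from the flow_ebos_* translation units; the real dispatch also inspects the deck for solvent, polymer and energy keywords):
```cpp
// each flow_ebos_*.cpp compile unit exports one entry point; flow.cpp
// only declares them and selects the matching specialization at runtime
namespace Opm {
int flowEbosBlackoilMain(int argc, char** argv);
int flowEbosGasOilMain(int argc, char** argv);
int flowEbosOilWaterMain(int argc, char** argv);
}

int main(int argc, char** argv)
{
    // illustrative placeholders: the real code parses the deck far
    // enough to determine the active phases
    const bool gasOil = false;
    const bool oilWater = false;

    if (gasOil)
        return Opm::flowEbosGasOilMain(argc, argv);
    if (oilWater)
        return Opm::flowEbosOilWaterMain(argc, argv);
    return Opm::flowEbosBlackoilMain(argc, argv);
}
```
since all of these compile units are attached directly to the `flow` target, make can schedule them alongside `flow.cpp` instead of waiting for `libopmsimulators` to be linked first.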
compilation time on my machine before this change:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
Scanning dependencies of target opmsimulators
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_gasoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_blackoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_solvent.cpp.o
[ 4%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_polymer.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_energy.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater_polymer.cpp.o
[ 6%] Linking CXX static library lib/libopmsimulators.a
[ 97%] Built target opmsimulators
Scanning dependencies of target flow
[100%] Building CXX object CMakeFiles/flow.dir/examples/flow.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m45.692s
user 8m47.195s
sys 0m11.533s
```
after:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
[ 91%] Built target opmsimulators
Scanning dependencies of target flow
[ 93%] Building CXX object CMakeFiles/flow.dir/flow/flow.cpp.o
[ 95%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_gasoil.cpp.o
[ 97%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_solvent.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_blackoil.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_energy.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m21.597s
user 8m49.476s
sys 0m10.973s
```
(this corresponds to a ~20% reduction of the time spent waiting for the compiler.)
this has several advantages:
- a consistent and complete help message is now printed when the -h or --help command line parameters are passed. most notably, this allows tab completion of parameters to be implemented generically for bash
- the full list of runtime parameters can now be printed before the simulator
has been run.
- all runtime parameters understood by ebos can be specified
- no hacks to marry the two parameter systems anymore
- command line parameters now follow the standard unix convention, i.e., `--param-name=value` instead of `param_name=value`
on the negative side, some parameters have been renamed and the syntax has changed, so calls to `flow` that specify parameters must be adapted.
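for instance, an invocation that previously looked roughly like this (the parameter name is purely illustrative):
```
flow CASE.DATA tolerance_mb=1e-6
```
now becomes:
```
flow --tolerance-mb=1e-6 CASE.DATA
```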
Energy conservation is enabled by specifying either TEMP or THERMAL in the deck. The deck also needs to contain the relevant fluid and rock heat properties.
The blackoil + energy equations are solved fully implicitly.
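for reference, this is roughly how such a variant is switched on at compile time via the ewoms property system (the type tag name is an assumption modeled on the flow_ebos_energy.cpp naming; the snippet is a sketch, not the verbatim source):
```cpp
// a dedicated type tag derives from the generic flow problem and
// enables the energy equation; everything else stays identical
NEW_TYPE_TAG(EclFlowEnergyProblem, INHERITS_FROM(EclFlowProblem));
SET_BOOL_PROP(EclFlowEnergyProblem, EnableEnergy, true);
```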
this class is only used by the legacy simulators; `flow` uses the `EclWriter` class provided by eWoms, which in turn uses the new-and-shiny "tasklet" mechanism.
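the gist of the tasklet mechanism, as a generic illustration (this is not the actual eWoms interface):
```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// output jobs are queued and executed by a dedicated worker thread so
// that the simulation loop does not block on disk I/O
class TaskletRunner {
public:
    TaskletRunner()
        : worker_([this] { run_(); })
    {}

    ~TaskletRunner()
    {
        {
            std::lock_guard<std::mutex> guard(mutex_);
            shutdown_ = true;
        }
        condition_.notify_one();
        worker_.join();
    }

    // enqueue a job and return immediately
    void dispatch(std::function<void()> tasklet)
    {
        {
            std::lock_guard<std::mutex> guard(mutex_);
            tasklets_.push(std::move(tasklet));
        }
        condition_.notify_one();
    }

private:
    void run_()
    {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            condition_.wait(lock, [this] { return shutdown_ || !tasklets_.empty(); });
            if (tasklets_.empty())
                return; // shutdown requested and queue fully drained
            auto tasklet = std::move(tasklets_.front());
            tasklets_.pop();
            lock.unlock();
            tasklet(); // e.g. write one report step to disk
        }
    }

    std::mutex mutex_;
    std::condition_variable condition_;
    std::queue<std::function<void()>> tasklets_;
    bool shutdown_ = false;
    std::thread worker_;
};
```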
changes to the simulator:
1) Don't depend on legacy code for communicating the data::wells
2) Bugfix. Store globalIdx instead of localIdx in data::wells::completions
3) Move ThreadHandle to ebos
This seems to have been forgotten previously. Now the code in CPRPreconditioner.hpp uses ParallelOverlappingILU0 instead of SeqILU[0n]/BlockPreconditioner, which makes the code slimmer.
The approach is inspired by Geiger's system-amg, but we use the dune-istl aggregation AMG for it. On the fine level, all unknowns attached to a cell form a matrix block and are treated fully coupled. To form the first coarse level system, we use only the pressure component to guide the aggregation and neglect all other unknowns on the fine level. All further levels are formed in the usual way by scalar aggregation.
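a minimal sketch of the setup, assuming dune-istl's aggregation AMG (type names are illustrative; `Dune::Amg::FirstDiagonal` restricts the coarsening criterion to the (0,0) entry of each block, i.e. the pressure-pressure coupling):
```cpp
#include <dune/common/fmatrix.hh>
#include <dune/common/fvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/operators.hh>
#include <dune/istl/preconditioners.hh>
#include <dune/istl/paamg/amg.hh>

#include <memory>

// all unknowns of a cell form one fully coupled matrix block on the
// fine level (3 unknowns per cell chosen for illustration)
using Block    = Dune::FieldMatrix<double, 3, 3>;
using Matrix   = Dune::BCRSMatrix<Block>;
using Vector   = Dune::BlockVector<Dune::FieldVector<double, 3>>;
using Operator = Dune::MatrixAdapter<Matrix, Vector, Vector>;
using Smoother = Dune::SeqILU0<Matrix, Vector, Vector>;

// FirstDiagonal makes the aggregation look only at the pressure
// component; the other unknowns do not influence the coarsening
using Criterion = Dune::Amg::CoarsenCriterion<
    Dune::Amg::SymmetricCriterion<Matrix, Dune::Amg::FirstDiagonal>>;
using AMG = Dune::Amg::AMG<Operator, Vector, Smoother>;

std::unique_ptr<AMG> makePressureGuidedAmg(Operator& fineOperator)
{
    Criterion criterion;
    Dune::Amg::SmootherTraits<Smoother>::Arguments smootherArgs;
    return std::make_unique<AMG>(fineOperator, criterion, smootherArgs);
}
```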
Currently, it has to be requested manually for flow_ebos by passing "linear_solver_use_amg=true amg_blackoil_system=true" to it.
these files take the longest to compile, so moving them to the beginning of the list speeds up parallel builds: the remaining compile units can be built while the flow_ebos files are being processed, whereas the build stalls if these files are at the bottom of the list because they are required for the library.
The wells, FIP and the initial output of NNCs are still handled by code in opm-simulators. The plan is to move more of this functionality to ebos.
All tests pass and MPI restart works
It has been replaced with the faster local-ad-based code, which is now part of the integrated flow.cpp application.
We do not remove the old sequential implicit polymer simulators.
After the restructuring of the well model, keeping an extra class for the "Dense" model is not needed. The only thing still left in WellStateFullyImplicitBlackoilDense was some solvent-related stuff; this PR moves it to WellStateFullyImplicitBlackoil and removes WellStateFullyImplicitBlackoilDense.
In addition to cleaning up the code, this PR fixes missing solvent well output.
The motivation for this PR is that the build currently fails on my Ubuntu 17.10 laptop with two processes because that machine "only" has 8 GB of RAM (granted, the optimization options may have been a bit too excessive). under the new scheme, each specialization of the simulator is put into a separate compile unit which is part of libopmsimulators. this has the advantages that the specialized simulators and the main binary automatically stay consistent, that compilation is faster (2m25s vs. 4m16s on my machine) because all compile units can be built in parallel, and that compilation takes up less RAM because there is no need to instantiate all specializations in a single compile unit.
on the minus side, all specializations must now always be compiled, the approach means slightly more work for the maintainers, and the flow_* startup code gets even more complicated.
No extra equation is added for polymer in the well equation.
Separate executables are added for polymer (flow_ebos_polymer) and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance or results of the blackoil simulator.
All simulators now use SimulationDataContainer to store the intermediate data that is passed to the output Solution container. In some cases this is not the most efficient approach, but it is unified to avoid errors from code duplication.
now we have BlackoilDetails.hpp, which contains all the helpers that are used by flow_ebos as well as flow and which does not include anything from Eigen, and BlackoilLegacyDetails.hpp, which contains everything that depends on Eigen (and is thus not required by flow_ebos).
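the resulting include pattern (paths follow the opm/autodiff layout seen above):
```cpp
// flow_ebos compile units: only the Eigen-free helpers are needed
#include <opm/autodiff/BlackoilDetails.hpp>

// legacy flow compile units: additionally pull in the Eigen-dependent parts
#include <opm/autodiff/BlackoilDetails.hpp>
#include <opm/autodiff/BlackoilLegacyDetails.hpp>
```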
some files (e.g., thresholdPressures.hpp) are already missing in the
master version of this file, but most of them were specific to the
`frankenstein` branch.
thanks to @atgeirr for noticing this.
* origin/master:
Do not throw for unrecognized file when merging log files.
Do not populate cellData but issue a warning in parallel.
Removed ternary operator in inline initialization.
Correctly mark transfer of ownership for output writer
Indent nested #if
Remove Solution.sdc assignment
Cater variable name change in BCRSMatrix of DUNE 2.5
Fix using local active cells for writing eclipse files in parallel.
add restart test for SPE1CASE2_ACTNUM
rename the 'flow' binary to 'flow_legacy' and set a symbolic link
Added ctest for restart files
* master: (42 commits)
Let only one rank write to step_timing.txt
Do not refer users to issue tracker if multiple procs log.
Remove unused variable.
Use vector instead of VLA, also add missing includes.
changed: bundle eigen3 in the original tarball for debian
update redhat6 packaging
Bugfix parallel computation of weighted pressure etc.
Fixed uninitialized bug, and added logging/comment
Removed superfluous std::move
Refactoring
Initial version of summary data
Do not store collective communication in the wells object.
Make sure that updateWellControls is called on each process.
Make WellSwitchingLogger work with DUNE 2.3
Schedule::getGroup returns reference, not pointer
Removed warning in WellSwitchLogger::calculateMessageSize
Correctly initialize MPI for multisegment wells test
Changed some names in WellSwitchingLogger
Use speaking name for bool in getCellData
Whitespace and other formatting changes
...
almost all of them were caused by recent changes in the master branch:
- there were methods added which depend on the types `V` and `DataBlock`. these do not make much sense in the context of the frankenstein simulator. also, these types are defined globally for the whole Opm namespace in `BlackoilModelBase_impl.hpp` (which should be prosecuted as a felony, IMO)! besides this, their names are useless: `V` is the letter which comes after `U` in the alphabet, and when it comes to computers, basically everything can be seen as a chunk of data (i.e., a `DataBlock`).
- it seems like the new and shiny dense-AD-based well model was never compiled with assertions enabled; at least, some asserts referenced non-existing variables.
- the recent output-related API changes were pretty unfortunate because they had the effect of tying the (sub-optimal, IMO) internal structure of the model even closer to the output code: as far as I can see, `rq` only makes sense if the model works *exactly* like BlackoilModelBase and friends. (for flow_ebos, this could be replicated, but first it would be another unnecessary conversion step, and second, most of the quantities in `rq` are of type `ADB`, and much of the "frankenstein" exercise is devoted to getting rid of these.) I thus reverted back to an old version of the output code and created a `frankenstein` branch in my personal `opm-output` github fork.
it uses ebos to linearize the mass balance equations and the current flow code from opm-simulators for all the rest. currently, the results match the ones from plain `flow` for SPE1, SPE9 and Norne, but performance is not yet optimal: on SPE9, converting from and to the legacy data structures takes about a third of the time needed for the actual mass balance assembly. nevertheless, `flow_ebos` is almost as fast as plain `flow` for SPE9. (for Norne, `flow_ebos` is about 15% slower, even though the results match quite closely; the reason is that it requires more iterations.)
This commit adds sequential solvers, including a simulator variant using them (flow_sequential.cpp) with an integration test (running SPE1, same as for the fully implicit simulator).
The sequential code is capable of running several (but not all) test cases without tuning or special parameters, but reducing ds_max a bit (from the default 0.2 to, say, 0.1) helps with transport solver convergence. The Norne model runs fine (especially with a little tuning). A parameter iterate_to_fully_implicit (defaulting to false) is available; when set, the simulator will iterate with alternating pressure and transport solves towards the fully implicit solution. Although that takes a lot of extra time, it serves as a correctness check.
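schematically, a time step of the sequential scheme looks like this (templates are used to keep the sketch self-contained; the actual class interfaces differ):
```cpp
// one sequential time step: implicit pressure solve followed by an
// implicit transport solve, optionally iterated until the coupled
// residual is converged (the iterate_to_fully_implicit behavior)
template <class State, class PressureSolver,
          class TransportSolver, class ConvergedPredicate>
void sequentialStep(double dt, State& state,
                    PressureSolver& pressureSolver,
                    TransportSolver& transportSolver,
                    ConvergedPredicate fullyImplicitConverged,
                    bool iterateToFullyImplicit)
{
    for (;;) {
        pressureSolver.step(dt, state);   // pressure with frozen saturations
        transportSolver.step(dt, state);  // transport with frozen total flux

        // default mode: a single pressure/transport sweep per time step
        if (!iterateToFullyImplicit)
            return;

        // optional mode: keep alternating until the fully implicit
        // solution is reached; slow, but serves as a correctness check
        if (fullyImplicitConverged(state))
            return;
    }
}
```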
Performance is not competitive with the fully implicit simulator at this point: essentially, both the pressure and transport models inherit the fully implicit model and do a lot of double (or triple) work. The point has been to establish a proof of concept and a baseline for further experiments, without disturbing the base model too much (or at all, if possible).
Changes to existing code have been minimized by merging most such changes as smaller PRs already; the only remaining change is to NewtonIterationBlackoilInterleaved. Admittedly, that code (to solve the pressure system with AMG) is not ideal because it duplicates similar code in CPRPreconditioner.hpp and is not parallel. I propose to address this later by refactoring the "solve elliptic system" code from CPRPreconditioner into a separate class that can also be used from here.
The Todd-Longstaff model is extended to incorporate pressure effects. The effective solvent viscosity is then calculated as

    mu_eff = mu_s^(1 - alpha*omega) * mu_mix^(alpha*omega)

where omega accounts for the porous media effects and alpha = alpha(pressure) accounts for the miscibility of the solvent and oil when contacted.
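a direct transcription of the formula (the free function is for illustration only; in the code, alpha would be looked up from the TLPMIXPA table at the current pressure):
```cpp
#include <cmath>

// effective solvent viscosity after Todd-Longstaff with pressure-dependent
// miscibility: mu_eff = mu_s^(1 - alpha*omega) * mu_mix^(alpha*omega)
double effectiveSolventViscosity(double muSolvent,  // mu_s
                                 double muMixture,  // mu_mix
                                 double alpha,      // alpha(pressure), from TLPMIXPA
                                 double omega)      // Todd-Longstaff mixing parameter
{
    const double exponent = alpha * omega;
    return std::pow(muSolvent, 1.0 - exponent) * std::pow(muMixture, exponent);
}
```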
The alpha values can be given using the TLPMIXPA keyword. If no entries are given to TLPMIXPA, the table specified using PMISC will be used as a default. If TLPMIXPA does not appear in the deck, alpha = 1 and the pressure effect is neglected.
This is tested in test_solventprops_ad.cpp