The wells, FIP and initial output of NNCs are still handled
by code in opm-simulators. The plan is to move more of the
functionality to ebos.
All tests pass and MPI restart works.
This adapts to the new function signature as of PR 216 in opm-output. It uses the
newly introduced map of integer vectors to print the MPI ranks in a parallel run.
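For illustration only (the exact opm-output signature introduced by
PR 216 is not reproduced here), the rank data can be assembled as a
map of integer vectors along the following lines:

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical helper: build the map of integer vectors that carries the
    // owning MPI rank of every local cell. The key name and the way the map
    // is handed to the output writer are assumptions, not the verified API.
    std::map<std::string, std::vector<int>>
    makeIntegerCellData(std::size_t numCells, int myRank)
    {
        std::map<std::string, std::vector<int>> integerCellData;
        integerCellData["MPI_RANK"] = std::vector<int>(numCells, myRank);
        return integerCellData;
    }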
The wellModel is now persistent across time steps,
with an update method called at every reportStep/episode
(see the sketch after the list below).
This allows the following simplifications:
1. move the wellState to the WellModel
2. add a ref to the ebosSimulator to the wellModel
3. clean up the parameters passed to the wellModel methods
4. move RESV handling to the WellModel and the rateConverter
5. move the econLimit update to the WellModel
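A minimal sketch of the resulting structure, with all class and method
names invented for illustration rather than taken from the actual
opm-simulators code:

    // Hypothetical sketch: a well model that is constructed once, keeps a
    // reference to the ebos simulator, owns the well state, and is refreshed
    // at the beginning of every report step/episode.
    class Simulator;        // stands in for the ebos simulator type
    struct WellState {};    // stands in for the dynamic well state

    class PersistentWellModel
    {
    public:
        explicit PersistentWellModel(Simulator& simulator)
            : simulator_(simulator)
        {}

        // Called once per report step: re-read the well configuration,
        // convert RESV controls via the rate converter, update the economic
        // limits, etc.
        void beginReportStep(int /*reportStepIdx*/)
        {}

        WellState& wellState()             { return wellState_; }
        const WellState& wellState() const { return wellState_; }

    private:
        Simulator& simulator_;   // ref to the ebos simulator (item 2 above)
        WellState  wellState_;   // owned by the model itself (item 1 above)
    };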
The ThreadManager from ebos was not called, which resulted in some
havoc when attempting multi-threaded runs.
v2: use opm_get_max_threads() directly. Thanks to @akva2 for the heads-up.
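For context, such a helper usually boils down to an OpenMP guard; a
sketch of the general pattern, not the actual definition of
opm_get_max_threads():

    #ifdef _OPENMP
    #include <omp.h>
    #endif

    // Sketch only: query the maximum number of OpenMP threads, falling back
    // to a single thread when OpenMP is not available at compile time.
    inline int maxThreadsSketch()
    {
    #ifdef _OPENMP
        return omp_get_max_threads();
    #else
        return 1;
    #endif
    }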
The old message was not really accurate anymore because flow also
supports the polymer and solvent extensions. (Also, the parentheses
around the version were removed because they are not necessary.)
v2: use the message proposed by @atgeirr
v3: re-add accidentally removed website URL
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two processes because that machine "only" has
8 GB of RAM (granted, the optimization options may have been a bit too
aggressive). Under the new scheme, each specialization of the simulator
is put into a separate compile unit which is part of
libopmsimulators. This has the advantages that the specialized
simulators and the main binary automatically stay consistent, that
compilation is faster (2m25s vs 4m16s on my machine) because all
compile units can be built in parallel, and that compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
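The mechanism behind the split is the usual explicit-instantiation
scheme, sketched here with invented type and file names:

    // FlowMain.hpp (hypothetical): the generic simulator template plus
    // extern template declarations so that no other compile unit
    // instantiates the specializations on its own.
    struct BlackoilTypeTag {};   // stands in for the real type tags
    struct PolymerTypeTag  {};

    template <class TypeTag>
    class FlowMain
    {
    public:
        int execute(int /*argc*/, char** /*argv*/) { return 0; }
    };

    extern template class FlowMain<BlackoilTypeTag>;
    extern template class FlowMain<PolymerTypeTag>;

    // FlowMain_blackoil.cpp -- one such compile unit per specialization,
    // all of them built into libopmsimulators:
    //   #include "FlowMain.hpp"
    //   template class FlowMain<BlackoilTypeTag>;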
On the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
In particular, this implied some changes to the MPI initialization
code. Since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do even if MPI is enabled.
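The typical Dune-style MPI setup that this boils down to, shown as a
generic example rather than the exact code added here:

    #include <dune/common/parallel/mpihelper.hh>

    int main(int argc, char** argv)
    {
        // MPIHelper initializes MPI when it is compiled in and degrades to a
        // no-op stub for purely sequential builds.
        const auto& mpiHelper = Dune::MPIHelper::instance(argc, argv);

        if (mpiHelper.rank() == 0) {
            // print the banner on the master process only
        }

        // ... select and run the appropriate flow_* variant ...
        return 0;
    }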
No extra equation is added for polymer in the well equation.
Separate executables are added for polymer (flow_ebos_polymer)
and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance or results of the blackoil
simulator.
Possible values are none, log, and all. The first does not do any logging
to files. The second logs to files but does not create and log to
the DEBUG file. The last uses all possible files.
Adds two switches, no_prt_log and no_debug_log, that deactivate
writing to the PRT and DEBUG files.
One can now run flow_ebos without creating any output by
passing "output=false no_prt_log=true no_debug_log=true"
on the command line.
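A hypothetical sketch (enum and function names invented) of how the
three-valued setting maps onto the two switches:

    // Sketch only: translate the file-logging mode into the two booleans
    // controlled by no_prt_log and no_debug_log.
    enum class FileLogMode { None, Log, All };

    struct FileLogConfig
    {
        bool writePrtFile;
        bool writeDebugFile;
    };

    inline FileLogConfig makeFileLogConfig(FileLogMode mode)
    {
        switch (mode) {
        case FileLogMode::None: return { false, false }; // no logging to files
        case FileLogMode::Log:  return { true,  false }; // PRT file, but no DEBUG file
        case FileLogMode::All:  return { true,  true  }; // all possible files
        }
        return { true, true }; // unreachable, keeps compilers happy
    }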
The only thing that was used from this class was the phase usage object,
which can be accessed via much leaner interfaces.
The old BlackoilPropsFromDeck (without "Ad") is still required to
compute the initial condition, but the init code should be refactored
soon anyway.
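As a hypothetical example of what "leaner interface" means here (the
header path and the num_phases member are quoted from memory and should
be treated as assumptions):

    #include <opm/core/props/BlackoilPhases.hpp>   // header path assumed

    // Instead of passing the whole property object around, a routine that
    // only needs the phase usage receives just that small struct.
    int numActivePhases(const Opm::PhaseUsage& pu)
    {
        return pu.num_phases;
    }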
Now, the Dune APIs are used wherever possible and the data is
computed for the global grid, i.e. for parallel runs it does not need
to be gathered across the processes anymore. Also, the INIT file is
now only written once instead of twice.
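A generic example of the kind of Dune API usage meant here (not the
actual INIT-writing code): geometric cell data taken directly from the
element geometries of the grid view.

    #include <dune/grid/common/rangegenerators.hh>
    #include <vector>

    // Compute the volume of every cell of a (global) Dune grid view.
    template <class GridView>
    std::vector<double> cellVolumes(const GridView& gridView)
    {
        std::vector<double> volumes;
        volumes.reserve(gridView.size(/*codim=*/0));
        for (const auto& element : elements(gridView))
            volumes.push_back(element.geometry().volume());
        return volumes;
    }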
I've verified that the sequential and the parallel INIT files stay
identical for the Norne case and that the INIT file does not change
w.r.t. before this patch.