make it a template for grid types. this allows using
explicit template instantiation and compiling this code
only once per grid type, instead of once per simulator object.
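A minimal sketch of the explicit-instantiation pattern this enables (the type names here are placeholders, not the actual OPM classes):
```cpp
// Helper.hpp -- GridA/GridB stand in for the real grid types.
template <class Grid>
struct Helper
{
    int cellCount(const Grid& grid) const { return grid.size(); }
};

struct GridA { int size() const { return 10; } };
struct GridB { int size() const { return 20; } };

// Suppress implicit instantiation in every compile unit that
// includes this header:
extern template struct Helper<GridA>;
extern template struct Helper<GridB>;

// Helper.cpp -- the one compile unit where the template bodies are
// generated, once per grid type rather than once per simulator object:
template struct Helper<GridA>;
template struct Helper<GridB>;
```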
Adds a simple test case for gas lift optimization. Currently it is
very simplistic and covers only a fraction of the gas lift optimization
code. The plan is to use it as a building block to add more tests
in the future.
Also, tentative changes to compile the FPGA library from a different module: this part needs to be revised because it assumes a fixed path for the OPM/FPGA module.
Modified CMakeLists_files.cmake to remove files moved to the OPM/FPGA module.
BlackoilWellModel now stores an instance of this class for each
well. Inside that class there is a custom communicator that only
contains ranks that will have local cells perforated by the well.
This will be used in the application of the distributed well operator.
This is another small step in the direction of distributed wells,
but it should be safe to merge (note that creating the custom
communicators is a collective operation in MPI, but it is done only once).
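A minimal sketch of how such a communicator could be built (illustrative only, not the actual OPM code):
```cpp
#include <mpi.h>

// MPI_Comm_split is collective: every rank of the old communicator must
// call it. Ranks without perforated cells pass MPI_UNDEFINED and get
// MPI_COMM_NULL back, so the result contains only participating ranks.
MPI_Comm makeWellComm(MPI_Comm world, bool hasPerforatedCells)
{
    int rank = 0;
    MPI_Comm_rank(world, &rank);
    const int color = hasPerforatedCells ? 0 : MPI_UNDEFINED;
    MPI_Comm wellComm = MPI_COMM_NULL;
    MPI_Comm_split(world, color, rank, &wellComm);
    return wellComm;
}
```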
- The edit manipulations from EDITNNC have already been applied to the NNC data
from opm-common
- The NNC data structures are guaranteed to be ordered: cell1 <= cell2
within each entry, and the entries are sorted in ascending order (see the
sketch after this list)
- The NNC output to EGRID / INIT files is based on std::vector<NNCdata>
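A sketch of these two invariants in code (with a simplified stand-in for the NNCdata struct):
```cpp
#include <algorithm>
#include <cstddef>
#include <tuple>
#include <utility>
#include <vector>

struct NNCdata { std::size_t cell1, cell2; double trans; }; // simplified

void normalize(std::vector<NNCdata>& nncs)
{
    // Invariant 1: cell1 <= cell2 within each entry.
    for (auto& n : nncs)
        if (n.cell2 < n.cell1)
            std::swap(n.cell1, n.cell2);

    // Invariant 2: entries sorted in ascending order of (cell1, cell2).
    std::sort(nncs.begin(), nncs.end(),
              [](const NNCdata& a, const NNCdata& b)
              { return std::tie(a.cell1, a.cell2) < std::tie(b.cell1, b.cell2); });
}
```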
Switches between using the logarithmic and the unit scaling factor based
on whether or not the well has an explicit, positive drainage radius
(WELSPECS item 7). Does not presently include the D factor.
Add a set of unit tests to exercise the facility.
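The selection logic can be pictured as follows; this is a hedged sketch assuming a Peaceman-style ln(re/rw) factor, not the actual implementation:
```cpp
#include <cmath>

// Hypothetical helper: with an explicit, positive drainage radius
// (WELSPECS item 7) use the logarithmic factor, otherwise use unity.
double piScalingFactor(double drainageRadius, double wellboreRadius)
{
    if (drainageRadius > 0.0)
        return std::log(drainageRadius / wellboreRadius);
    return 1.0;
}
```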
These codes are reimplemented in the ebos simulator and should
be reused instead. This commit facilitates that and starts
reusing the logging setup code in ebos, thus reducing code duplication.
Also introduce a new class WellModelAsLinearOperator that makes a well model
into an actual Dune::LinearOperator. This prevents the TypeTag-dependent
type from leaking into the type of the WellModelMatrixAdapter instantiation.
As a side benefit, the adapter classes can now adapt (i.e. combine with a
matrix operator) any linear operator.
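The idea in rough outline (simplified relative to the real class; the category() override assumes a recent dune-istl where it is a virtual method):
```cpp
#include <dune/istl/operators.hh>

// Wraps any well model exposing apply()/applyScaleAdd() as a proper
// Dune::LinearOperator, so downstream code never sees the
// TypeTag-dependent well model type.
template <class WellModel, class X, class Y>
class WellModelAsLinearOperator : public Dune::LinearOperator<X, Y>
{
public:
    explicit WellModelAsLinearOperator(const WellModel& wm)
        : wellMod_(wm)
    {}

    void apply(const X& x, Y& y) const override
    { wellMod_.apply(x, y); }

    void applyscaledadd(typename X::field_type alpha, const X& x, Y& y) const override
    { wellMod_.applyScaleAdd(alpha, x, y); }

    Dune::SolverCategory::Category category() const override
    { return Dune::SolverCategory::sequential; }

private:
    const WellModel& wellMod_;
};
```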
Pull request #2521 forgot to add Main.hpp to CMakeLists_files.cmake.
Adding Main.hpp to CMakeLists_files.cmake such that OpmInstall.cmake will
install the file to $CMAKE_INSTALL_PREFIX/include/opm/simulators/flow
when running "make install".
They are attached to the cells as well and are now distributed
during CpGrid::loadBalance. Due to this change we also rename
FieldPropsDataHandle to PropsCentroidsDataHandle.
The created data handle for the communication could in theory be used
with other DUNE grids, too. In reality we will need to merge with the
handle that ALUGrid already uses to communicate the cartesian indices.
This PR gets rid of using the get_global_(double|int) method in
ParallelEclipseState and reduces the amount of boilerplate code there.
these are wrappers sitting on top of the EclipseState and
FieldPropsManager classes.
The former has some additional methods related to parallelism,
and the latter is a parallel frontend to the FieldPropsManager which
only holds the process-local data (in compressed arrays).
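Conceptually the parallel frontend looks something like this (method and member names are illustrative, not the real API):
```cpp
#include <map>
#include <string>
#include <vector>

// Answers field property queries from process-local compressed arrays
// instead of the global data held by FieldPropsManager.
class ParallelFieldPropsSketch
{
public:
    const std::vector<double>& get_double(const std::string& keyword) const
    { return doubleProps_.at(keyword); } // local values only

    const std::vector<int>& get_int(const std::string& keyword) const
    { return intProps_.at(keyword); }

private:
    // Filled once per process, e.g. by scattering the global arrays
    // during load balancing.
    std::map<std::string, std::vector<double>> doubleProps_;
    std::map<std::string, std::vector<int>> intProps_;
};
```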
This PR removes the usage of well_control_ from opm-core
and instead uses the control classes for wells and groups
from opm-common.
This PR also removes the usage of the group classes from
opm-core.
This replaces the makePreconditioner() function. The main advantage
is that it is extensible, making it easy to, for example, add new
preconditioners to the factory at runtime. It supports both parallel
and serial preconditioners.
Also use it in flow_blackoil_dunecpr.cpp. Adds a new command-line parameter,
--linear-solver-configuration-json-file, to read the linear solver
configuration from a JSON-format file at runtime.
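A minimal sketch of the factory idea (simplified; the real factory is keyed on more than a type name and also handles parallel setups):
```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <utility>

template <class Operator, class Preconditioner>
class PreconditionerFactorySketch
{
public:
    using Creator = std::function<std::unique_ptr<Preconditioner>(const Operator&)>;

    // New preconditioners can be registered at runtime -- the main
    // advantage over a fixed function.
    static void addCreator(const std::string& type, Creator c)
    { creators()[type] = std::move(c); }

    static std::unique_ptr<Preconditioner> create(const std::string& type,
                                                  const Operator& op)
    {
        auto it = creators().find(type);
        if (it == creators().end())
            throw std::runtime_error("Unknown preconditioner type: " + type);
        return it->second(op);
    }

private:
    static std::map<std::string, Creator>& creators()
    {
        static std::map<std::string, Creator> m;
        return m;
    }
};
```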
It seems like Eclipse ignores NNCs with small transmissibility.
Small means less than 1e-6 for Eclipse (even though it says it is
ignoring values below 1e-5 and/or zero values)!
This commit now implements the same threshold during IO.
Also fixes a bug when applying EDITNNC: it needs cell1 <= cell2 to work.
This commit extends class WellStateFullyImplicitBlackoil to report
segment-related quantities as Opm::data::Segment objects (included
in Opm::data::WellRates objects). All wells have at least a top
segment in the context of WellStateFullyImplicitBlackoil, so there
is a meaningful value to report for each well.
We put the extraction of segment-related quantities into a new
helper function
WellStateFullyImplicitBlackoil::reportSegmentResults()
to avoid cluttering up the body of report() more than absolutely
needed.
The primary use-case for this is assigning appropriate values to
items 8 through 11 of restart vector RSEG. In turn, this will
enable restoring these quantities from a restart file.
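A conceptual sketch of what the helper gathers (simplified stand-ins; the real member names of Opm::data::Segment differ):
```cpp
#include <cstddef>
#include <map>
#include <vector>

struct SegmentResult // stand-in for Opm::data::Segment
{
    double pressure = 0.0;
    double oilRate = 0.0;
};

// One entry per segment, keyed by segment number. Every well has at
// least a top segment, so the result is never empty.
std::map<int, SegmentResult>
reportSegmentResults(const std::vector<double>& segPressure,
                     const std::vector<double>& segOilRate)
{
    std::map<int, SegmentResult> result;
    for (std::size_t i = 0; i < segPressure.size(); ++i) {
        SegmentResult s;
        s.pressure = segPressure[i];
        s.oilRate = segOilRate[i];
        result[static_cast<int>(i) + 1] = s; // segment numbers start at 1
    }
    return result;
}
```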
it is not used anymore. A lot of the related implementation has been moved
to WellTestState.
Its existence makes some logic rather confusing and complicates new
development.
This allows (re)moving the following files:
opm/autodiff/RateConverter.hpp
opm/autodiff/Compat.cpp
opm/autodiff/Compat.hpp
opm/core/props/BlackoilPropertiesInterface.hpp
opm/core/simulator/BlackoilState.cpp
opm/core/simulator/BlackoilState.hpp
opm/core/simulator/BlackoilStateToFluidState.hpp
opm/core/utility/initHydroCarbonState.hpp
opm/polymer/PolymerBlackoilState.cpp
opm/polymer/PolymerBlackoilState.hpp
tests/test_blackoilstate.cpp
No templates involved, so there is no reason to keep it in the header. This
also makes building more robust by only testing HAVE_MPI in the cpp file,
after including config.h.
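The pattern, for illustration (function name is a placeholder):
```cpp
// In the .cpp file only -- config.h must come first so that HAVE_MPI
// is defined before it is tested; the header stays MPI-agnostic.
#include <config.h>

#if HAVE_MPI
#include <mpi.h>
#endif

void reportParallelStatus()
{
#if HAVE_MPI
    int initialized = 0;
    MPI_Initialized(&initialized);
    // ... MPI-specific handling ...
#endif
}
```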
so far, the actual specializations of the simulator were compiled into
the `libopmsimulators` library and the build of the glue code
(`flow.cpp`) thus needed to be deferred until the library was fully
built. Since the compilation of the glue code requires a full property
hierarchy for handling command line parameters, this arrangement
significantly increases the build time for systems with a sufficient
number of parallel build processes. ("sufficient" here means 8 or more
threads, i.e., a quadcore system with hyperthreading is sufficient
provided that it has enough main memory.)
the new approach is not to include these objects in
`libopmsimulators`, but to directly deal with them in the `flow`
binary. this allows all of them and the glue code to be compiled in
parallel.
compilation time on my machine before this change:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
Scanning dependencies of target opmsimulators
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_gasoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_blackoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_solvent.cpp.o
[ 4%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_polymer.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_energy.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater_polymer.cpp.o
[ 6%] Linking CXX static library lib/libopmsimulators.a
[ 97%] Built target opmsimulators
Scanning dependencies of target flow
[100%] Building CXX object CMakeFiles/flow.dir/examples/flow.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m45.692s
user 8m47.195s
sys 0m11.533s
```
after:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
[ 91%] Built target opmsimulators
Scanning dependencies of target flow
[ 93%] Building CXX object CMakeFiles/flow.dir/flow/flow.cpp.o
[ 95%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_gasoil.cpp.o
[ 97%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_solvent.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_blackoil.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_energy.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m21.597s
user 8m49.476s
sys 0m10.973s
```
(this corresponds to a ~20% reduction of the time spent waiting for
the compiler.)
this has several advantages:
- a consistent and complete help message is now printed when passing the
-h or --help command line parameters. most notably, this allows
implementing generic tab completion of parameters for bash
- the full list of runtime parameters can now be printed before the simulator
has been run.
- all runtime parameters understood by ebos can be specified
- no hacks to marry the two parameter systems anymore
- command parameters now follow the standard unix convention, i.e.,
`--param-name=value` instead of `param_name=value`
on the negative side, some parameters have been renamed and the syntax
has changed, so calls to `flow` that specify parameters must be adapted.
Energy conservation is enabled by specifying either TEMP or
THERMAL in the deck. The deck also needs to contain the relevant fluid
and rock heat properties.
The blackoil + energy equations are solved fully implicitly.
this class is only used by the legacy simulators; `flow` uses the
`EclWriter` class provided by eWoms. In turn, that class uses the
new-and-shiny "tasklet" mechanism.
1) Don't depend on legacy code for communicating the data::wells
2) Bugfix. Store globalIdx instead of localIdx in data::wells::completions
3) Move ThreadHandle to ebos
This seems to have been forgotten previously. Now the code in CPRPreconditioner.hpp
uses ParallelOverlappingILU0 instead of SeqILU[0n]/BlockPreconditioner, which
makes the code slimmer.
The approach is inspired by Geiger's system-amg but we use dune-istl
aggregation AMG for it. On the fine level all unknowns attached to a cell
form a matrix block and are treated fully coupled. To form the first
coarse level system we use only the pressure component to guide the aggregation
and neglect all other unknowns on the fine level. All other levels are formed
in the usual way by scalar aggregation.
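A sketch of the corresponding dune-istl types (abbreviated; the real setup differs in detail). Dune::Amg::FirstDiagonal makes the coarsening criterion look only at the first entry of each matrix block, i.e. the pressure component, while the fine-level blocks remain fully coupled:
```cpp
#include <dune/common/fmatrix.hh>
#include <dune/common/fvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/operators.hh>
#include <dune/istl/paamg/amg.hh>
#include <dune/istl/preconditioners.hh>

constexpr int numEq = 3; // e.g. pressure first, then saturations

using MatrixBlock = Dune::FieldMatrix<double, numEq, numEq>;
using Matrix      = Dune::BCRSMatrix<MatrixBlock>;
using Vector      = Dune::BlockVector<Dune::FieldVector<double, numEq>>;
using Operator    = Dune::MatrixAdapter<Matrix, Vector, Vector>;
using Smoother    = Dune::SeqILU0<Matrix, Vector, Vector>;

// Aggregation guided by the pressure component only:
using Criterion = Dune::Amg::CoarsenCriterion<
    Dune::Amg::SymmetricCriterion<Matrix, Dune::Amg::FirstDiagonal>>;
using AMG = Dune::Amg::AMG<Operator, Vector, Smoother>;
```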
Currently, it has to be requested manually for flow_ebos by passing
"linear_solver_use_amg=true amg_blackoil_system=true" to it.
these files take the longest to compile. moving them to the beginning of
the list speeds up parallel builds because the remaining compile units can
be built while the flow_ebos files are being compiled; if these files are
at the bottom of the list, the build stalls because they are required for
the library.
Output of the wells, FIP and the initial NNCs is still handled
by code in opm-simulators. The plan is to move more of the
functionality to ebos.
All tests pass and MPI restart works
It has been replaced with the faster local-AD-based code that is now
part of the integrated flow.cpp application.
We do not remove the old sequential implicit polymer simulators.
After the restructuring of the well model, an extra class for
the "Dense" model is no longer needed. The only thing still left in
WellStateFullyImplicitBlackoilDense was some solvent-related stuff; this
PR moves it to WellStateFullyImplicitBlackoil and removes
WellStateFullyImplicitBlackoilDense.
In addition to cleaning up the code, this PR fixes missing solvent well output.
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two processes because that machine "only" has
8 GB of RAM (granted, the optimization options may have been a bit too
excessive). under the new scheme, each specialization of the simulator
is put into a separate compile unit which is part of
libopmsimulators. this has several advantages: the specialized
simulators and the main binary automatically stay consistent,
compilation is faster (2m25s vs 4m16s on my machine) because all
compile units can be built in parallel, and compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
on the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
No extra equation is added for polymer in the well equation.
Separate executables are added for polymer (flow_ebos_polymer)
and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance or results of the blackoil
simulator.