The layout parameter was deprecated some time ago and has now been removed
in DUNE > 2.7. This commit fixes its last usage in opm-models and makes it
compile with DUNE master.
It's not possible to have `constexpr std::string`s in C++17, and using
`std::string_view` instead gives conversion errors. Since this is all temporary
and will be replaced by pure runtime parameters anyway, use string
literals for the moment.
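A minimal sketch of the issue (the parameter struct below is illustrative, not the actual opm-models code):
```
#include <iostream>
#include <string>

// illustrative parameter struct, not the actual opm-models code
struct EnableVtkOutputParam {
    // static constexpr std::string name = "EnableVtkOutput";      // requires C++20
    // static constexpr std::string_view name = "EnableVtkOutput"; // conversion errors downstream
    static constexpr auto name = "EnableVtkOutput";                // plain string literal
};

int main()
{
    std::string n = EnableVtkOutputParam::name; // literals convert implicitly
    std::cout << n << '\n';
}
```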
the flags which I used are:
```
-pedantic \
-Wall \
-Wextra \
-Wformat-nonliteral \
-Wcast-align \
-Wpointer-arith \
-Wmissing-declarations \
-Wcast-qual \
-Wshadow \
-Wwrite-strings \
-Wchar-subscripts \
-Wredundant-decls \
-fstrict-overflow \
-O3 \
-march=native \
-DNDEBUG=1
```
note that some heavy filtering of the output is not the worst idea because
DUNE is far from warning-free with these flags.
Also, there were some pesky warnings in test_ecl_output which I don't
know how to fix:
```
tests/test_ecl_output.cc:218:73: warning: missing initializer for member ‘Opm::data::Connection::effective_Kh’ [-Wmissing-field-initializers]
```
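For reference, here is a minimal reproduction of this class of warning (the struct is a stand-in, not the actual Opm::data::Connection):
```
// stand-in struct, not the actual Opm::data::Connection
struct Connection {
    double rate;
    double effective_Kh;
};

int main()
{
    // warns under -Wextra/-Wmissing-field-initializers because effective_Kh
    // is not explicitly initialized -- although aggregate initialization
    // still value-initializes it to 0.0, so the warning is arguably harmless
    Connection c = { 1.0 };
    return static_cast<int>(c.effective_Kh);
}
```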
so far, the actual specializations of the simulator were compiled into
the `libopmsimulators` library and the build of the glue code
(`flow.cpp`) thus needed to be deferred until the library was fully
built. Since the compilation of the glue code requires a full property
hierarchy for handling command line parameters, this arrangement
significantly increases the build time for systems with a sufficient
number of parallel build processes. ("sufficient" here means 8 or more
threads, i.e., a quadcore system with hyperthreading is sufficient
provided that it has enough main memory.)
the new approach is not to include these objects in
`libopmsimulators`, but to directly deal with them in the `flow`
binary. this allows all of them and the glue code to be compiled in
parallel.
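a sketch of the resulting structure (the function names below mirror the source file names from the build log; the actual signatures may differ):
```
// flow.cpp -- the glue code only sees plain declarations, so it does not
// need the property hierarchies of the specializations and can be compiled
// in parallel with all flow_ebos_*.cpp compile units.
int flowEbosBlackoilMain(int argc, char** argv); // defined in flow_ebos_blackoil.cpp
int flowEbosGasOilMain(int argc, char** argv);   // defined in flow_ebos_gasoil.cpp
// ... one declaration per specialization ...

int main(int argc, char** argv)
{
    // ... inspect the deck to select the appropriate specialization ...
    return flowEbosBlackoilMain(argc, argv);
}
```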
compilation time on my machine before this change:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
Scanning dependencies of target opmsimulators
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_gasoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_blackoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_solvent.cpp.o
[ 4%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_polymer.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_energy.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater_polymer.cpp.o
[ 6%] Linking CXX static library lib/libopmsimulators.a
[ 97%] Built target opmsimulators
Scanning dependencies of target flow
[100%] Building CXX object CMakeFiles/flow.dir/examples/flow.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m45.692s
user 8m47.195s
sys 0m11.533s
```
after:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
[ 91%] Built target opmsimulators
Scanning dependencies of target flow
[ 93%] Building CXX object CMakeFiles/flow.dir/flow/flow.cpp.o
[ 95%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_gasoil.cpp.o
[ 97%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_solvent.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_blackoil.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_energy.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m21.597s
user 8m49.476s
sys 0m10.973s
```
(this corresponds to a ~20% reduction of the time spent waiting for
the compiler.)
- Change the brief description slightly
- Do not print anything anymore if there are no unused parameters
- Change the boilerplate text for printing the parameters to the
PRT/DBG files
in part, this has been requested by [at]atgeirr.
this has been requested by [at]atgeirr.
Note: The FlowLinearSolverVerbosity, FlowNewtonMaxIterations and
FlowNewtonMinIterations parameters are still prefixed because they
clash with parameters registered deep within eWoms.
while they no longer appear in the help message, they are still present
in the code and can be specified and used as normal.
also, this patch makes --print-parameters=1 and --print-properties=1
work.
this has several advantages:
- a consistent and complete help message is now printed when passing the
  -h or --help command line parameters. most notably, this allows
  implementing tab completion of parameters for bash generically
- the full list of runtime parameters can now be printed before the simulator
has been run.
- all runtime parameters understood by ebos can be specified
- no hacks to marry the two parameter systems anymore
- command line parameters now follow the standard unix convention, i.e.,
  `--param-name=value` instead of `param_name=value` (see the sketch below)
on the negative side, some parameters have been renamed and the syntax
has changed, so calls to `flow` that specify parameters must be adapted.
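as an illustration of the new convention, here is a hypothetical helper (not the actual eWoms code) that derives the command line spelling of a registered parameter name; the CamelCase-to-`--kebab-case` mapping it implements matches the convention described above:
```
#include <cctype>
#include <iostream>
#include <string>

// hypothetical helper, not the actual eWoms implementation: maps a
// registered parameter name like "ToleranceMb" to its command line
// spelling "--tolerance-mb"
std::string toCommandLineName(const std::string& paramName)
{
    std::string result = "--";
    for (char c : paramName) {
        if (std::isupper(static_cast<unsigned char>(c))) {
            if (result.size() > 2)
                result += '-'; // a new word starts at each upper-case letter
            result += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        }
        else
            result += c;
    }
    return result;
}

int main()
{
    // prints "--tolerance-mb=1e-7"
    std::cout << toCommandLineName("ToleranceMb") << "=1e-7\n";
}
```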
this is only relevant for people who are masochistic enough to go beyond
`-Wall`. (note that at this warning level, there is plenty of noise from
Dune and other upstream dependencies.)
further, this cleans up the code of the parameter system and the
startup routines a bit and, finally, adds positional parameter
support to ebos as well as brief descriptions to ebos and the lens
problem.
Energy conservation is enabled by specifying either TEMP or
THERMAL in the deck. The deck also needs to contain the relevant fluid and
rock heat properties.
The blackoil + energy equations are solved fully implicitly.
Dune::set_singularity_limit() was removed and the ILU preconditioners
seem to have been refactored. The ILU refactoring included making the
order of the preconditioner a template parameter of the preconditioner
class, i.e., it can no longer be specified at runtime.
Note that the AMG code in the dune master currently produces quite a
few warnings because of the latter point, but as far as I can see,
there is nothing which can be done about this from outside of
dune-istl.
IMO the term "vanguard" expresses better what these classes are
supposed to do: level the ground for the cavalry. Normally this simply
means creating and distributing a grid object, but it can become quite
a bit more complicated, as exemplified by the vanguard classes of
ebos.
instead of passing a "minimal" fluid state that defines the
thermodynamic conditions on the domain boundary and having the models
calculate everything they need based on it, it is now assumed that
all quantities needed by the code that computes the boundary fluxes
are defined. This simplifies the boundary flux computation code,
allows getting rid of the `paramCache` argument for these methods, and
potentially speeds things up because quantities no longer get
re-calculated unconditionally.
on the flip side, this requires slightly more effort to define the
conditions at the boundary on the problem level, and it makes it less
obvious which quantities are actually used. That said, while one now
has more freedom to shoot oneself in the foot when specifying
boundary conditions, tools like valgrind or ASAN will normally
complain about undefined quantities if this happens.
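a toy version of what the problem code now has to do (the fluid state below is a small stand-in whose setter names mirror the generic fluid state API; the actual opm-material classes and the exact set of required quantities depend on the model):
```
#include <cstdio>

// small stand-in for the opm-material fluid state classes
struct FluidState {
    double pressure[2], saturation[2], density[2], viscosity[2], temperature;

    void setPressure(int phase, double v)   { pressure[phase] = v; }
    void setSaturation(int phase, double v) { saturation[phase] = v; }
    void setDensity(int phase, double v)    { density[phase] = v; }
    void setViscosity(int phase, double v)  { viscosity[phase] = v; }
    void setTemperature(double v)           { temperature = v; }
};

// the problem now defines *all* quantities read by the boundary flux code;
// nothing gets re-derived from a "minimal" state anymore
void boundaryFluidState(FluidState& fs)
{
    fs.setTemperature(293.15); // [K]
    for (int phase = 0; phase < 2; ++phase) {
        fs.setPressure(phase, 2e5);                       // [Pa]
        fs.setSaturation(phase, phase == 0 ? 1.0 : 0.0);
        fs.setDensity(phase, phase == 0 ? 1000.0 : 1.0);  // would come from the fluid system
        fs.setViscosity(phase, phase == 0 ? 1e-3 : 1e-5); // [Pa s]
    }
}

int main()
{
    FluidState fs;
    boundaryFluidState(fs);
    std::printf("boundary density of phase 0: %g kg/m^3\n", fs.density[0]);
}
```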
according to wikipedia, the term "heat" is the energy transferred due
to a temperature gradient, i.e., it only makes sense if such a
gradient is present, and no such gradient is involved in the storage
term. this means that technically the term "heat conductivity" is
meaningful, but "thermal conductivity" is IMO more consistent.
this has partially already been done in opm-material, and in eWoms it was
pretty inconsistent, so it also requires a patch in opm-material.
it broke because of the recent refactoring of the energy material laws
in opm-material. The reason why nobody noticed is that this test
requires dune-alugrid to be compiled.
it seems like most build systems pass a -DHAVE_CONFIG_H flag to the
compiler which still causes `#if HAVE_CONFIG_H` to be false while it
clearly is supposed to be triggered.
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available with proper build systems, and if it was
not included the build either breaks at the linking stage or -- at the
very least -- the runtime behavior of the resulting libraries will be
very awkward.
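In code, the change boils down to this pattern (sketch):
```
// before: silently skips config.h whenever HAVE_CONFIG_H is passed in a
// form that makes `#if HAVE_CONFIG_H` evaluate to false
#if HAVE_CONFIG_H
#include "config.h"
#endif

// after: config.h is guaranteed to exist with any proper build system,
// so it is simply included unconditionally
#include "config.h"
```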
It has been replaced by the faster local-ad-based code that is now
part of the integrated flow.cpp application.
We do not remove the old sequential implicit polymer simulators.
Note 1: The initialization code now always considers 3 phases.
For 2-phase cases a trivial (0) state is returned.
Note 2: The initialization code does not compute a BlackoilState,
but instead passes the initialization object along with the initial state.
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two build processes because that machine "only"
has 8 GB of RAM (granted, the optimization options may have been a bit
too excessive). Under the new scheme, each specialization of the
simulator is put into a separate compile unit which is part of
libopmsimulators. This has the advantages that the specialized
simulators and the main binary automatically stay consistent, that
compilation is faster (2m25s vs 4m16s on my machine) because all
compile units can be built in parallel, and that compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
on the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
in particular, this implied some changes to the MPI initialization
code. since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do even if MPI is enabled.
so far, the linker bailed out due to duplicate definitions of
variables if multiple compile units used the same type tag. This is
problematic if the sources are split into separate compile units
that use the same type tag; in particular, this applies to
traditional libraries.
Due to various C++ peculiarities, this patch complicates the internal
implementation of the property system quite a bit, but given that its
usage (as well as the compile time) stays unchanged, I do not
consider this to be a big problem. Note that the introspection code is
particularly problematic because it needs static initializers that do
not cause the linker to choke in the case of multiple compile units.
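To illustrate the underlying C++ issue (a generic sketch, not the actual property system code): a namespace-scope variable defined in a header violates the one-definition rule as soon as two compile units include it, whereas a static data member of a class template may be defined identically in every compile unit:
```
// --- properties.hh (illustration only) ---

// int numPhases = 2;  // multiple-definition linker error once this header
//                     // is included by two compile units

template <class TypeTag>
struct NumPhases
{
    static const int value;
};

// definitions of static data members of class templates are exempt from
// the one-definition rule, so this may appear in every compile unit
template <class TypeTag>
const int NumPhases<TypeTag>::value = 2;
```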
Finally, to prevent future regressions, this patch adds a unit test
for the lens problem which uses multiple compile units. (This test is
called lens_immiscible_ecfv_ad_mcu and is basically identical to the
existing lens_immiscible_ecfv_ad test; I thus think that its name is
pretty unimaginative -- improvement proposals are welcome.)
there seems to be only a *very* limited amount of interest, the code
of the model is quite complex, and there are currently no suitable
discretizations for free-flow equations in eWoms (i.e., the model
tends to be very unstable and oscillates a lot). Combined, all of this
makes maintaining this model a pain in the back, so let's remove it
until some interest in these kinds of problems surfaces and until
appropriate discretizations -- like staggered grid methods -- are
available.
This works by having a "focus degree of freedom" during
linearization. When evaluating the local residual, all derivatives of
the residual/fluxes are taken with respect to the primary variables of
that DOF.
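A toy illustration of the principle (this is not the eWoms Evaluation class, just a minimal forward-mode AD number showing how seeding only the focus DOF yields exactly the derivatives described above):
```
#include <iostream>

// minimal forward-mode AD number: a value plus its derivative w.r.t. the
// (here: single) primary variable of the focus degree of freedom
struct Evaluation {
    double value = 0.0;
    double derivative = 0.0; // d(value)/d(priVar of the focus DOF)
};

Evaluation operator-(const Evaluation& a, const Evaluation& b)
{ return {a.value - b.value, a.derivative - b.derivative}; }

Evaluation operator*(double s, const Evaluation& a)
{ return {s * a.value, s * a.derivative}; }

// the focus DOF's primary variable is seeded with derivative 1 ...
Evaluation makeVariable(double v) { return {v, 1.0}; }
// ... while all other DOFs enter the computation as plain constants
Evaluation makeConstant(double v) { return {v, 0.0}; }

int main()
{
    double trans = 3.0;                    // transmissibility
    Evaluation pFocus = makeVariable(2e5); // pressure at the focus DOF
    Evaluation pNeigh = makeConstant(1e5); // pressure at a neighboring DOF

    // the derivative of the flux w.r.t. the focus DOF's pressure falls
    // out automatically (here: 3.0)
    Evaluation flux = trans * (pFocus - pNeigh);
    std::cout << "flux = " << flux.value
              << ", dflux/dpFocus = " << flux.derivative << '\n';
}
```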
The two main offenders were the Forchheimer velocity model and the
model for the Stokes equations. To ensure that they continue to work,
the "powerinjection" and the "stokestest2c" problems are now both
compiled and tested with both automatic differentiation and finite
differences, and the results of these tests are compared against the
same reference solution.
The majority of the time required to develop this patch was actually
spent on testing: all tests compile and pass with debugging and
aggressive optimization flags with at least GCC 5, GCC 7 and clang
3.8, as well as Dune 2.3 and 2.4. Also, the results of flow_ebos stay
identical for Norne whilst the performance difference is below the
measurement noise on my machine. (the version with this patch applied
was actually about 1% faster.)
No extra equation is added for polymer in the well equation.
Separate executables are added for polymer (flow_ebos_polymer)
and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance or results of the blackoil
simulator.
Now the actual pressure and transport model classes are not specified
directly, but taken as template template parameters; the grid and well
model are also template parameters for both the sequential model and
the simulator class, although at this point only StandardWells is
expected to work with the sequential model.
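A minimal sketch of the pattern (all names except StandardWells are hypothetical):
```
// hypothetical stand-ins for the actual model, grid and well classes
template <class Grid> struct MyPressureModel {};
template <class Grid> struct MyTransportModel {};
struct MyGrid {};
struct StandardWells {};

// the pressure and transport models are template template parameters;
// the grid and well model are ordinary template parameters
template <template <class> class PressureModel,
          template <class> class TransportModel,
          class Grid,
          class WellModel>
class SequentialModel
{
    PressureModel<Grid> pressureModel_;
    TransportModel<Grid> transportModel_;
    // ...
};

int main()
{
    SequentialModel<MyPressureModel, MyTransportModel, MyGrid, StandardWells> model;
    (void)model;
}
```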
inconsistent and unnecessary.
this is purely a cosmetic change; the only exception was a function with
the generic name 'split', which was renamed to splitParam to avoid confusion.
Motivated by
- proliferation of identical code
- need to avoid strange behaviour with "." directory on some boost versions
- potential for further refactoring to avoid boost entirely
this information is already part of the EclipseState. The reason why
this should IMO be avoided is that this enforces an implementation
detail (ordering of the permeability matrices) of the simulator on the
well model. If this needs to be done for performance reasons, IMO it
would be smarter to pass an array of matrices instead of passing a raw
array of doubles. I doubt that this is necessary, though: completing
the full Norne deck takes about 0.25 seconds longer on my machine,
that's substantially less than 0.1% of the total runtime.
in order to avoid code duplication, the permeability extraction
function of the RockFromDeck class is now made a public static
function and used as an implementation detail of the WellsManager.
finally, the permfield_valid_ attribute is removed from the
RockFromDeck class because this data was unused and not accessible via
the class' public API.
the groundwater problem should be symmetric because it uses an
incompressible fluid, is a single phase problem and uses the
immiscible model. (i.e., there should never be a difference between
the upstream and the downstream cells.)
the main purpose of this commit is to have a test that uses a linear
solver wrapper which was generated by the internal
EWOMS_WRAP_ISTL_SOLVER macro.
... and use the restarted GMRES solver in conjunction with an ILU-2
preconditioner for the water-air unit test.
I do not really recommend using these solvers because BiCGSTAB tends
to be 20% to 30% slower than our home-brewed implementation (this is
because the dune-istl solvers cannot use custom convergence criteria),
but dune-istl offers more choices than just BiCGSTAB and this
functionality could be helpful when debugging issues related to
solving the linear systems of equations.
Note that regardless of how pedantic the interpretation of DUNE's
license is, there are no licensing issues with this code because we do
not distribute any files derived from DUNE anymore.
i.e., the solvers.hh file is removed. The main reason for this is that
it avoids having to distribute a file with a potentially incompatible
license (i.e., GPLv2 + template exception vs GPLv2+), but the
home-brewed BiCGSTAB solver also seems to perform a tiny bit better.
* origin/master:
Do not throw for unrecognized file when merging log files.
Do not populate cellData but issue a warning in parallel.
Removed ternary operator in inline initialization.
Correctly mark transfer of ownership for output writer
Indent nested #if
Remove Solution.sdc assignment
Cater variable name change in BCRSMatrix of DUNE 2.5
Fix using local active cells for writing eclipse files in parallel.
add restart test for SPE1CASE2_ACTNUM
rename the 'flow' binary to 'flow_legacy' and set a symbolic link
Added ctest for restart files
i.e., using clang 3.8 to compile the test suite with the following
flags:
```
-Weverything
-Wno-documentation
-Wno-documentation-unknown-command
-Wno-c++98-compat
-Wno-c++98-compat-pedantic
-Wno-undef
-Wno-padded
-Wno-global-constructors
-Wno-exit-time-destructors
-Wno-weak-vtables
-Wno-float-equal
```
should not produce any warnings anymore. In my opinion the only flag
which would produce beneficial warnings is -Wdocumentation. This has
not been fixed in this patch because writing documentation is left for
another day (or, more likely, year).
note that this patch consists of a heavy dose of the OPM_UNUSED macro
and plenty of static_casts (to fix signedness issues). Fixing the
signedness issues was quite a nightmare, and the fact that the Dune
API is quite inconsistent in that regard was not exactly helpful. :/
Finally, this patch includes quite a few formatting changes (e.g., all
occurrences of `T &t` should be changed to `T& t`) and some fixes for
minor issues which I've found during the exercise.
I've made sure that all unit tests in the test suite still pass
successfully, that flow_ebos still works for Norne, and that it did not
regress w.r.t. performance.
(Note that this patch does not fix compiler warnings triggered by `ebos`
and `flow_ebos`, but only those caused by the basic infrastructure or
the unit tests.)
v2: fix the warnings that occur if the dune-localfunctions module is
not available. thanks to [at]atgeirr for testing.
v3: fix dune 2.3 build issue
Previously, the eclipseGrid used by the EclipseWriter was constructed from
the one in the EclipseState together with the current CpGrid. Unfortunately
the latter was the distributed version representing only the local part that
the processor works on. Therefore the information about the active cells
was wrong when writing results (which raised an exception in the writer).
With this commit we construct the EclipseWriter before distributing the grid
and use this writer later on in the OutputWriter.
10^-4 led to sporadic results if the final tolerance of the solution
really was 10^-4. (it currently is usually better because each time
step experiences an additional update after the Newton method deems it
to be converged.)
this problem did not work properly anyway: it oscillated like hell
(very likely due to the spatial discretization used being inappropriate)
and it did not even converge if more than a single iteration was
required.
these are:
- remove the unused methods "baseEpsilon()" and "numericEpsilon()"
from FvBaseAdLocalLinearizer. (they are only meaningful in the
context of finite differences.)
- correct/update some comments
- replace most occurrences of Toolbox::createConstant() with
  assignments of floating point values to unclutter the code a bit
  (see the sketch below)
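for the last item, the flavor of change looks like this (Evaluation and Toolbox are stand-ins for the opm-material AD types, for illustration only):
```
// stand-ins for the opm-material AD types, for illustration only
struct Evaluation {
    double value;
    Evaluation(double v = 0.0) : value(v) {} // implicitly convertible from double
};

struct Toolbox {
    static Evaluation createConstant(double v) { return Evaluation(v); }
};

int main()
{
    Evaluation before = Toolbox::createConstant(1.0); // cluttered
    Evaluation after = 1.0;                           // equivalent plain assignment
    return static_cast<int>(before.value - after.value); // 0
}
```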