it seems like most build systems pass a -DHAVE_CONFIG_H flag to the
compiler in a way that still causes `#if HAVE_CONFIG_H` to evaluate to
false, even though it clearly is supposed to be triggered.
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available with a proper build system, and if it
is not included, the build either breaks at the linking stage or -- at
the very least -- the runtime behavior of the resulting libraries
becomes very awkward.
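To make the two styles concrete, here is a minimal sketch (not taken
from the actual OPM headers) of the guarded include versus the
unguarded one argued for above:

```cpp
// guarded form: whether config.h gets included depends on the *value* the
// build system assigns to HAVE_CONFIG_H, not merely on the macro being defined
#if HAVE_CONFIG_H
#include <config.h>
#endif

// unguarded alternative: config.h is always provided by a proper build system,
// so it is included unconditionally; if it is missing, the build fails
// immediately instead of producing inconsistent object files
// #include <config.h>
```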
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two parallel build jobs because that machine
"only" has 8 GB of RAM (granted, the optimization options may have
been a bit too excessive). under the new scheme, each specialization
of the simulator is put into a separate compile unit which is part of
libopmsimulators. this has the advantages that the specialized
simulators and the main binary automatically stay consistent, that the
compilation is faster (2m25s vs 4m16s on my machine) because all
compile units can be built in parallel, and that compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
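as a rough sketch of what "one specialization per compile unit" means
in practice (the names below are illustrative, not the actual
opm-simulators classes), each translation unit explicitly instantiates
one specialization and the main binary only links against them:

```cpp
// simulator.hpp -- the class template is declared, but its members are
// not instantiated here
template <class TypeTag>
class Simulator {
public:
    int run();
};

// simulator_blackoil.cpp -- one compile unit inside libopmsimulators
#include "simulator_impl.hpp"                // member definitions
template class Simulator<BlackoilTypeTag>;   // explicit instantiation

// simulator_polymer.cpp -- another compile unit, built in parallel with
// the one above
#include "simulator_impl.hpp"
template class Simulator<PolymerTypeTag>;

// flow.cpp -- the main binary: suppress implicit instantiation and link
// against the specialization already compiled into the library
#include "simulator.hpp"
extern template class Simulator<BlackoilTypeTag>;
```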
on the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
in particular, this implied some changes to the MPI initialization
code. since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do even if MPI is enabled.
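the startup pattern now roughly looks like the following simplified
sketch (heavily condensed, not the actual flow_* code): MPI is
initialized exactly once via Dune::MPIHelper, which falls back to a
serial stub when MPI is not enabled, so the same code path also works
for sequential runs:

```cpp
#include <dune/common/parallel/mpihelper.hh>
#include <iostream>

int main(int argc, char** argv)
{
    // initializes MPI if it is available; otherwise a fake serial
    // communicator is used
    const auto& mpiHelper = Dune::MPIHelper::instance(argc, argv);

    if (mpiHelper.rank() == 0)
        std::cout << "running with " << mpiHelper.size() << " process(es)\n";

    // ... create the grid, call loadBalance(), set up and run the simulator ...
    return 0;
}
```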
No extra equation is added for polymer in the well equation.
Separate executables are added for polymer (flow_ebos_polymer)
and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance and results of the blackoil
simulator.
Now the actual pressure and transport model classes are no longer
fixed, but taken as template template parameters. The grid and well
model are also template parameters for both the sequential model and
the simulator class, although at this point only StandardWells is
expected to work with the sequential model.
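A minimal sketch of the parameterization described above (class names
are illustrative, not the actual opm-simulators types):

```cpp
// the concrete models are themselves templates over grid and well model ...
template <class Grid, class WellModel>
class PressureModel { /* ... */ };

template <class Grid, class WellModel>
class TransportModel { /* ... */ };

// ... and the simulator takes them as template template parameters, so the
// same simulator skeleton can be instantiated with different model pairs
template <template <class, class> class Pressure,
          template <class, class> class Transport,
          class Grid, class WellModel>
class SequentialSimulator
{
    Pressure<Grid, WellModel> pressureModel_;
    Transport<Grid, WellModel> transportModel_;
    // ...
};

// e.g. SequentialSimulator<PressureModel, TransportModel, SomeGrid, StandardWells>
```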
Motivated by
- proliferation of identical code
- need to avoid strange behaviour with "." directory on some boost versions
- potential for further refactoring to avoid boost entirely
this information is already part of the EclipseState. The reason why
this should IMO be avoided is that this enforces an implementation
detail (ordering of the permeability matrices) of the simulator on the
well model. If this needs to be done for performance reasons, IMO it
would be smarter to pass an array of matrices instead of passing a raw
array of doubles. I doubt that this is necessary, though: completing
the full Norne deck takes about 0.25 seconds longer on my machine,
which is substantially less than 0.1% of the total runtime.
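to illustrate the interface difference being argued for (the function
name below is made up for illustration, it is not the actual
well-model API): a flat double array forces the well model to know the
simulator's matrix ordering, while passing matrices keeps that detail
internal:

```cpp
#include <dune/common/fmatrix.hh>
#include <vector>

using PermMatrix = Dune::FieldMatrix<double, 3, 3>;

// ordering-dependent: the caller has to know how the 9 entries per cell
// are laid out
void setPermeability(const double* perm, int numCells);

// ordering-independent: one full tensor per cell, no layout assumption
// leaks out of the simulator
void setPermeability(const std::vector<PermMatrix>& perm);
```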
in order to avoid code duplication, the permeability extraction
function of the RockFromDeck class is now made a public static
function and used as an implementation detail of the WellsManager.
finally, the permfield_valid_ attribute is removed from the
RockFromDeck class because this data was unused and not accessible via
the class' public API.
* origin/master:
Do not throw for unrecognized file when merging log files.
Do not populate cellData but issue a warning in parallel.
Removed ternary operator in inline initialization.
Correctly mark transfer of ownership for output writer
Indent nested #if
Remove Solution.sdc assignment
Cater variable name change in BCRSMatrix of DUNE 2.5
Fix using local active cells for writing eclipse files in parallel.
add restart test for SPE1CASE2_ACTNUM
rename the 'flow' binary to 'flow_legacy' and set a symbolic link
Added ctest for restart files
Previously, the eclipseGrid used by the EclipseWriter was constructed from
the one in the EclipseState together with the current CpGrid. Unfortunately
the latter was the distributed version representing only the local part that
the process works on. Therefore the information about the active cells
was wrong when writing results (which raised an exception in the writer).
With this commit we construct the EclipseWriter before distributing the grid
and use this writer later on in the OutputWriter.
it uses ebos for linearization of the mass balance equations and the
current flow code from opm-simulators for all the rest. currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not optimal: on SPE9, converting from and to the legacy
data structures takes about a third of the time of the actual mass
balance assembly. nevertheless `flow_ebos` is almost as fast as plain
`flow` for SPE9. (for Norne `flow_ebos` is about 15% slower, even
though the results match quite closely; it requires more iterations,
for reasons that are not yet clear.)
This commit adds sequential solvers, including a simulator variant
using them (flow_sequential.cpp) with an integration test (running
SPE1, same as for fully implicit).
The sequential code is capable of running several (but not all) test
cases without tuning or special parameters, but reducing ds_max a bit
(from default 0.2 to say 0.1) helps with transport solver
convergence. The Norne model runs fine (esp. with a little tuning). A
parameter iterate_to_fully_implicit (defaults to false) is available;
when set, the simulator will iterate with alternating pressure and
transport solves towards the fully implicit solution. Although that
takes a lot of extra time, it serves as a correctness check.
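Conceptually, the outer iteration enabled by iterate_to_fully_implicit
looks roughly like the following sketch (hypothetical helper names,
not the actual simulator classes):

```cpp
#include <cmath>

struct ReservoirState { /* pressures, saturations, ... */ };

// placeholder solver stages standing in for the real pressure/transport models
void solvePressure(ReservoirState&)  { /* implicit pressure solve */ }
void solveTransport(ReservoirState&) { /* implicit transport solve */ }
double coupledResidual(const ReservoirState&) { return 0.0; /* placeholder */ }

void sequentialStep(ReservoirState& state,
                    bool iterateToFullyImplicit,
                    double tol = 1e-6,
                    int maxOuterIter = 30)
{
    // plain sequential scheme: one pressure solve, then one transport solve
    solvePressure(state);
    solveTransport(state);

    if (!iterateToFullyImplicit)
        return;

    // optional outer loop: keep alternating the two solves until the coupled
    // residual is small, which drives the answer towards the fully implicit one
    for (int it = 0; it < maxOuterIter; ++it) {
        solvePressure(state);
        solveTransport(state);
        if (std::abs(coupledResidual(state)) < tol)
            break;
    }
}
```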
Performance is not competitive with fully implicit at this point:
essentially both the pressure and transport models inherit the fully
implicit model and do a lot of double (or triple) work. The point has
been to establish a proof of concept and baseline for further
experiments, without disturbing the base model too much (or at all, if
possible).
Changes to existing code have been minimized by merging most such
changes as smaller PRs already; the only remaining such change is to
NewtonIterationBlackoilInterleaved. Admittedly, that code (to solve
the pressure system with AMG) is not ideal because it duplicates
similar code in CPRPreconditioner.hpp and is not parallel. I propose
to address this later by refactoring the "solve elliptic system" code
from CPRPreconditioner into a separate class that can be used also
from here.