When filterConnections_ is called, the grid is not load
balanced yet. Currently that means that grid() will also return the
unbalanced grid and all processes will see the whole global grid.
We will change the semantics of the unbalanced grid soon: Only the root
process will see the whole grid and the others will see an empty
partition of it. Hence filtering on this partition will remove all
connections on all wells in the schedule for non-root processes and
produce wrong results.
For non-root processes the filtering needs to be done on the load
balanced grid. This is accomplished by this commit.
This removes a deadlock experienced for some models
where we have specified connections to non-active cells.
On non-IO ranks we are using the local grid since in the
future there will be no global grid available. Wells connecting
cells not on these processors are neglected anyway.
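As a toy sketch of the rank-dependent filtering described above (Schedule and
Grid below are simplified stand-ins, not the actual OPM classes):
```
#include <set>

struct Grid { std::set<int> activeCells; };   // stand-in for CpGrid & friends

struct Schedule {                             // stand-in for Opm::Schedule
    void filterConnections(const std::set<int>& active)
    { /* drop well connections to cells not contained in 'active' */ (void)active; }
};

void filterWellConnections(Schedule& schedule,
                           const Grid& equilGrid,  // full grid, only valid on the root rank
                           const Grid& localGrid,  // load-balanced partition of this rank
                           int mpiRank)
{
    if (mpiRank == 0)
        schedule.filterConnections(equilGrid.activeCells);
    else
        // non-root ranks filter against their partition; connections to cells
        // owned by other ranks are dropped here, which is fine because those
        // ranks keep them when filtering against their own partitions
        schedule.filterConnections(localGrid.activeCells);
}
```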
Closes #2101
This is needed to support dune-fem where the local mapper might be
`MultipleCodimMultipleGeomTypeMapper<GridView<GridPart2GridViewTraits<AdaptiveLeafGridPart<CpGrid, (PartitionIteratorType)4,
false> > >, Dune::Impl::MCMGFailLayout>`
as opposed to the global one being
`MultipleCodimMultipleGeomTypeMapper<GridView<DefaultLeafGridViewTraits<CpGrid>>, Dune::Impl::MCMGFailLayout>`.
Closes #2095.
Since the indexMaps do not contain the global element index anymore
(but the global id), the old code did not work anymore.
Unfortunately, we are using CpGrid-specific functions (scatterData)
to get the mapping. Therefore this might be broken if other grids are
used.
Previously, it was still assumed that all ranks knew the global grid
and each map in CollectDataToIORank::indexMaps_ was a mapping of
send/receive index to the index of the cell using the mapper of the
corresponding global grid.
With this patch, inside of CollectDataToIORank::DistributeIndexMapping
indexMaps is a mapping from send/receive index to the global Cartesian
index until the destructor is run. Inside the destructor, the IO rank
performs the remapping to the mapped index of the global grid and
computes the ranks array.
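A rough illustration of that scheme; the class below is a simplified stand-in
for CollectDataToIORank::DistributeIndexMapping, not the actual implementation:
```
#include <cstddef>
#include <map>
#include <vector>

class DistributeIndexMapping
{
public:
    DistributeIndexMapping(std::vector<std::vector<int>>& indexMaps,
                           const std::map<int, int>& cartesianToCompressed,
                           std::vector<int>& ranks)
        : indexMaps_(indexMaps)
        , cartesianToCompressed_(cartesianToCompressed)
        , ranks_(ranks)
    {}

    // while the object is alive, indexMaps_[rank] maps a send/receive index
    // to the *global Cartesian* index of the cell

    ~DistributeIndexMapping()
    {
        if (cartesianToCompressed_.empty())
            return; // non-IO ranks do not know the global grid

        // on the IO rank: translate Cartesian indices to the mapped
        // (compressed) index of the global grid and record the owner rank
        ranks_.assign(cartesianToCompressed_.size(), -1);
        for (std::size_t rank = 0; rank < indexMaps_.size(); ++rank)
            for (int& idx : indexMaps_[rank]) {
                idx = cartesianToCompressed_.at(idx);
                ranks_[idx] = static_cast<int>(rank);
            }
    }

private:
    std::vector<std::vector<int>>& indexMaps_;
    const std::map<int, int>& cartesianToCompressed_;
    std::vector<int>& ranks_;
};
```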
This at least slightly improves the old design. In that design the
subclass had no constructor of its own but inherited the one of the base
class. That base class constructor called certain subclass
functions (createGrids_, filterConnections_, updateOutputDir_, and
finalizeInit_) that would initialize raw pointers of the
subclass. Hence subclasses were not allowed to have non-POD members,
and those used later (e.g. deleted in the destructor) had to be
initialized in these functions.
The new (still ugly) design introduces constructors into the
subclasses and skips inheriting constructors. Now one must call a base
class function classImplementationInit which will still call the
functions createGrids_, filterConnections_, updateOutputDir_, and
finalizeInit_, but at least at this point the base class is fully
constructed and the subclass is constructed as much as
possible/needed (non-POD types will be initialized now).
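A minimal sketch of the resulting two-phase initialization pattern; the class
names are illustrative and virtual dispatch stands in for whatever mechanism
the real vanguard classes use:
```
#include <memory>
#include <vector>

class BaseSimulatorVanguard
{
protected:
    // to be called from the most derived constructor, once all members
    // (including non-POD ones) have been constructed
    void classImplementationInit()
    {
        createGrids_();
        filterConnections_();
        updateOutputDir_();
        finalizeInit_();
    }

    virtual void createGrids_() = 0;
    virtual void filterConnections_() = 0;
    virtual void updateOutputDir_() = 0;
    virtual void finalizeInit_() = 0;

public:
    virtual ~BaseSimulatorVanguard() = default;
};

class CpGridVanguard : public BaseSimulatorVanguard
{
public:
    CpGridVanguard()
    {
        // the base class and this subclass are fully constructed at this
        // point, so the overrides below may safely use non-POD members
        classImplementationInit();
    }

private:
    void createGrids_() override { grid_ = std::make_unique<std::vector<double>>(); }
    void filterConnections_() override {}
    void updateOutputDir_() override {}
    void finalizeInit_() override {}

    std::unique_ptr<std::vector<double>> grid_; // non-POD member, now allowed
};
```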
With the change of CpGrid to only holding the grid on one process
it will be an empty grid on all other processes. This has really
strange side effects like Schedule::filterConnections removing all
well perforations on these processes.
In particular, the .type() function is renamed to .category(), and
it no longer returns a LibECL type. Similarly, the .num() function
has been renamed to .number().
i.e., the EclProblem no longer needs to implement the
`timeIntegration()` method itself. since `flow` does not use this code
path, it is unaffected.
this value was chosen to exactly replicate `flow`'s behavior. IMO,
it would be less surprising to set the default to `1`, i.e., the user
needs to specify `--threads-per-process=$N` explicitly if
multithreaded linearization ought to be used.
`mebos` works similarly to `flow`, but in contrast to `flow`, `mebos`
only creates the deck in the common code path whilst the
`EclipseState` and the other higher-level parser objects are always
created internally by the vanguard. this approach avoids code
duplication and the worst effects of parser API creep.
to avoid having to compile non-trivial compilation units multiple times,
the actual code of the variants is moved into `ebos_$VARIANT.{hh,cc}`
files and the respective compilation units are each put into a small
static library whilst the main functions of said libraries are invoked
by either the multiplexed or the respective specialized simulator's
`main()`. This is also somewhat similar to how `flow` works, with the
difference that `mebos` uses the blackoil variant to determine the
parameters it needs to know for parsing the deck instead of
introducing a "fake" type tag for this. The rationale is to reduce
compile time compared to the "fake type tag" approach and -- to a
lesser extent -- avoid unnecessary copy-and-pasting of code. In
particular, this means that for the vast majority of cases, only one
place in the code needs to be changed for all `ebos` variants if, for
example, the parser API requires further objects in the future.
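A minimal sketch of the multiplexing idea; the per-variant entry points below
are stand-ins for the real ones exported by the `ebos_$VARIANT.{hh,cc}`
libraries, and the deck inspection is reduced to a command line check:
```
#include <cstring>

// stand-ins for the main() functions exported by the variant libraries
int ebosBlackOilMain(int, char**) { return 0; }
int ebosGasOilMain(int, char**)   { return 0; }
int ebosOilWaterMain(int, char**) { return 0; }

int main(int argc, char** argv)
{
    // in mebos, the blackoil variant's parser objects are used to find out
    // which phases/extensions the deck enables; a flag stands in for that here
    if (argc > 1 && std::strcmp(argv[1], "--gasoil") == 0)
        return ebosGasOilMain(argc, argv);
    if (argc > 1 && std::strcmp(argv[1], "--oilwater") == 0)
        return ebosOilWaterMain(argc, argv);
    return ebosBlackOilMain(argc, argv);
}
```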
this makes slightly incorrect decks usable with `ebos`. since the
common `flow` variants use a different code path to parse the deck,
they are unaffected. (as far as I can see, the only variant which
might be affected is `flow_ebos_oilwater_polymer_injectivity` and even
for it `flow`'s multiplexing code will abort the run before the
vanguard is even called.)
The intent is to make the purpose of `ebos` clearer: while it can be
used in production, the stability guarantees are somewhat lower than
for `flow` and testing is a bit less rigorous (most of the time).
In the case where two cells were direct vertical neighbors in the grid
but not in the underlying Cartesian grid (e.g. because of MINPV or
pinch outs), we treated them as NNCs and wrote the transmissibility
to TRANNC.
With this patch we detect this situation (two neighboring cells with identical
i and j and no active cells between them) and do not create an NNC
in the Eclipse output files but write the transmissibility to TRANZ.
It seems like Eclipse ignores NNCs with small transmissibility.
Small means less than 1e-6 for Eclipse (even though it says that it
is ignoring values below 1e-5 and/or zero values)!
This commit implements the same threshold during IO.
Also fixes a bug when applying EDITNNC: it needs cell1 <= cell2 to work.
Previously the vector of NNCData was passed in as a reference and sorted.
Unfortunately, it needed to be transformed later to meet all prerequisites.
With this commit we do these transformations in sortNncAndApplyEditnnc.
Furthermore, EDITNNC data is passed by value as it is not needed
outside and should usually not be too big. It was copied outside anyway!
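A hedged sketch of the two points above (normalizing pairs to cell1 <= cell2
and dropping NNCs below the 1e-6 threshold); NNCData is a simplified stand-in
and this is not the real sortNncAndApplyEditnnc:
```
#include <algorithm>
#include <tuple>
#include <utility>
#include <vector>

struct NNCData { int cell1; int cell2; double trans; };

std::vector<NNCData> prepareNncForOutput(std::vector<NNCData> nncs)
{
    const double threshold = 1.0e-6; // Eclipse silently ignores smaller values

    for (auto& nnc : nncs)
        if (nnc.cell1 > nnc.cell2)            // EDITNNC lookup requires cell1 <= cell2
            std::swap(nnc.cell1, nnc.cell2);

    // drop entries below the threshold before they reach the output files
    nncs.erase(std::remove_if(nncs.begin(), nncs.end(),
                              [=](const NNCData& n) { return n.trans < threshold; }),
               nncs.end());

    // sort so that EDITNNC can be applied and the writer sees a stable order
    std::sort(nncs.begin(), nncs.end(),
              [](const NNCData& a, const NNCData& b)
              { return std::tie(a.cell1, a.cell2) < std::tie(b.cell1, b.cell2); });
    return nncs;
}
```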
We first add all NNCs specified in the deck to the output
and then determine additional NNCs by iterating over all faces that
connect cells that are not connected in the underlying Cartesian grid.
Therefore we need to make sure that we do not output NNCs twice,
and for faults that also have a specified NNC we need to subtract
the transmissibility specified via NNC.
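An illustrative sketch of this bookkeeping, with simplified stand-in
containers rather than the actual output code:
```
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct FaceConn { int c1; int c2; double trans; bool onFault; };

void buildNncOutput(const std::map<std::pair<int, int>, double>& deckNncs,
                    const std::vector<FaceConn>& nonCartesianConnections,
                    std::vector<std::pair<std::pair<int, int>, double>>& nncOut,
                    std::map<std::pair<int, int>, double>& faultTrans)
{
    // 1) all NNCs specified in the deck go into the output first
    for (const auto& entry : deckNncs)
        nncOut.emplace_back(entry.first, entry.second);

    // 2) faces connecting cells that are not Cartesian neighbours
    for (const auto& conn : nonCartesianConnections) {
        auto pair = std::make_pair(std::min(conn.c1, conn.c2),
                                   std::max(conn.c1, conn.c2));
        auto deckIt = deckNncs.find(pair);
        if (deckIt == deckNncs.end())
            nncOut.emplace_back(pair, conn.trans);           // new NNC, output once
        else if (conn.onFault)
            faultTrans[pair] = conn.trans - deckIt->second;  // subtract the deck NNC part
    }
}
```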
- when an episode/report step is over, the next is started by endEpisode()
- the problem does not deal with updating the simulation time anymore
- rename `episodeIdx` to `reportStepIdx` in the `EclWriter` because
this variable is -- and always has been -- the report step number
used by some parts of `opm-output`'s ECL writing code (the report
step number is equivalent to the episode index plus 1). IMO, the
output and parser code should be made more consistent with regard to
whether they expect 0-based or 1-based indices, but this is a story
for another day.
before this patch, setting the `EnableEclOutput` parameter to `false`
resulted in the `eclWriter_` not being allocated; yet it was used in
some places. this resulted in segfaults.
medium term, the output and restart file writing should be refactored:
the simulator does not need to be aware of this because it can be
accomplished in the problem's endTimeStep() method.
this avoids regressions for decks that use well testing and makes
`ebos` work as expected if UMFPACK is not available, but obviously it
will not work for decks that use multisegment wells in earnest.
`flow` is unaffected by this because it does not use this type tag.
Unfortunately, we first created the NNCs with EDITNNC applied and then
still used the original NNC data to set the transmissibility. Thus
we were actually ignoring EDITNNC.
This commit fixes this by using the data structure that has EDITNNC
applied.
maybe this needs to be reverted since the code in question can
cause the simulation to abort inadvertently.
As usual, `flow` is unaffected because this functionality is only
called in experimental mode and flow calls it itself.
this is part of the release maintenance. in this context "core
headers" means the ones which do not include the well model headers,
and only those which are concerned with non-exotic functionality,
e.g., the PolyhedralGrid and ALUGrid vanguards are not changed.
the only thing which this does so far is to introduce the respective
property and `ebos` will abort the run if the deck requests API tracking.
As usual for experimental features, `flow` is unaffected.
for some reason, this yields quite different results for norne than
the default variant, e.g. when comparing PRESSURE, we get
```
> compareECL -k PRESSURE -t UNRST ebos/NORNE_ATW2013 ebos_altidx/NORNE_ATW2013 1 1e-4
Comparing 'ebos/NORNE_ATW2013' to 'ebos_altidx/NORNE_ATW2013'.
Comparing PRESSURE...
Occurrence in first file = 9
Occurrence in second file = 9
Value index = 0
(first value, second value) = (254.195, 253.191)
Program threw an exception: [/home/and/src/opm-common/build-cmake/fake-src/examples/test_util/EclRegressionTest.cpp:161] Deviations exceed tolerances.
The absolute deviation is 1.00311, and the tolerance limit is 1.
The relative deviation is 0.00394624, and the tolerance limit is 0.0001.
```
IMO this is a bug, but the reasons for it are currently unknown.
these variants should cover most of the common use cases. That said,
there are no plans to provide simulators for combinations of blackoil
extensions or a "multiplexing" simulator like `flow`: If someone is
interested in e.g., an oil-water simulator with polymer and energy
enabled, a separate self-compiled executable should be added locally.
the idea is to compensate the residual of the final solution of a time
step by means of an opposing source term in the next time step.
This patch has been developed as a joint project with [at]totto82 and
[at]osae.
(`flow` is unaffected by this because for now drift compensation is an
experimental feature and thus disabled within the production
simulator.)
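A minimal sketch of the idea (not the eWoms implementation); the scaling of
the stored residual by the next step length is an assumption made for
illustration:
```
#include <cstddef>
#include <vector>

// remember the "mass defect", i.e. the residual left over after the final
// Newton iteration of the time step that just finished
void updateDriftCompensation(const std::vector<double>& residualAfterStep,
                             std::vector<double>& drift)
{
    drift = residualAfterStep;
}

// apply it with opposite sign as a source term during the next time step
void addDriftSource(const std::vector<double>& drift,
                    double nextDt,
                    std::vector<double>& sourceTerm)
{
    sourceTerm.assign(drift.size(), 0.0);
    for (std::size_t dofIdx = 0; dofIdx < drift.size(); ++dofIdx)
        sourceTerm[dofIdx] -= drift[dofIdx] / nextDt;
}
```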
This enables `ebos` to run Norne and other non-trivial data
sets. While at it, adapt the tolerances used by `ebos`.
This patch only affects the research simulator, i.e. `flow` is
unaffected by it.
this bitrotted a bit because it was never seen by the compiler. (I still
did not check if `ebos` compiles and works if `CpGrid` is replaced by
dune-alugrid or `PolyhedralGrid`.)
the convergence behaviour can now be understood and the report step
information is printed, too. This does not affect `flow`, because it
implements its own Newton and time stepping routines.
the speedup gained by parallelism here is simply not worth the
headaches.
note that `flow` is unaffected by this because it uses
`Opm::BlackoilWellModel`.
Usage:
```
BCRATE
1 1 1 1 1 10 X WATER 1e-7 /
```
This will inject 1e-7 of water (mass/time/length/length) on the X side of the
boundary cells with Cartesian indices [1 1 1] to [1 1 10].
this is a compile time switch intended to make it easier to turn
experimental features that are not yet considered production quality
on and off. DUNE has a similar mechanism (i.e., the
`DUNE_GRID_EXPERIMENTAL_GRID_EXTENSIONS` macro), but it relies on
the preprocessor.
For now, the property does not have any effect.
this hopefully makes the purpose of `ebos` clear in its
description. this prose should be interpreted as "if you use ebos in
production, you are on your own and you should only expect a very
limited amount of support (or even sympathy) if something breaks".
in particular the missing synchronization after restarts was very
nasty to find. thanks a ton for pointing this out!
also, IIRC changing DR[SV]DT in the schedule section has been working
properly for a while, so the comment which stated the opposite is
removed as well.
Some time loop stuff was missing in the doobly-doo, the init() method
of the well model was not called and there was the slightly deeper
issue that the initial solutions were not calculated on restarts,
which breaks everything that relies on them. (at the moment, that's
everything which is related to non-trivial boundary conditions.)
the purpose of this hack was to be able to manipulate the Jacobian
matrix directly from outside code. Since `flow` has been converted to
the eWoms wells API, this is not required anymore.