It seems like Eclipse ignores NNCs with small transmissibility.
Small means less than 1e-6 for Eclipse (even though it claims to
ignore values below 1e-5 and/or zero values).
This commit implements the same threshold during I/O.
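A minimal sketch of what this filtering could look like during output; the 1e-6 threshold is the one named above, but the record type and function are illustrative, not the actual opm code:
```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// illustrative stand-in for the real NNC record type in opm-common
struct NncRecord { std::size_t cell1, cell2; double trans; };

// mirror Eclipse's behavior when writing output: drop NNCs whose
// transmissibility is below the threshold it effectively uses (1e-6)
std::vector<NncRecord> filterSmallNnc(const std::vector<NncRecord>& in)
{
    constexpr double eclipseNncThreshold = 1e-6;
    std::vector<NncRecord> out;
    for (const auto& nnc : in)
        if (std::abs(nnc.trans) >= eclipseNncThreshold)
            out.push_back(nnc);
    return out;
}
```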
Also fixes a bug when applying EDITNNC: matching requires cell1 <= cell2
to work. Previously the vector of NNCData was passed in as a reference
and sorted, but it still needed to be transformed later to meet all
prerequisites. With this commit we do these transformations in
sortNncAndApplyEditnnc. Furthermore, the EDITNNC data is passed by value
as it is not needed outside and should usually not be too big; it was
copied outside anyway.
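A hedged sketch of the shape this function could take; only its name, the by-value EDITNNC parameter, and the cell1 <= cell2 requirement come from the text above, the field names and matching logic are assumptions:
```cpp
#include <algorithm>
#include <cstddef>
#include <tuple>
#include <utility>
#include <vector>

// illustrative stand-ins for the real opm-common types
struct NNCdata     { std::size_t cell1, cell2; double trans; };
struct EditNncData { std::size_t cell1, cell2; double mult; };

// normalize all entries to cell1 <= cell2 (EDITNNC matching relies on
// this), sort both vectors, and scale matching NNCs; the EDITNNC data
// is taken by value since it is only needed locally
std::vector<NNCdata> sortNncAndApplyEditnnc(std::vector<NNCdata> nncData,
                                            std::vector<EditNncData> editnncData)
{
    auto normalize = [](auto& d) { if (d.cell1 > d.cell2) std::swap(d.cell1, d.cell2); };
    auto less = [](const auto& a, const auto& b)
    { return std::tie(a.cell1, a.cell2) < std::tie(b.cell1, b.cell2); };

    for (auto& n : nncData) normalize(n);
    for (auto& e : editnncData) normalize(e);
    std::sort(nncData.begin(), nncData.end(), less);
    std::sort(editnncData.begin(), editnncData.end(), less);

    // scale each NNC with the multiplier of every matching EDITNNC record
    for (const auto& e : editnncData)
        for (auto& n : nncData)
            if (n.cell1 == e.cell1 && n.cell2 == e.cell2)
                n.trans *= e.mult;
    return nncData;
}
```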
We first add all NNCs specified in the deck to the output
and then determine additional NNCs by iterating over all faces that
connect cells which are not neighbors in the underlying Cartesian grid.
Therefore we need to make sure that we do not output NNCs twice,
and for faults that also have a specified NNC we need to subtract
the transmissibility specified via NNC.
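Schematically, the bookkeeping for the face-based NNCs could look like this sketch; all types and names here are assumed for illustration:
```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct NncOutput { std::size_t cell1, cell2; double trans; };
using CellPair = std::pair<std::size_t, std::size_t>;

// deckNnc maps a (sorted) cell pair to the transmissibility given via
// the NNC keyword; faceTrans is the value computed for a face between
// cells that are not neighbors in the Cartesian grid
void appendFaceNnc(std::vector<NncOutput>& output,
                   const std::map<CellPair, double>& deckNnc,
                   CellPair pair, double faceTrans)
{
    auto it = deckNnc.find(pair);
    if (it != deckNnc.end())
        // the pair was already written as a deck-specified NNC and the
        // computed transmissibility contains that NNC part, so subtract
        // it to avoid counting the connection twice
        faceTrans -= it->second;
    if (faceTrans > 0.0)
        output.push_back({pair.first, pair.second, faceTrans});
}
```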
- when an episode/report step is over, the next is started by endEpisode()
- the problem does not deal with updating the simulation time anymore
- rename `episodeIdx` to `reportStepIdx` in the `EclWriter` because
this variable is -- and always has been -- the report step number
used by some parts of `opm-output`'s ECL writing code (the report
step number is equivalent to the episode index plus 1). IMO, the
output and parser code should be made more consistent with regard to
whether they expect 0-based or 1-based indices, but this is a story
for another day.
before this patch, setting the `EnableEclOutput` parameter to `false`
resulted in `eclWriter_` not being allocated even though it was still
used in some places. this resulted in segfaults.
medium term, the output and restart file writing should be refactored:
the simulator does not need to be aware of this because it can be
accomplished in the problem's endTimeStep() method.
this avoids regressions for decks that use well testing and makes
`ebos` work as expected if UMFPACK is not available, but obviously it
will not work for decks that use multisegment wells in earnest.
`flow` is unaffected by this because it does not use this type tag.
Unfortunately, we first created the NNC data with EDITNNC applied and
then still used the original NNC data to set the transmissibilities.
Thus we were effectively ignoring EDITNNC. This commit fixes this by
using the data structure that has EDITNNC applied.
maybe this needs to be reverted since the code in question can
cause the simulation to abort inadvertently. As usual, `flow` is
unaffected because this functionality is only invoked automatically in
experimental mode; `flow` calls it itself.
this is part of the release maintenance. in this context "core
headers" means the ones which do not include the well model headers
and which are only concerned with non-exotic functionality; e.g., the
PolyhedralGrid and ALUGrid vanguards are not changed.
the only thing this does so far is to introduce the respective
property; `ebos` will abort the run if the deck requests API tracking.
As usual for experimental features, `flow` is unaffected.
for some reason, this yields quite different results for Norne than
the default variant; e.g., when comparing PRESSURE, we get
```
> compareECL -k PRESSURE -t UNRST ebos/NORNE_ATW2013 ebos_altidx/NORNE_ATW2013 1 1e-4
Comparing 'ebos/NORNE_ATW2013' to 'ebos_altidx/NORNE_ATW2013'.
Comparing PRESSURE...
Occurrence in first file = 9
Occurrence in second file = 9
Value index = 0
(first value, second value) = (254.195, 253.191)
Program threw an exception: [/home/and/src/opm-common/build-cmake/fake-src/examples/test_util/EclRegressionTest.cpp:161] Deviations exceed tolerances.
The absolute deviation is 1.00311, and the tolerance limit is 1.
The relative deviation is 0.00394624, and the tolerance limit is 0.0001.
```
IMO this is a bug, but the reasons for it are currently unknown.
these variants should cover most of the common use cases. That said,
there are no plans to provide simulators for combinations of blackoil
extensions or a "multiplexing" simulator like `flow`: if someone is
interested in, e.g., an oil-water simulator with polymer and energy
enabled, a separate self-compiled executable should be added locally.
the idea is to compensate the residual of the final solution of a time
step by means of an opposing source term in the next time step.
This patch has been developed as a joint project with [at]totto82 and
[at]osae.
(`flow` is unaffected by this because for now drift compensation is an
experimental feature and thus disabled within the production
simulator.)
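A minimal sketch of the drift compensation idea, with assumed names and an assumed per-step weighting:
```cpp
#include <cstddef>
#include <vector>

// after a time step has converged, remember the residual of its final
// solution; in the next step, add it back negated as a source term so
// that mass lost to incomplete convergence is recovered
std::vector<double> driftCompensation; // one entry per degree of freedom

void rememberDrift(const std::vector<double>& finalResidual)
{ driftCompensation = finalResidual; }

double sourceWithDriftCompensation(std::size_t dofIdx,
                                   double externalSource,
                                   double dt)
{
    if (dofIdx >= driftCompensation.size())
        return externalSource; // no drift recorded yet (e.g., first step)
    // opposing source, distributed over the length of the next time step
    return externalSource - driftCompensation[dofIdx] / dt;
}
```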
This enables `ebos` to run Norne and other non-trivial data
sets. While at it, adapt the tolerances used by `ebos`.
This patch only affects the research simulator, i.e. `flow` is
unaffected by it.
this bitrotted a bit because it was never seen by the compiler. (I
still did not check whether `ebos` compiles and works if `CpGrid` is
replaced by dune-alugrid or `PolyhedralGrid`.)
the convergence behaviour can now be understood and the report step
information is printed, too. This does not affect `flow` because it
implements its own Newton and time stepping routines.
the speedups gained by parallelism here are simply not worth the
headaches.
note that `flow` is unaffected by this because it uses
`Opm::BlackoilWellModel`.
Usage:
```
BCRATE
1 1 1 1 1 10 X WATER 1e-7 /
```
This injects water at a rate of 1e-7 (mass/time/length/length) on the
X side of the boundary cells with Cartesian indices [1 1 1] to [1 1 10].
this is a compile time switch with the intention to be able to more
easily turn experimental features that are not yet considered to be
production quality on and off. DUNE has a similar mechanism (i.e., the
`DUNE_GRID_EXPERIMENTAL_GRID_EXTENSIONS` macro), but it relies on
the preprocessor.
For now, the property does not have any effect.
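A generic sketch of the mechanism (not the actual eWoms property code): the flag is an ordinary compile-time constant, so the compiler still sees and type checks the experimental branch even when it is disabled, unlike with a preprocessor-based switch:
```cpp
// illustrative type tags; the real ones come from the eWoms property system
struct ProductionTypeTag { static constexpr bool enableExperiments = false; };
struct ResearchTypeTag   { static constexpr bool enableExperiments = true; };

template <class TypeTag>
void endTimeStep()
{
    if (TypeTag::enableExperiments) {
        // experimental-only code path: always compiled (so it cannot
        // bitrot silently), but dead-code eliminated when disabled
    }
}
```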
this hopefully makes the purpose of `ebos` clear in its
description. this prose should be interpreted as "if you use ebos in
production, you are on your own and you should only expect a very
limited amount of support (or even sympathy) if something breaks".
in particular the missing synchronization after restarts was very
nasty to find. thanks a ton for pointing this out!
also, IIRC changing DR[SV]DT in the schedule section has been working
properly for a while, so the comment which stated the opposite is
removed as well.
Some time loop stuff was missing in the doobly-doo, the init() method
of the well model was not called, and there was the slightly deeper
issue that the initial solutions were not calculated on restarts,
which breaks everything that relies on them. (at the moment, that's
everything related to non-trivial boundary conditions.)
the purpose of this was a hack to allow manipulating the Jacobian
matrix directly from outside code. Since `flow` has been converted to
the eWoms wells API, this is not required anymore.
This seems to be covered for types and functions by our coding style,
with some room for interpretation. For variables the coding style asks
for underscores, though, but never mind.
The former order of first applying NNC to the grid transmissibilities
and then applying EDITNNC resulted in NNCs being scaled twice. The
reason is that applyNNCToGridTrans_ already scales the NNC values with
EDITNNC. With this patch the order of the function calls is reversed
to prevent the double scaling.
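A toy illustration of why the order matters; all names and containers here are illustrative, only applyNNCToGridTrans_ is from the actual code:
```cpp
#include <map>
#include <utility>

using TransMap = std::map<std::pair<int, int>, double>;

// nnc holds the NNC values, which are already scaled with EDITNNC when
// they are added to the grid transmissibilities in trans
void buggyOrder(TransMap& trans, const TransMap& nnc, const TransMap& editnnc)
{
    for (const auto& [pair, value] : nnc)
        trans[pair] += value;      // NNC value already includes EDITNNC
    for (const auto& [pair, mult] : editnnc)
        trans[pair] *= mult;       // bug: the NNC part is now scaled twice
}

void fixedOrder(TransMap& trans, const TransMap& nnc, const TransMap& editnnc)
{
    for (const auto& [pair, mult] : editnnc)
        trans[pair] *= mult;       // only the grid transmissibility is scaled
    for (const auto& [pair, value] : nnc)
        trans[pair] += value;      // the NNC part enters exactly once
}
```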
This includes neighboring connections and NNCs due to faults. In both
cases the transmissibilities specified via NNC are added to the
explicitly set or computed ones.
This is the first step towards supporting NNC in flow.
the parameter is called `EclNewtonSumToleranceExponent`. if it is set
to 1, the specified tolerance will be used directly. (this is not
desirable in the general case, though, because at the same result
quality, the sum error for large reservoirs can be larger than for
small ones.)
that said, we scale the tolerance only with the cube root of the pore
volume; the rationale is that the same amount of mass can get lost
"along" a line for each timestep.
maybe it would be a good idea to do something like this for time step
size as well because taking multiple small time steps currently allows
a much larger error in the result than doing it in one big step.
the flags which I used are
```
-pedantic \
-Wall \
-Wextra \
-Wformat-nonliteral \
-Wcast-align \
-Wpointer-arith \
-Wmissing-declarations \
-Wcast-qual \
-Wshadow \
-Wwrite-strings \
-Wchar-subscripts \
-Wredundant-decls \
-fstrict-overflow \
-O3 \
-march=native \
-DNDEBUG=1
```
note that some heavy filtering of the output is not the worst idea
because DUNE emits plenty of warnings with these flags.
Also, there were some pesky warnings in test_ecl_output which I don't
know how to fix:
```
tests/test_ecl_output.cc:218:73: warning: missing initializer for member ‘Opm::data::Connection::effective_Kh’ [-Wmissing-field-initializers]
```
some weird hacks (hello, DR[SV]DT) cause a change of the storage term
in the first Newton-Raphson iteration compared to the solution of the
previous time level. In order to use the correct values, one thus must
explicitly recompute the storage term for the previous time step
instead of just reusing the result of the first Newton-Raphson
iteration of the current time step.
Previously, all processes reported
```
Warning: Fast restart using SAVE is not supported. Standard restart file is written instead.
```
Now this warning is emitted only on the master process, where logging is activated.
reads tracer input from the deck and solves the tracer equations fully
implicitly as a post-processing step in endTimeStep.
tested on a simple modified SPE1CASE1 deck and compared with Eclipse.
TODO: restart and parallel support
this allows assembling the Jacobian matrices directly into the native
format expected by the linear solver. So far, only backends using
Dune::BCRSMatrix are provided, but there are work-in-progress patches
for dune-fem, ViennaCL and PETSc backends.
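For illustration, the kind of matrix type such a backend targets (a sketch; the actual backend API is more involved):
```cpp
#include <dune/common/fmatrix.hh>
#include <dune/istl/bcrsmatrix.hh>

// the Jacobian is assembled directly into the blocked sparse matrix
// type consumed by the Dune-based linear solvers, instead of being
// copied over from an intermediate, solver-agnostic sparse format
constexpr int numEq = 3; // e.g., a three-phase black-oil model
using MatrixBlock  = Dune::FieldMatrix<double, numEq, numEq>;
using JacobianType = Dune::BCRSMatrix<MatrixBlock>;
```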