1) Add the possibility for the user to choose between local and global
coordinate permeability in the transmissibility calculations.
2) Throw for CpGrid
3) Add default for switch
Note that this patch does not introduce any real temperature
dependence but only changes the APIs for the viscosity- and
density-related methods. Note that I also don't like the fact that
this requires so many changes to so many files, but with the current
design of the property classes I cannot see a way to avoid this...
this helps to keep the core blackoil model code lean and mean and it
is also less confusing for newbies because the ECL blackoil simulator
is not a "test" anymore.
in case somebody wonders, "ebos" stands for "&eWoms &Black-&Oil
&Simulator". I picked this name because it is short (one syllable), has
not been taken by anything else (as far as I know), and "descriptive"
names are rare for programs anyway: everyone who does not yet know
about 'git' or 'emacs' and tells me that based on their names they
must be a source-code management system and an editor gets a crate of
beer sponsored by me!
This code is required in the first place because opm-material always
specifies all parameters in terms of the wetting saturations while the
gas is the non-wetting phase in a gas-oil system.
this does not disrupt the block nature of the linearized matrix
(i.e. Dune::BCRSMatrix is still used): if the number of auxiliary
equations for a DOF is smaller than that of the "main" discretization,
the superfluous equations are padded; if it is larger, additional DOFs
are added.
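A minimal sketch of the matrix layout this implies (illustrative; the
actual assembler code is more involved):

    #include <dune/common/fmatrix.hh>
    #include <dune/istl/bcrsmatrix.hh>

    // the block size stays fixed at the number of equations of the
    // "main" discretization; an auxiliary DOF contributing fewer
    // equations gets trivial padding rows (1 on the diagonal, 0 rhs)
    const int numEq = 3;
    typedef Dune::FieldMatrix<double, numEq, numEq> MatrixBlock;
    typedef Dune::BCRSMatrix<MatrixBlock> Matrix;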
the biggest change is that it is now based on a new approach: the well
model now always calculates the bottom hole pressure for the full well
when asked for a source term. This change makes it possible to
implement cross flow within wells properly and should also make the
well model physically correct.
Also, the well model now uses the connection transmissibility factor
which makes it possible to use this quantity if it is specified by the
deck...
- satfuncStandard: Unscaled curves, using the standard version of the
Gwseg model.
- satfuncEPSBase: Unscaled curves, but using the EPS version of
the Gwseg model. There are some differences between this and the
standard version of Gwseg for derivatives at critical saturations.
The scheme for calculating the derivatives should be discussed.
(Will file a separate issue on this.)
- satfuncEPS_A: Scaled curves. Scaling parameters specified via
SWL family.
- satfuncEPS_B: Scaled curves. Scaling parameters identical to _A
but this time specified via the ENPTVD table. Test currently
suspended due to problems with eclipse-state.
- satfuncEPS_C: Scaled curves. Scaling parameters identical to _A
but this time specified via Norne-like syntax (EQUALS, COPY etc.).
Shut wells are not added to the well list and thus not considered in the
simulator.
The shut well test in test_wellsmanager is modified to assert this
behaviour.
BUG: This change provokes an assert in the EclipseWriter as the number
of wells in the well state differs from the number of wells in the schedule.
this is necessary because tables now must be queried using
EclipseState instead of directly. This implies that EclipseState can
be instantiated in the first place...
TODO (?): allow EclipseState instantiation for decks without a grid.
before this, gradients had the direction of the face between two
finite volumes; now they point along the line connecting the two
FV centers.
For axis-aligned grids the result is identical for interior faces, but
it is different for boundary faces or if faces are not
axis-aligned. This patch fixes the SPE9 troubles with anisotropic
permeabilities on tilted grids...
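For illustration, a minimal sketch of such a two-point gradient (not
the actual eWoms code):

    #include <dune/common/fvector.hh>

    typedef Dune::FieldVector<double, 3> Vector;

    // gradient of a scalar quantity between two finite volumes,
    // oriented along the line connecting the cell centers xIn and xEx
    Vector twoPointGradient(const Vector& xIn, double vIn,
                            const Vector& xEx, double vEx)
    {
        Vector grad = xEx;
        grad -= xIn;                            // center-to-center vector
        grad *= (vEx - vIn)/grad.two_norm2();   // slope along that line
        return grad;
    }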
the goal is to make it faster on computers with many cores: The
easiest way to do this is to ensure that the longest-running tests do
not take too much time and that they all need about the same time. Thus
this patch contains the following changes, which limit the CPU time
taken by each test to about two minutes in debug mode on my machine:
- the water-air problem using the non-isothermal primary variable
switching model now uses a 16x16 instead of a 32x32 grid. as
compensation it now runs for a year instead of 5000 seconds, and
global grid refinement is now tested.
- the end time of the lens problem ctests is now 3000 instead of
30000 seconds. The binary itself does not change at all.
- sort the tests in the CMakeLists.txt roughly in the order of their
required time. (this keeps ctest from waiting too long on long-running
tests that were started late.)
This commit adds a simple facility for converting component rates at
surface conditions to voidage rates at reservoir conditions. It is
intentionally limited in scope and meant to be employed only in the
context of class FullyImplicitBlackoilSolver<> or something very
similar. In particular, class SurfaceToReservoirVoidage<> assumes
that it will be used to compute conversion coefficients for
component rates to voidage rates, and that those coefficients will
typically be entered into the coefficient matrix of a linearised
residual.
Add a trivial test just to demonstrate the setup and calling
process. This is not a feature or correctness test.
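For reference, the underlying black-oil conversion looks roughly like
the following sketch (illustrative names only; this is not the
interface of SurfaceToReservoirVoidage<>):

    // convert surface rates of the oil and gas components to phase
    // voidage rates at reservoir conditions, given the formation
    // volume factors bo, bg and the ratios rs (dissolved gas) and
    // rv (vaporized oil)
    void surfaceToReservoir(double qo_s, double qg_s,
                            double bo, double bg, double rs, double rv,
                            double& qo_r, double& qg_r)
    {
        const double det = 1.0 - rs*rv;
        qo_r = bo*(qo_s - rv*qg_s)/det;
        qg_r = bg*(qg_s - rs*qo_s)/det;
    }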
this means that all code which could potentially throw an exception is
moved to this method. (In particular, FluidSystem::init() proved
troublesome in the past.) Besides avoiding the segmentation faults
which stem from exceptions thrown in constructors, this also has the
advantage that simulations which spend a noticeable amount of time on
initialization stop at the "correct" place, i.e. after the "Finish init
of the problem" message has been printed by the simulator...
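The pattern, sketched (the method name is illustrative):

    // the constructor does nothing that can throw; all risky
    // initialization, e.g. FluidSystem::init(), is deferred to a
    // method which the simulator calls after construction
    struct Problem
    {
        Problem()
        {}

        void finishInit()
        { /* potentially throwing initialization goes here */ }
    };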
this regressed after the time step index of the initial solution was
changed from 0 (an index which, actually, was also used for the first
time step...) to -1 in b30af664.
for the legacy C-style grid the unit test is more or less complete (it
does not test FAULTMULT and NNC, etc, but these could be added with
sufficient determination), for Dune::CpGrid it currently does not
really check anything because I have not found a good way for CpGrid
to produce the "global" intersection index of an intersection...
this is required so that the element-centered finite volume method
does not handle each partition of the domain separately (i.e., so that
fluxes across faces on the process boundaries are considered). mea culpa!
The fix for this is to also include these entries in the matrix which
uses domestic indices. This required some rather extensive changes to
the blacklisting mechanism, since it must now be possible to
translate the index of a blacklisted entity (i.e., an entity in a
ghost or an overlap cell) to a domestic index (i.e., the corresponding
index in the algebraic overlap).
Note that the code for algebraic overlaps is *fun* and the person who
wrote it should be tarred and feathered. (*ouch* ;)) Seriously: Better
approaches than "lets-throw-this-away-and-use-grid-overlaps" are
deeply appreciated. (The grid overlap is not really useful in Dune
because only "Mickey Mouse grids" like Dune::YaspGrid support it.)
New function well_controls_clone(), implemented in terms of the
public API only, mirrors the objective of function clone_wells(),
only for well control sets. Add a basic test to demonstrate the
function too.
"intensive" means that the value of these quantities at a given
spatial location does not depend on any value of the neighboring
intensive quantities. In contrast, "extensive" quantities depend on
the intensive quantities of the environment of the spatial location.
this change is necessary because the previous nomenclature was very
specific to finite volume discretizations, but the models themselves
were already rather generic. (i.e., "volume variables" are the
intensive quantities of finite volume methods and "flux variables"
are the extensive ones.)
this basically means using Opm::EclipseState instead of the raw deck
for these keywords.
with this, property modifiers like ADD, MULT, COPY and friends are
supported for at least the PERM* keywords. If additional keywords are
required these can be added relatively easily as well.
no ctest regressions have been observed with this patch on my machine.
i.e. reading the grid properties from EclipseState instead of from the
raw deck. This requires that all deck files exhibit a GRID and a
SCHEDULE section or else EclipseState will throw in the constructor.
For some reason, it changed because of the transition to the primary
variable switching approach. Because I trust the new result more than
the old, let's make this the new reference solution.
this also comes with moving responsibilities around and some smaller
cleanups of the grid creation. (although grid creation could possibly
be done by the simulator now, the GridCreator concept has not
been abandoned yet...)
To support this, the solveSystem methods of the LinearSolverInterface get
an optional additional parameter of type boost::any. It can hold any
copy-constructible object. In our case it is used to pass the information about
the parallelization into the solvers of dune-istl without the compiler needing to know
their type. Inside of LinearSolverIstl::solveSystem we check whether the type stored inside
the boost::any is the new ParallelIstlInformation. If this is the case, we extract the information
and use the parallel solvers if available; otherwise we solve serially/sequentially.
The new ParallelIstlInformation is needed because the OwnerOverlapCopyCommunication is not
copy-constructible. This is indeed a design flaw that should and will be fixed upstream, but for the
time being we need ParallelIstlInformation to transfer the ParallelIndexSet and RemoteIndices
objects.
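Sketched (signatures abbreviated; the real interface also has the
matrix and vector arguments):

    #include <boost/any.hpp>

    struct ParallelIstlInformation
    { /* carries the ParallelIndexSet and RemoteIndices objects */ };

    void solveSystem(/* matrix, rhs, solution, */
                     const boost::any& parallelInfo)
    {
        if (const ParallelIstlInformation* info =
                boost::any_cast<ParallelIstlInformation>(&parallelInfo)) {
            (void)info; // parallel info present: use the parallel
                        // dune-istl solvers
        }
        else {
            // no parallel info: solve serially/sequentially
        }
    }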
Conflicts:
opm/autodiff/FullyImplicitBlackoilSolver.cpp
To resolve conflicts, WellState was changed to WellStateFullyImplicitBlackoil
in multiple places, and perfRate() changed to perfPhaseRate() in
WellDensitySegmented.
This test sets up a simple Laplace problem and solves it with the available
solvers. It assumes that either dune-istl or UMFPack is present, which is
assumed to be safe.
this allows retrieving the name of the problem before it is
instantiated. this is required to be able to print the "Initializing
problem" message at the correct point (i.e., before instantiating the
problem).
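A sketch of the idea (illustrative; the actual eWoms mechanism may
differ in the details):

    #include <string>

    struct LensProblem
    {
        // static, so the simulator can print the problem's name
        // before any LensProblem object exists
        static std::string name()
        { return "lens"; }
    };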
normally, this should be fine as it was before this patch, because the
non-restarted test produces more time steps than the restarted one and
only the last file gets verified, but it used to be quite a hack and
one never knows, so it's wise to explicitly specify this as a
dependency.
Also, this change makes sure that ctest is aware of the number of
cores required by a test, which should lead to less contention...
This should make things much more robust, partially because now the
linear and the non-linear solvers use the same convergence criterion.
Also, this patch includes some collateral indentation improvements.
In summary:
- added RsFunction (base class),
- made NoMixing, RsVD, RsSatAtContact inherit RsFunction,
- RS and RV are no longer template arguments for EquilReg class,
- EquilReg constructor now takes two shared_ptr<Miscibility::RsFunction>,
- use of constructor updated, mostly using make_shared.
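A condensed sketch of the new construction style (types stripped to
the bare minimum):

    #include <memory>

    namespace Miscibility {
        struct RsFunction { virtual ~RsFunction() {} }; // new base class
        struct NoMixing : public RsFunction {};
        struct RsVD : public RsFunction {};
    }

    struct EquilReg
    {
        // RS and RV are now passed as shared_ptr instead of being
        // template arguments
        EquilReg(std::shared_ptr<Miscibility::RsFunction> rs,
                 std::shared_ptr<Miscibility::RsFunction> rv)
            : rs_(rs), rv_(rv)
        {}

        std::shared_ptr<Miscibility::RsFunction> rs_, rv_;
    };

    // usage: EquilReg reg(std::make_shared<Miscibility::RsVD>(),
    //                     std::make_shared<Miscibility::NoMixing>());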
most warnings were in DUNE and ALUGrid, but these have been
ignored. Also fixing some of these warnings (in particular the
"parameter foo is unused" ones) would make the code harder to read and
understand, so they have been ignored, too...
This commit adds support for assigning the initial phase pressure
distribution to a subset of the total grid cells. This is needed in
order to fully support equilibration regions. The existing region
support (template parameter 'Region' in function 'phasePressures()')
was only used/needed to define PVT property (specifically, the fluid
phase density) calculator pertaining to a particular equilibration
region.
This commit adds a simple facility for calculating initial phase
pressures assuming stationary conditions, a known reference pressure
in the oil zone as well as the depth and capillary pressures at the
water-oil and gas-oil contacts.
Function 'Opm::equil::phasePressures()' uses a simple ODE/IVP-based
approach, solved using the traditional RK4 method with constant step
sizes, to derive the required pressure values. Specifically, we
solve the ODE
dp/dz = rho(z,p) * g
with 'z' representing depth, 'p' being a phase pressure and 'rho' the
associated phase density. Finally, 'g' is the acceleration of
gravity. We assume that we can calculate phase densities, e.g.,
from table look-up. This assumption holds in the case of an ECLIPSE
input deck.
Using RK4 with constant step sizes is a limitation of this
implementation. It basically assumes that the phase densities
vary only smoothly with depth and pressure (at reservoir
conditions).
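In code, the constant-step RK4 integration amounts to something like
the following sketch (rho is a user-supplied density functor; this is
not the actual opm-core implementation):

    #include <functional>

    double integratePressure(const std::function<double(double, double)>& rho,
                             double z0, double p0, // reference depth/pressure
                             double z1, int steps, // target depth, step count
                             double g = 9.80665)
    {
        const double h = (z1 - z0)/steps; // constant step size
        double z = z0, p = p0;
        for (int i = 0; i < steps; ++i) {
            // classical RK4 for dp/dz = rho(z,p) * g
            const double k1 = rho(z, p)*g;
            const double k2 = rho(z + h/2, p + h*k1/2)*g;
            const double k3 = rho(z + h/2, p + h*k2/2)*g;
            const double k4 = rho(z + h, p + h*k3)*g;
            p += h*(k1 + 2*k2 + 2*k3 + k4)/6;
            z += h;
        }
        return p;
    }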
The pvt interface is extended to handle wet-gas systems:
1. rvSat is added as a function in the PVT interface
2. SinglePvtLiveGas computes the pvt values and its derivatives
3. The old rbub variable is changed to rsSat for clarity
4. The new interface is tested in test_blackoilfluid with data from
liveoil.DATA and wetgas.DATA
instead of directly from the discretization. this should make the
generic quantities re-appear in the output for visualization.
this implies that the reference solutions for the models which derive
directly from the discretization also had to be updated.
This makes eWoms multi-discretization capable. Along the way, this
fixes some bugs and does a medium sized reorganization of the source tree.
This is a squashed patch of the following commits:
--------
1st commit message:
add initial version of the element centered finite volume discretization
currently, it is a misnomer as it is just a copy of the vertex
centered discretization plus some renames...
--------
2nd commit message:
rename [VE]cfvModel -> [VE]cfvDiscretization
--------
3rd commit message:
ecfv: preliminary changes required to make it compile
but not work yet...
--------
4th commit message:
Rename *FvElementGeometry to *Stencil
"Stencil" seems to be the standard expression for this concept...
(also, it is not specific to finite volume methods and is shorter.)
--------
5th commit message:
refactor the stencil class for the element centered finite volume discretization
--------
6th commit message:
ECFV: some work on the stencil class
--------
7th commit message:
ECFV: make the boundary handling code compile
--------
8th commit message:
rename elemContext() to elementContext()
--------
9th commit message:
ECFV: make the VTK output modules compile
--------
10th commit message:
stencil: introduce the concept of primary DOFs
also save a vector of all element pointers in the stencil.
--------
11th commit message:
ECFV: try to fix assembly; add missing timeIdx arguments to the num*() methods
--------
12th commit message:
ECFV: fix stupid mistake in the assembler
--------
13th commit message:
ECFV: remove a few implicit DOF == vertex assumptions
the black-oil example now runs without valgrind complaints until it encounters
a negative oil mole fraction.
--------
14th commit message:
VCFV: make everything compile again
all vertex centered FV examples should now work again...
--------
15th commit message:
rename [ev]cfvmodel.hh to [ev]cfvdiscretization.hh
the classes have already been renamed.
--------
16th commit message:
ECFV: make it work to the point where it can write out the initial solution.
--------
17th commit message:
ECFV: make it work
the local residual/jacobian needed some work in distinguishing primary
and secondary DOFs, and there was a minor issue with the serialization
code.
for some reason, it still seems to be incorrect. (-> convergence is too slow.)
--------
18th commit message:
VCFV: make it compile for the black oil model again
--------
19th commit message:
VCFV: make it compile with the remaining models again
--------
20th commit message:
flash model: make it work with ECFV
although this breaks its compatibility with VCFV. (-> next commit)
--------
21st commit message:
adapt the VCFV to make it compatible with the flash model again
--------
22nd commit message:
make all models compile with VCFV again
--------
23rd commit message:
VCFV: more cleanups of the stencil
VcfvStencil now does not have any public attributes anymore. TODO: do
not export attributes in the SubControlVolume and SubControlVolumeFace
classes.
--------
24th commit message:
VCFV: actually update the element pointer
--------
25th commit message:
change the blackoil model back to ECFV
--------
26th commit message:
immiscible model: make it compatible with the ECFV discretization
--------
27th commit message:
PVS model: make it work with ECFV
--------
28th commit message:
NCP model: make it work with ECFV
--------
29th commit message:
rename Vcfv*VelocityModule to *VelocityModule
--------
30th commit message:
richards model: make it work with ECFV
--------
31st commit message:
unify the ECFV and the VCFV VTK output modules
and other cleanups
--------
32nd commit message:
unify the common code of the VCFV and the ECFV discretizations
--------
33rd commit message:
unify the element contexts between element and vertex centered finite volumes
--------
34th commit message:
unify the local jacobian class of the finite volume discretizations
--------
35th commit message:
replace [VE]vcf(LocalResidual|ElementContext|BoundaryContext|ConstraintsContext) by generic code
--------
36th commit message:
replace the [EV]cfvLocalResidual by generic code
--------
37th commit message:
unify the MultiPhaseProblem and Problem classes, introduce NullBorderListCreator
--------
38th commit message:
remove the discretization specific boundary context
--------
39th commit message:
unify the [EV]cfvDiscretization classes
--------
40th commit message:
Unify [EV]cfvMultiPhaseFluxVariables
--------
41st commit message:
Unify the [EC]cfvNewton* classes
--------
42nd commit message:
Unify [EV]cfvVolumeVariables
--------
43rd commit message:
unify [EV]cfvAssembler
--------
44th commit message:
unified flux variables: fix stupid mistake when calculating pressure gradients
--------
45th commit message:
unify what's to unify for the [EV]CFV properties
--------
46th commit message:
make the method to calculate gradients and values at flux approximation points changeable
Currently, this is used by the vertex centered finite volume method to
be able to use P1-finite element gradients instead of two-point
ones...
--------
47th commit message:
make the restart code work correctly, use the correct DofMapper for VCFV
--------
48th commit message:
actually use the gradient calculator in a model
the immiscible model in this case
--------
49th commit message:
move some files around to where they belong, use the new gradient calculation code in all models
TODO: proper handling of boundary gradients
--------
50th commit message:
fix the stokes model
currently it only works with the vertex centered finite volume
discretization, but the plan is to soon move it to a staggered grid
scheme anyway...
--------
51st commit message:
move all models back to using the vertex centered finite volume discretization by default
--------
52nd commit message:
models: some variable renames and documentation fixes
- scv -> dof
- vert -> dof
- vertex -> dof
- replace 'VCFV'
- fix some typos
--------
53rd commit message:
don't expect UG anymore
since it is quite non-free and hard to get. we now use ALUGrid instead!
--------
54th commit message:
temporarily disable jacobian recycling
--------
55th commit message:
fix writing/reading restart files using the generic code
--------
56th commit message:
fix bug where fluxes were only counted once in the stencil
this only affected the vertex centered finite volume discretization...
--------
57th commit message:
boundary gradients: use the center of the sub-control volume adjacent to a boundary segment
--------
58th commit message:
make it compile on GCC
--------
59th commit message:
get rid of most hacks
for this, partial reassembly and jacobian recycling were brought
back. For this and the remaining stuff the main trick is the
introduction of the GridCommHandleFactory concept, which constructs
communication handles suited for the respective spatial
discretization...
--------
60th commit message:
fix a few annoying bugs
first, the default convergence criterion for the linear solver did not
honor the initial residual, which led to linear solver breakdowns;
then some debugging code was left in the discrete fracture model, and
then there was a bug in the TP gradient approximation class...
this has the consequence that we need a new reference solution for the
discrete fracture problem...
--------
61st commit message:
iterative linear solver: remove the code for the non-default convergence criteria
--------
62nd commit message:
provide the FE cache instead of the local FE
this fixes a segfault in the stokes model caused by the fact that the
local FE was not initialized at this point.
--------
63rd commit message:
(Navier-)Stokes: fix bug due to the transition to unit normals
now, all tests pass for this branch. The only things which need to be
fixed are some annoying performance regressions compared to master and
some bug in the splices feature of the property system...
--------
64th commit message:
some fix for the local residual of the immiscible model
--------
65th commit message:
Navier-Stokes: implement SCV center gradients
There seems to be a bug in the previous implementation (the jacobian
inverse transposed is evaluated using the local, not the global
geometry), so the reference solution for the stokes2c test problem has
also been updated...
--------
66th commit message:
remove the ALUGrid specialization of the LensGridCreator and the YaspGrid one for the fingerproblem
using a different grid seems to sometimes cause a different vertex
order, which in turn causes the respective test to fail if the
reference solution was computed using the other grid...
--------
67th commit message:
VCFV: use the correct BorderListCreator
this makes MPI parallel computations work again. apart from
performance regressions, this branch does not exhibit any known
regressions compared to master anymore...
--------
68th commit message:
make everything compile with the element centered finite volume discretization
except the Navier-Stokes and the two-phase DFM models, of course...
--------
69th commit message:
minor fixes
- make the navier-stokes model slightly more generic by using the
proper (in,ex)teriorIndex() methods on sub-control volumes
- make the signature of the calculateValue() template method of the
common two-point gradient approximator match the one of the vertex
centered finite volume one
--------
70th commit message:
fix fallout from the Big Rebase
--------
71st commit message:
ECFV: some bugs in the boundary
--------
72nd commit message:
make computeFlux() compute area-specific quantities
--------
73rd commit message:
fix more bugs in the element centered FV discretization
now eWoms should match Dumux pretty closely...
--------
74th commit message:
coalesce the common code of the multi phase porous medium models into "MultiPhaseBaseModel"
--------
75th commit message:
update reference solutions
these were changed because of the screw-up with the area of boundary
segments...
--------
76th commit message:
rename "ImplicitBase" to "FvBase"
because in eWoms, everything is implicit and these are currently the
base classes for all finite volume discretizations.
--------
77th commit message:
make the spatial discretization selectable using a splice
This requires an opm-core with the patches from
https://github.com/OPM/opm-core/pull/446 merged...
--------
78th commit message:
rename the properties used for splices to *Splice
--------
79th commit message:
move the files in 'tests/models' to 'tests'
since 'tests' was empty except for the 'models' subdirectory...
--------
80th commit message:
improve and fix the tutorial
--------
81st commit message:
remove the -fno-strict-aliasing flag from the provided option files
seems like recent versions of Dune have been adapted...
--------
82nd commit message:
also compile all CO2 injection simulations using the element centered finite volume discretization
--------
83rd commit message:
PVS model: make it work properly with the element-centered finite volume discretization
because DOF != number of vertices
it seems like shellcheck complains about this but once the order of
'>' and '2>&1' is reversed, bash does not like it anymore. since
making bash happy is way more important than shellcheck, let's revert
this part of the change...
this renames the 'test' directory to 'tests' and 'test/implicit' to
'tests/models'. the latter change reflects the fact that in eWoms all
models are implicit since the IMPET models have been removed.
our policy is that we only use boost if necessary, i.e., if the oldest
supported compiler does not support a given feature but boost
does. since we recently switched to GCC 4.4 or newer, std::shared_ptr
is available unconditionally.
The current implementations of IncompPropertiesInterface are very
all-or-nothing. In some situations, you want to read rock and fluid
properties from an Eclipse file, but use analytical functions for
the unsaturated properties. Or you want to update properties based
on a marching filter.
This patch provides a way to mix various property objects, or to
"shadow" the properties with a raw array of data, so you don't have
to reimplement the entire interface just to make a small change.
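A stripped-down sketch of the shadowing idea (the real
IncompPropertiesInterface has many more methods):

    // a decorator that forwards to an existing property object but
    // "shadows" one property with a raw array of data
    struct Props // stand-in for IncompPropertiesInterface
    {
        virtual ~Props() {}
        virtual const double* porosity() const = 0;
    };

    struct ShadowedPorosity : public Props
    {
        ShadowedPorosity(const Props& base, const double* poro)
            : base_(base), poro_(poro)
        {}

        virtual const double* porosity() const
        { return poro_ ? poro_ : base_.porosity(); }

        const Props& base_;
        const double* poro_;
    };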
The numbers in the deck are more indicative of FIELD unit conventions
than METRIC unit conventions, so allow the input parser to interpret
the data in that manner.
The former is more assertive than the latter and provides better
diagnostics. Incidentally, switching to *_EQUAL() also fixes an
assignment that was (probably) intended to be an equality test:
BOOST_CHECK(count = num)
in both test cases.
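That is, schematically:

    BOOST_CHECK(count = num);      // assignment; passes whenever num != 0
    BOOST_CHECK_EQUAL(count, num); // equality test with real diagnostics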
Specifically,
- #include <config.h> where appropriate (all .cpp files)
- Adjust include statements to account for sub-directory locations
of .hpp files.
if one of the files matches (fuzzily, obviously), the test counts as
passed; if none matches, it counts as failed. this lets us add reference
results for hosts that need the same number of time steps, but for
whatever reason produce a different result. also, the number of time
steps required to produce the result do not matter anymore, as long as
the data is the same (in the fuzzy sense).
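The matching rule, sketched (tolerance and names are illustrative):

    #include <algorithm>
    #include <cmath>

    // two values agree if they are equal up to a relative tolerance;
    // a test passes if *any* reference file matches in this sense
    bool fuzzyEqual(double a, double b, double relTol = 1e-6)
    {
        return std::abs(a - b)
            <= relTol*std::max(std::abs(a), std::abs(b));
    }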
- run one simulation in parallel if MPI is available
- test the parameter passing infrastructure
- reduce end time of the navier stokes problem to 1e-3 to make it pass
on the first try
- add CUSTOM_CXX_FLAGS to CMakeLists.txt to allow passing compiler
flags from the command line to make clang shut up about the dune
issues it encounters.