it seems that most build systems pass a -DHAVE_CONFIG_H flag to the
compiler which still causes `#if HAVE_CONFIG_H` to evaluate to false,
even though it clearly is supposed to be triggered.
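For reference, a minimal sketch of the two guard variants in question (generic file, not a specific OPM header):

```cpp
#if HAVE_CONFIG_H       // evaluates the macro's value; an undefined macro counts as 0
#include "config.h"
#endif

#ifdef HAVE_CONFIG_H    // only checks whether the macro is defined at all, regardless of value
#include "config.h"
#endif
```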
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available with proper build systems, and if it
is not included the build either breaks at the linking stage or -- at the
very least -- the runtime behavior of the resulting libraries becomes
very awkward.
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two parallel compile processes because that
machine "only" has 8 GB of RAM (granted, the optimization options may
have been a bit too excessive). under the new scheme, each
specialization of the simulator is put into a separate compile unit
which is part of libopmsimulators. this has the advantages that the
specialized simulators and the main binary automatically stay
consistent, that compilation is faster (2m25s vs. 4m16s on my machine)
because all compile units can be built in parallel, and that
compilation takes up less RAM because there is no need to instantiate
all specializations in a single compile unit.
on the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
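A minimal self-contained sketch of the compile-unit-per-specialization technique described above (all names here are made up, not the actual OPM classes):

```cpp
// simulator.hpp
struct BlackoilFluid {};

template <class Fluid>
struct Simulator
{
    int run() { return 0; }
};

// every compile unit that includes this header only sees a declaration ...
extern template struct Simulator<BlackoilFluid>;

// simulator_blackoil.cpp -- one such file per specialization, all part of
// libopmsimulators; the instantiation is compiled exactly once, here:
template struct Simulator<BlackoilFluid>;
```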
this is needed to avoid linker errors if this class is to be used
in multiple compile units. IMO the main problem here is the use of an
_impl.hpp file.
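A sketch of the underlying linker issue, using made-up names:

```cpp
// some_class.hpp
struct SomeClass
{
    void helper();
};

// some_class_impl.hpp -- included from several .cpp files
inline void SomeClass::helper()
{
    // without 'inline' (or a template), every compile unit that includes this
    // header emits its own definition and the linker reports duplicate symbols
}
```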
the performance summary which is printed by `flow_ebos` at the end of
a Norne run now looks like this on my machine:
```
Total time (seconds): 773.757
Solver time (seconds): 753.349
Assembly time (seconds): 377.218 (Failed: 23.537; 6.23965%)
Linear solve time (seconds): 352.022 (Failed: 23.2757; 6.61201%)
Update time (seconds): 16.3658 (Failed: 1.13149; 6.91375%)
Output write time (seconds): 22.5991
Overall Well Iterations: 870 (Failed: 35; 4.02299%)
Overall Linearizations: 2098 (Failed: 136; 6.48236%)
Overall Newton Iterations: 1756 (Failed: 136; 7.74487%)
Overall Linear Iterations: 26572 (Failed: 1786; 6.72136%)
```
for the flow_legacy family, nothing changes.
Previously the substep summary reports were cumulative, which misled the user.
Also, the output was made a little more compact and readable, ensuring that the
numbers line up unless unusually many digits are needed for the times and
iteration counts.
Motivated by
- proliferation of identical code
- need to avoid strange behaviour with "." directory on some boost versions
- potential for further refactoring to avoid boost entirely
needed as the substep summary reports require FIP data to be available.
add calculation of this data if output is requested and the summary
config holds the relevant keywords.
The regex we are using might also match a file named bla.2.blub.
In that case it is not nice to throw an exception; instead we print
a message to std::cerr.
This currently still happens due to the implementation of
OPM_THROW whenever the linear solver does not converge. This
happens quite often, and we might not want the issue tracker to
get overwhelmed by it.
since the unit code within opm-parser is now a drop-in replacement,
this simplifies things and makes them less error-prone.
unfortunately, this requires quite a few PRs. (most are pretty
trivial, though.)
That version does not provide a default constructor for
CollectiveCommunication; therefore we now use
MPIHelper::getCollectiveCommunication() for the default
constructor argument.
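A minimal sketch of the pattern; the class here is hypothetical, only the Dune calls are real:

```cpp
#include <dune/common/parallel/mpihelper.hh>

class ParallelComponent
{
public:
    using Communication =
        Dune::CollectiveCommunication<Dune::MPIHelper::MPICommunicator>;

    // default the argument via MPIHelper instead of relying on a
    // default-constructed CollectiveCommunication
    explicit ParallelComponent(const Communication& comm =
                                   Dune::MPIHelper::getCollectiveCommunication())
        : comm_(comm)
    {}

private:
    Communication comm_;
};
```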
this should not change the value of the result at all (because the
total delta which is added to the phase pressures stays identical),
but it should be less confusing when comparing this with the code that
calculates the gravity correction term in the flux calculation.
maybe it worked as-is, or maybe decks which lead to illegal accesses
to the map are incorrect (i.e., they specify threshold pressures for
EQUIL-regions that do not touch), but let's play it safe here...
... to calculate phase densities for the threshold pressure
defaults. I don't know if the reference simulator does this, but this
makes it consistent with what's done in the flux calculation of flow.
Previously, we also called it when the full time step was done.
As the simulator writes that information anyway and we cannot call
it a sub step, we omit the final write in the adaptive time stepper.
-- avoid using eof()
-- add comments
-- no longer assumes two lines of comments.
-- revert change to default value for timestep.initial_step_length
-- make constructor explicit
-- pass reference
A new timestepper that reads timesteps from a file generated using
ecl_summary "DECK" TIME
and applies them to the simulator.
Also, a parameter timestep.initial_step_length (default 1 day) is added
to control the first timestep.
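A minimal sketch of the parsing step, under the assumption that the file simply contains one report time (in days) per entry; this is not the actual reader:

```cpp
#include <fstream>
#include <string>
#include <vector>

std::vector<double> readStepLengths(const std::string& fileName)
{
    std::ifstream input(fileName);
    std::vector<double> stepLengths;
    double previousTime = 0.0;
    double time = 0.0;
    // stream extraction in the loop condition avoids the eof() pitfall
    while (input >> time) {
        stepLengths.push_back((time - previousTime) * 86400.0); // days -> seconds
        previousTime = time;
    }
    return stepLengths;
}
```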
Changes to BlackoilOutputWriter as mandated by the split and rewrite of
opm-output. Notable changes:
* BlackoilOutputWriter is no longer a child class of OutputWriter.
* Minor interface changes; writeTimeStep requires a Wells pointer
* restore requires a Wells* pointer
* VTK/Matlab support rewrites; no longer inherits OutputWriter
* WellStateFullyImplicitBlackoil::report added, to write its data to a
  format understood by opm-output
Relies on utility/Compat.hpp for quick conversion to the opm-output
defined formats.
This was disabled when the facilities were moved to opm-output.
Now that the simulators are in the opm-simulators module and not
opm-core we can re-enable it.
Have removed the SimulatorState base class and replaced it with
the SimulationDataContainer class from opm-common. The SimulatorState
objects were typically created with a default constructor, and then
explicitly initialized with a SimulatorState::init() method. For the
SimulationDataContainer RAII is employed; the init() method has been
removed and there is no default constructor.
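A contrast sketch with made-up names (not the real class interfaces):

```cpp
// old two-phase style: the object exists, but is unusable until init() is called
class TwoPhaseState
{
public:
    TwoPhaseState() = default;
    void init(int numCells, int numPhases); // easy to forget, easy to call twice
};

// RAII style: a constructed object is always fully initialised
class RaiiState
{
public:
    RaiiState(int numCells, int numPhases)
        : numCells_(numCells)
        , numPhases_(numPhases)
    {}

private:
    int numCells_;
    int numPhases_;
};
```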
the typo caused the surface density of the oil phase to be used
instead of that of gas. This caused the density to be off by a
factor of typically about 900.
using saturated FVFs does not change much, but it does not hurt
because it is also done that way in the simulator.
This makes the defaults for the threshold pressures reasonable again,
but for some reason they are not exactly the same as in the old
implementation. (although the differences are very tolerable.)
On the question why only "Model 2" is affected by this: the other
decks don't use threshold pressures (SPE-X) or do not default any
values (Norne).
the opm-material classes are the ones which are now used by
opm-autodiff and this patch makes it much easier to keep the opm-core
and opm-autodiff results consistent. Also, the opm-material classes
seem to be a bit faster than the opm-core ones (see
https://github.com/OPM/opm-autodiff/pull/576)
I ran the usual array of tests with `flow`: SPE1, SPE3, SPE9 and Norne
all produce the same results at identical runtime (modulo noise),
and "Model 2" also seems to work.
opm-parser#677 changes the return types for the Deck family of classes.
This patch fixes all broken code from that patch set.
https://github.com/OPM/opm-parser/pull/677
I doubt that this will change anything in the binaries (and in my
personal opinion, these 'const's look quite ugly and are sometimes a
(small) annoyance when debugging), but I don't mind using the coding
style used by most of the rest of opm-core here.
we're correcting the pressure at the cell center depths to get the
pressure at the face depth, not the other way around. This is
confusing...
thanks to [at]totto82 for discovering this.
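A sign-convention sketch (not a formula copied from the code), with z measured as depth:

```latex
p_\text{face} = p_\text{cell} + \rho\, g\, \left(z_\text{face} - z_\text{cell}\right)
```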
This needs to be done if an equilibration region transition is
mentioned by the THPRES keyword but no value is given for this record
in the third item. (it seems that this is used quite frequently.)
Also, the approach taken by this patch does not collide with the
restart machinery as far as I can see. This is because the initial
condition is applied by the simulator before the state at the restart
time is loaded. (I interpreted the code that way, but I could be
wrong; could anyone verify this?)
since it is pretty elaborate to calculate the initial condition, this
patch is pretty messy. I also do not know whether Eclipse includes
capillary pressure in this calculation or not (this patch does). Huge
kudos go to [at]totto82 for reviewing, testing and debugging this.
- use the pid time stepping algorithm instead of pid + iter (see the
  formula sketch after this list). Adjusting the time-steps based on the
  number of linear iterations does not currently give any improvement of
  the time-stepping.
- Change the pid tolerance. The time-stepper will take longer time-steps
  and thus reduce the simulation time significantly. The Norne and the SPE
  results do not degrade.
- Less aggressive reduction of time-steps after convergence problems
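For reference, the textbook PID step-size rule has roughly the form below; the error measure and the exact exponents used in the code may differ:

```latex
\Delta t_{n+1} = \Delta t_n
    \left(\frac{e_{n-1}}{e_n}\right)^{k_P}
    \left(\frac{\mathrm{tol}}{e_n}\right)^{k_I}
    \left(\frac{e_{n-1}^2}{e_n\, e_{n-2}}\right)^{k_D}
```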
Previously there was an assertion on whether the time stepping is still
running when querying the time step. After commit 5af794cfd588b this
triggered an assertion failure for Norne. As there is no reason to limit
querying the current time step in this way, this commit simply removes
the assertion in AdaptiveSimulatorTimer::currentStepLength.
This closes OPM/opm-autodiff#446
In a constructor initialisation list, the order should be the same
as the order in which the variables actually are initialised, which
is given by the order they are declared in the class and not by the
order in the initialisation list.
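An illustrative example with a made-up class (GCC and Clang warn about this with -Wreorder):

```cpp
class Example
{
    int a_; // declared first, therefore initialised first
    int b_;

public:
    explicit Example(int b)
        : b_(b)      // listed first, but actually runs second ...
        , a_(b_ + 1) // ... so a_ reads b_ before b_ has been initialised
    {}
};
```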
The only stage where parallelism changes the adaptive time
stepping is when some inner products on the saturation and
pressure are computed.
This commit makes this part parallel by adding an additional boost::any
parameter to the time stepping and the controller. By default it
is empty. In a parallel run it contains a ParallelIstlInformation object
encapsulating the information about the parallelisation. This is then used
to compute the parallel inner product.
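A minimal sketch of the idea; the global reduction is only indicated in a comment, since the real code uses the communicator stored in the ParallelIstlInformation object:

```cpp
#include <boost/any.hpp>

#include <numeric>
#include <vector>

double innerProduct(const std::vector<double>& x,
                    const std::vector<double>& y,
                    const boost::any& parallelInformation = boost::any())
{
    // local contribution of this process
    double result = std::inner_product(x.begin(), x.end(), y.begin(), 0.0);

    if (!parallelInformation.empty()) {
        // parallel run: parallelInformation holds a ParallelIstlInformation
        // object; its communicator would be used here to sum the (owner-only)
        // contributions over all processes
    }
    return result;
}
```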
- max time step parameter
PIDTimeStepControl --> TimeStepControl:
- added simple iteration count time step control
- bug fix in PIDAndIterationCountTimeStepControl
AdaptiveTimeStepping: apply the above changes.
Note that this patch does not introduce any real temperature
dependence but only changes the APIs for the viscosity and for the
density related methods. Note that I also don't like the fact that
this requires so many changes to so many files, but with the current
design of the property classes I cannot see a way to avoid this...
because the name "currentTime()" can be mistaken for the point in
real-life time at which the simulation is run (e.g. March 11, 2014,
15:07:45.123), the _point_ in time which the simulator timer currently
represents (e.g. Jun 5, 1985, 02:33:12.345) instead of the simulator
time in seconds which elapsed since the START date
(e.g. 52633.345 s).
this rename may lead to some fallout in other modules. I'll
fix them after this PR has been merged...
because the name "currentTime()" can be mistaken for the point in
real-life time at which the simulation is run (e.g. March 11, 2014,
15:07:45.123), the _point_ in time which the simulator timer currently
represents (e.g. Jun 5, 1985, 02:33:12.345) instead of the simulator
time in seconds which elapsed since the START date
(e.g. 52633.345 s).
this rename may lead to some fallout in other modules. I'll
fix them after this PR has been merged...
Since SimulatorTimer is a fairly shallow shim when using the TimeMap, it
could also be removed relatively easily. Having said this, that would
trigger _many_ changes in _a lot_ of places and I'm not motivated at
all to fight that battle as long as the old parser needs to be
supported. I thus decided that the best way is to add a "wrapper mode"
to SimulatorTimer...
The step number is zero before the first timestep has been taken, and
one after. The step number is one before the second timestep has been
taken, and two after. This was not clear from the text.