Commit Graph

10318 Commits

Author SHA1 Message Date
Arne Morten Kvarving
e0c664d162 BdaBridge: mark some more parameters maybe_unused 2023-08-15 12:18:41 +02:00
Bård Skaflestad
0960494aeb
Merge pull request #4794 from akva2/avoid_segfault_in_cleanup
fixed: avoid segfault in cleanup if simulator has not been set up
2023-08-15 11:57:26 +02:00
Arne Morten Kvarving
19f446a7a5 fixed: avoid segfault in cleanup if simulator has not been set up 2023-08-15 09:51:41 +02:00
Arne Morten Kvarving
92fa9577da consistently use std::size_t 2023-08-15 09:32:10 +02:00
Arne Morten Kvarving
b0f1e5d3f5 move output error log to LogOutputHelper 2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
e2d4bae78d move output of fip reservoir log to LogOutputHelper 2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
daced47301 move output of fip log to LogOutputHelper 2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
38e9b5a100 changed: move helpers for calculating pressure averages to a separate compile unit
for reuse purposes
2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
f5985ff02f move output of injection log to LogOutputHelper 2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
c9b703f40d move output of production log to LogOutputHelper 2023-08-14 11:44:32 +02:00
Arne Morten Kvarving
91a4701fa4 added: add dedicated class for output of logs
start by moving output of cumulative logs to the new class
2023-08-14 11:44:32 +02:00
Bård Skaflestad
43acfb6142
Merge pull request #4791 from svenn-t/aquifer_h2store
Enable aquifers with H2STORE
2023-08-12 13:42:16 +02:00
Bård Skaflestad
e59a53820a Bring WellContributions Declaration in Scope
This restores the build on machines which enable the BDA bridge,
but which do not have OpenCL installed.
2023-08-11 17:04:24 +02:00
Svenn Tveit
b84837fc61 Moved water phase check outside loop 2023-08-11 15:34:31 +02:00
Svenn Tveit
8b6a504874 Enable aquifers in H2STORE oil/gas version 2023-08-11 13:23:57 +02:00
Arne Morten Kvarving
0883d46d50 rename ISTLSolverEbosWithGpu to ISTLSolverEbosBda
BDA also includes CPU (amgcl) solvers
2023-08-11 11:00:07 +02:00
Arne Morten Kvarving
896cb8484d added: option to disable the BDA solvers 2023-08-11 11:00:07 +02:00
hnil
07fb18422d hopefully fixed compilation and linking problems with WITHGPU 2023-08-11 11:00:07 +02:00
hnil
63b9b01671 fixed include guards 2023-08-11 11:00:07 +02:00
hnil
68322c06e5 added forgotten GPU versions 2023-08-11 11:00:07 +02:00
hnil
d623695d2a - moved all BDA-specific things to a separate class 2023-08-11 11:00:07 +02:00
Atgeirr Flø Rasmussen
1a59c91c51 Silence release-mode warning. 2023-08-09 12:06:20 +02:00
Arne Morten Kvarving
c6f1aa0110
Merge pull request #4765 from hnil/change_poly_alugrid
removed use of hidden private defines for poly and alugrid
2023-08-09 11:15:02 +02:00
hnil
66ff026008 remove use of hidden private defines for poly and alugrid
- fixed polygrid
- renamed executables to include blackoil in name
2023-08-08 15:30:05 +02:00
Kai Bao
c46f60103e adding perf_data comparison in equality operator for SingleWellState 2023-08-08 14:52:49 +02:00
Arne Morten Kvarving
82ba00b4ba remove accidentally left-over member 2023-08-04 15:34:05 +02:00
Atgeirr Flø Rasmussen
840dd9de90
Merge pull request #4752 from hnil/linearsolver_timing
-- added more timing to get better coverage of cpr solver
2023-07-27 15:14:10 +02:00
Markus Blatt
f20716eaf3 Rename LinearTimeSteppingBreakdown to TimeSteppingBreakdown. 2023-07-25 15:10:07 +02:00
Atgeirr Flø Rasmussen
7c9d57cc84 Add code path for the no-MPI case. 2023-07-25 13:20:16 +02:00
Atgeirr Flø Rasmussen
0d2d8dfe21
Merge pull request #4734 from atgeirr/add-linear-system-size-printout
Add output of linear system sizes to DBG file.
2023-07-25 10:51:29 +02:00
Atgeirr Flø Rasmussen
30a9e02998 Add output of linear system sizes to DBG file. 2023-07-25 09:43:51 +02:00
Markus Blatt
943d84c836 Don't write out of bounds (fixes fallout from PR #4750)
While we never use the data received, we should still not write
beyond array bounds, as this may create problems.
2023-07-24 16:04:15 +02:00
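The buffer-sizing point above can be illustrated with a small, hedged sketch; the function name and communication pattern here are assumptions for illustration, not the code touched by this PR. Even when the received data is discarded, the receive buffer must cover the full message.

```cpp
#include <mpi.h>
#include <vector>

// Receive a message whose content we do not use. The buffer is still sized
// to the full message so that MPI never writes out of bounds.
void receiveAndDiscard(const int source, const int tag,
                       const int messageSize, MPI_Comm comm)
{
    std::vector<double> recvBuffer(messageSize); // full size, not truncated
    MPI_Recv(recvBuffer.data(), messageSize, MPI_DOUBLE,
             source, tag, comm, MPI_STATUS_IGNORE);
    // Data is intentionally unused, but no out-of-bounds write occurred.
}
```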
hnil
c065d34d0e -- added more timing to get better coverage of amg solver
-- added includes needed
2023-07-24 12:28:08 +02:00
Markus Blatt
118dfdf041 Move MPI process check to *-cpp file. 2023-07-19 14:05:19 +02:00
Markus Blatt
7551229e77 Do a graceful exit instead of MPI_Abort for expected exceptions.
Instead of unconditionally issuing MPI_Abort if we encounter a fatal
exception, we try to test whether all processes have experienced this
exception; if that is the case, we just terminate normally with an
exit code that signals an error. We still use MPI_Abort if not all
processes get an exception, as this is the only way to make sure that
the program aborts.

This approach also works around issues in some MPI implementations
that might not correctly return the error.

Multiple messages like this are gone now:
```
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[smaug.dr-blatt.de:129359] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[smaug.dr-blatt.de:129359] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```

But we still see something like this:
```
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[35057,1],0]
  Exit code:    1
--------------------------------------------------------------------------
```
2023-07-19 13:44:12 +02:00
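A minimal sketch of the termination scheme described above, assuming every rank reaches the check after catching its exception; the function layout and the single summed flag are illustrative, not the actual OPM implementation.

```cpp
#include <mpi.h>
#include <cstdlib>
#include <exception>
#include <iostream>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int localFailed = 0;
    try {
        // ... run the simulator ...
    } catch (const std::exception& e) {
        std::cerr << "Simulation aborted: " << e.what() << '\n';
        localFailed = 1;
    }

    // Count how many ranks caught the (expected) fatal exception.
    int commSize = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &commSize);
    int numFailed = 0;
    MPI_Allreduce(&localFailed, &numFailed, 1, MPI_INT, MPI_SUM,
                  MPI_COMM_WORLD);

    if (numFailed == commSize) {
        // Every rank failed: terminate normally and signal the error
        // through the exit code instead of calling MPI_Abort.
        MPI_Finalize();
        return EXIT_FAILURE;
    }
    if (numFailed > 0) {
        // Only some ranks failed: MPI_Abort is the only reliable way to
        // make sure the whole job terminates.
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    MPI_Finalize();
    return EXIT_SUCCESS;
}
```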
Bård Skaflestad
ac6b9b2f34
Merge pull request #4748 from plgbrts/gconprod
Enable items 11, 12 and 13 of GCONPROD
2023-07-14 21:30:14 +02:00
Paul
ae553787ba Use struct for collecting group limit actions 2023-07-14 11:20:03 +02:00
Paul
1796b3343b improved message texts 2023-07-13 13:15:54 +02:00
Paul
7302c37b78 Enable items 11, 12 and 13 of GCONPROD 2023-07-12 20:44:10 +02:00
Bård Skaflestad
8c9682ab7a Split Well and Group Initialization Out to Helper
In preparation for adding support for opening/creating wells or
groups in the middle of a report step.  This is needed if an
ACTIONX block runs something like WELOPEN or WELSPECS/COMPDAT.
2023-07-12 17:23:14 +02:00
Bård Skaflestad
e965f6f27f Prune Unused Well State Parameter
The WellState parameter in setCmodeGroup() became unused when we
split the GroupState out of the WellState in commit e1d117c59f.
2023-07-12 17:23:14 +02:00
Markus Blatt
fc9b1cccce Improve error message when time step is cut too often/much.
Changes
```
Program threw an exception: [/home/mblatt/src/dune/opm/opm-simulators/opm/simulators/timestepping/AdaptiveTimeSteppingEbos.hpp:586] Solver failed to converge after cutting timestep 11 times.
```
to
```
Simulation aborted: Solver failed to converge after cutting timestep 11 times.
```

which seems more user-friendly.
2023-07-12 16:18:29 +02:00
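As an illustrative sketch only (not the actual AdaptiveTimeSteppingEbos code), the friendlier message can be produced by catching the exception at the top level and reporting its text without the internal file/line prefix.

```cpp
#include <exception>
#include <iostream>

// Runs the supplied simulation loop and converts an internal exception
// into the user-facing message shown above.
template <typename SimulatorLoop>
int runGuarded(SimulatorLoop&& loop)
{
    try {
        loop();          // may throw after too many timestep cuts
        return 0;
    } catch (const std::exception& e) {
        // Report the reason without the internal [file:line] prefix.
        std::cerr << "Simulation aborted: " << e.what() << std::endl;
        return 1;
    }
}
```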
Bård Skaflestad
7b880727b5 Declare Support for WPAVE/WBPn
We emit a warning if the model uses connection flag 'ALL', but
continue the run.  This behaviour is still being debated and we
may decide to halt the run in this situation.
2023-07-11 11:29:08 +02:00
Bård Skaflestad
7f89276fe8 Hook New WBPn Calculation Up to Well Model
This commit activates the support for calculating WBPn summary
result values per well in parallel.  To affect the calculation we
add two new data members in BlackoilWellModelGeneric:

  - conn_idx_map_:
    Maps a well's connection index (0..getConnections().size() - 1) to
    connections on the current rank.  Its local() function returns
    negative one (-1) if the connection is not on the current rank,
    and a non-negative value otherwise.  The global() function maps
    well connections on the current rank to the global connection ID
    for each well, effectively the reverse of local().  Finally, the
    open() function maps well connections on the current rank to
    open/flowing connections on the current rank, returning negative
    one (-1) if the connection is not flowing.

  - wbpCalculationService:
    Parallel collection of WBPn calculation objects that knows how
    to exchange source and result information between all ranks in a
    communicator.  Also handles distributed wells.

We furthermore need a way to compute connection-level fluid mixture
density values.  For the standard well class we add a way to access
the StandardWellConnection's 'perf_densities_' values.  However,
since these are defined for open/flowing connections only, this
means we're not able to fully meet the requirements of the

  WELL/ALL

WPAVE depth correction procedure for standard wells.  The
multi-segmented well type, on the other hand, uses the fluid mixture
density in the associated well segment and is therefore well defined
for ALL connections.  OPEN well connections are supported for both
well types.
2023-07-10 13:42:46 +02:00
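A hypothetical sketch of an index map with the local()/global()/open() semantics described above; the class name ConnIndexMap and the addLocalConnection() method are assumptions, not the actual BlackoilWellModelGeneric member.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical connection index map (illustrative only).
class ConnIndexMap
{
public:
    // 'numWellConns' is the well's total number of connections,
    // i.e. getConnections().size().
    explicit ConnIndexMap(const std::size_t numWellConns)
        : local_(numWellConns, -1)
    {}

    // Register global connection 'globalConnIdx' as present on this rank;
    // 'isOpen' states whether it is open/flowing.
    void addLocalConnection(const std::size_t globalConnIdx, const bool isOpen)
    {
        const int localIdx = static_cast<int>(global_.size());
        local_[globalConnIdx] = localIdx;
        global_.push_back(static_cast<int>(globalConnIdx));
        open_.push_back(isOpen ? numOpen_++ : -1);
    }

    // -1 if the connection is not on this rank, local index otherwise.
    int local(const std::size_t globalConnIdx) const
    { return local_[globalConnIdx]; }

    // Global connection ID of a local connection (reverse of local()).
    int global(const std::size_t localConnIdx) const
    { return global_[localConnIdx]; }

    // Index among open/flowing connections on this rank; -1 if not flowing.
    int open(const std::size_t localConnIdx) const
    { return open_[localConnIdx]; }

private:
    std::vector<int> local_;   // global connection index -> local index (or -1)
    std::vector<int> global_;  // local index -> global connection index
    std::vector<int> open_;    // local index -> open-connection index (or -1)
    int numOpen_ = 0;
};
```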
Bård Skaflestad
ff9e6ca18a
Merge pull request #4745 from akva2/filtercake_separate_class
FilterCake: put code in separate class
2023-07-07 16:51:23 +02:00
Arne Morten Kvarving
60b92d02eb WellFilterCake: make stateful 2023-07-07 16:08:42 +02:00
Arne Morten Kvarving
dcf8a444fd changed: put calculation of filter cake multiplier in WellFilterCake 2023-07-07 16:08:20 +02:00
Arne Morten Kvarving
aaeedf4091 put updating of FilterCake multiplier in separate method 2023-07-07 16:08:20 +02:00
Arne Morten Kvarving
1e7ca08702 changed: put handling of filtration particle volume in separate class 2023-07-07 16:08:17 +02:00
Bård Skaflestad
95d715b807 Add Parallel Calculation Support for WBPn/WPAVE
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class.  This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.

We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information from PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
2023-07-07 15:01:05 +02:00
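A minimal sketch of the cross-rank aggregation idea behind such a parallel block-average pressure calculation; the function name and the weighted-sum representation are assumptions, not the actual PAvgCalculator interface.

```cpp
#include <mpi.h>

// Each rank accumulates weighted pressure contributions from the well
// connections it owns; the totals are then combined across all ranks so
// that wells distributed across ranks contribute exactly once.
double parallelWeightedAveragePressure(const double localWeightedPressureSum,
                                       const double localWeightSum,
                                       MPI_Comm comm)
{
    double local[2] = { localWeightedPressureSum, localWeightSum };
    double global[2] = { 0.0, 0.0 };

    // Sum both the weighted pressures and the weights over all ranks.
    MPI_Allreduce(local, global, 2, MPI_DOUBLE, MPI_SUM, comm);

    return (global[1] > 0.0) ? global[0] / global[1] : 0.0;
}
```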