Commit Graph

16325 Commits

Author SHA1 Message Date
Arne Morten Kvarving
82ba00b4ba remove accidentally left-over member 2023-08-04 15:34:05 +02:00
Markus Blatt
81c6eac6a7
Merge pull request #4766 from akva2/eclsolution_containers
Add containers for polymer and MICP solution components
2023-08-02 09:33:54 +02:00
Arne Morten Kvarving
eaa3281485 changed: add a container for micp solution components
makes it easy to pass data around to enable some refactoring
2023-08-01 13:45:29 +02:00
Arne Morten Kvarving
841d11efed changed: add a container for polymer solution components
makes it easy to pass data around to enable some refactoring
2023-08-01 13:45:14 +02:00
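As an illustration of the containers described in the two commits above (merged in PR #4766), here is a minimal sketch of a plain aggregate grouping per-cell MICP solution components. The field names are hypothetical; the actual classes added in the PR may be structured differently.
```cpp
// Hypothetical sketch of a container for MICP solution components.
// Field names are illustrative only; the real classes may differ.
#include <cstddef>
#include <vector>

struct MICPSolutionContainer
{
    std::vector<double> microbialConcentration;
    std::vector<double> oxygenConcentration;
    std::vector<double> ureaConcentration;
    std::vector<double> biofilmConcentration;
    std::vector<double> calciteConcentration;

    // Size every component vector to the number of grid cells.
    void resize(std::size_t numCells)
    {
        microbialConcentration.resize(numCells, 0.0);
        oxygenConcentration.resize(numCells, 0.0);
        ureaConcentration.resize(numCells, 0.0);
        biofilmConcentration.resize(numCells, 0.0);
        calciteConcentration.resize(numCells, 0.0);
    }
};
```
Grouping the components in one object lets a single reference be passed through the solution-output code paths instead of five parallel vectors.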
Atgeirr Flø Rasmussen
2728d30185
Merge pull request #4764 from atgeirr/add-ebos-headers
Add all ebos headers to public header list.
2023-08-01 12:11:14 +02:00
Atgeirr Flø Rasmussen
2ddbfc9519 Add all ebos headers to public header list. 2023-08-01 11:12:21 +02:00
Arne Morten Kvarving
1e6b7f889a
Merge pull request #4761 from hnil/fix_temp_boundary
Fix temp boundary
2023-08-01 09:02:26 +02:00
hnil
f3338ac26a -- fixed error in thermal boundary 2023-07-27 16:30:48 +02:00
hnil
de42e1eb67 -- fixed comment 2023-07-27 16:30:48 +02:00
Atgeirr Flø Rasmussen
840dd9de90
Merge pull request #4752 from hnil/linearsolver_timing
-- added more timing to get better coverage of cpr solver
2023-07-27 15:14:10 +02:00
Markus Blatt
2429a8ad1b
Merge pull request #4756 from blattms/rename-time-step-breakdown
Rename LinearTimeSteppingBreakdown to TimeSteppingBreakdown.
2023-07-26 09:49:28 +02:00
Markus Blatt
f20716eaf3 Rename LinearTimeSteppingBreakdown to TimeSteppingBreakdown. 2023-07-25 15:10:07 +02:00
Atgeirr Flø Rasmussen
d4774cc36e
Merge pull request #4754 from atgeirr/fix-comm-related-regression
Add code path for the no-MPI case.
2023-07-25 13:49:30 +02:00
Atgeirr Flø Rasmussen
7c9d57cc84 Add code path for the no-MPI case. 2023-07-25 13:20:16 +02:00
Atgeirr Flø Rasmussen
0d2d8dfe21
Merge pull request #4734 from atgeirr/add-linear-system-size-printout
Add output of linear system sizes to DBG file.
2023-07-25 10:51:29 +02:00
Atgeirr Flø Rasmussen
30a9e02998 Add output of linear system sizes to DBG file. 2023-07-25 09:43:51 +02:00
Atgeirr Flø Rasmussen
4ad4226fdd
Merge pull request #4753 from blattms/fix-fallout-4750
Don't write out of bounds (fixes fallout from PR #4750)
2023-07-24 21:39:21 +02:00
Markus Blatt
943d84c836 Don't write out of bounds (fixes fallout from PR #4750)
While we never use the data received, we should still not write beyond
the arrays as this may create problems.
2023-07-24 16:04:15 +02:00
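A minimal sketch of the kind of guard this fix implies (names hypothetical, not the actual opm-simulators code): clamp the copy into the receive buffer so nothing is written past its end, even when the received data is never used afterwards.
```cpp
// Hypothetical sketch: copy received values into a fixed-size buffer
// without ever writing beyond the buffer's end.
#include <algorithm>
#include <cstddef>
#include <vector>

void copyReceived(const std::vector<double>& received,
                  std::vector<double>& buffer)
{
    const std::size_t n = std::min(received.size(), buffer.size());
    std::copy_n(received.begin(), n, buffer.begin());
}
```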
hnil
c065d34d0e -- added more timing to get better coverage of amg solver
-- added includes needed
2023-07-24 12:28:08 +02:00
Bård Skaflestad
2aaba39374
Merge pull request #4751 from blattms/debian-patches-gcc-13
Added missing include of cstdint needed by GCC-13
2023-07-24 12:26:00 +02:00
Markus Blatt
313e9540c5 Added missing include of cstdint needed by GCC-13 2023-07-24 10:59:19 +02:00
Markus Blatt
941e4230c1
Merge pull request #4750 from blattms/prevent-mpi-abort
Do a graceful exit instead of MPI_Abort for expected exceptions.
2023-07-20 18:52:47 +02:00
Markus Blatt
118dfdf041 Move MPI process check to *-cpp file. 2023-07-19 14:05:19 +02:00
Markus Blatt
859e00254e [bugfix] Make sure MPI_Finalize is called before the return in main.
A correct MPI program should do that.

As MPI_Finalize is part of the destructor of MainObject, we need to
make sure that its destructor is called before the return statement.
We do that manually by resetting the unique_ptr that we now use to
store the MainObject.
2023-07-19 13:44:12 +02:00
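A simplified sketch of the pattern this commit describes (not the actual Main class from opm-simulators): the object that finalizes MPI in its destructor is held in a unique_ptr and reset explicitly before main() returns, so MPI_Finalize runs before the return statement.
```cpp
// Illustrative sketch only; the real MainObject in opm-simulators does
// far more than initialize and finalize MPI.
#include <memory>
#include <mpi.h>

struct MainObject
{
    MainObject(int argc, char** argv) { MPI_Init(&argc, &argv); }
    ~MainObject() { MPI_Finalize(); }
    int run() { return 0; } // stand-in for the real simulation driver
};

int main(int argc, char** argv)
{
    auto mainObject = std::make_unique<MainObject>(argc, argv);
    const int status = mainObject->run();
    mainObject.reset(); // ensure MPI_Finalize runs before we return
    return status;
}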
Markus Blatt
7551229e77 Do a graceful exit instead of MPI_Abort for expected exceptions.
Instead of unconditionally issuing MPI_Abort if we encounter a fatal
exception, we try to test whether all processes have experienced this
exception and, if this is the case, just terminate normally with an
exit code that signals an error. We still use MPI_Abort if not all
processes get an exception, as this is the only way to make sure that
the program aborts.

This approach also works around issues in some MPI implementations
that might not correctly return the error.

Multiple messages like this are gone now:
```
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[smaug.dr-blatt.de:129359] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[smaug.dr-blatt.de:129359] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```

But we still see something like this:
```
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[35057,1],0]
  Exit code:    1
--------------------------------------------------------------------------
```
2023-07-19 13:44:12 +02:00
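The agreement check described above could look roughly like the following sketch (illustrative only, not the actual opm-simulators code): every rank reaches a common checkpoint and reports whether it caught a fatal exception; MPI_Abort is used only when the failure is not shared by all ranks.
```cpp
// Sketch of the decision logic: all ranks call this after the step.
#include <cstdlib>
#include <mpi.h>

void terminateOnFatalException(bool thisRankFailed, MPI_Comm comm)
{
    int local = thisRankFailed ? 1 : 0;
    int minFlag = 0;
    int maxFlag = 0;
    MPI_Allreduce(&local, &minFlag, 1, MPI_INT, MPI_MIN, comm);
    MPI_Allreduce(&local, &maxFlag, 1, MPI_INT, MPI_MAX, comm);

    if (maxFlag == 0)
        return;                      // no rank failed: keep running
    if (minFlag == 1) {              // every rank failed
        MPI_Finalize();
        std::exit(EXIT_FAILURE);     // graceful exit, non-zero status
    }
    MPI_Abort(comm, EXIT_FAILURE);   // mixed outcome: abort is unavoidable
}
```
Using a collective agreement like this requires that every rank actually reaches the checkpoint; that is why MPI_Abort is kept for the case where only some ranks fail.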
Bård Skaflestad
ac6b9b2f34
Merge pull request #4748 from plgbrts/gconprod
Enable items 11, 12 and 13 of GCONPROD
2023-07-14 21:30:14 +02:00
Paul
ae553787ba Use struct for collecting group limit actions 2023-07-14 11:20:03 +02:00
Paul
1796b3343b improved message text 2023-07-13 13:15:54 +02:00
Markus Blatt
5b41c3eca8
Merge pull request #4747 from bska/extract-well-group-initialization
Extract Well and Group Initialization
2023-07-13 10:17:01 +02:00
Paul
7302c37b78 Enable items 11, 12 and 13 of GCONPROD 2023-07-12 20:44:10 +02:00
Bård Skaflestad
8c9682ab7a Split Well and Group Initialization Out to Helper
In preparation for adding support for opening/creating wells or
groups in the middle of a report step.  This is needed if an
ACTIONX block runs something like WELOPEN or WELSPECS/COMPDAT.
2023-07-12 17:23:14 +02:00
Bård Skaflestad
e965f6f27f Prune Unused Well State Parameter
The WellState parameter in setCmodeGroup() became unused when we
split the GroupState out of the WellState in commit e1d117c59f.
2023-07-12 17:23:14 +02:00
Bård Skaflestad
81eee81291
Merge pull request #4746 from blattms/better-message-timestep-breakdown
Improve error message when time step is cut too often/much.
2023-07-12 17:04:17 +02:00
Markus Blatt
fc9b1cccce Improve error message when time step is cut too often/much.
Changes
```
Program threw an exception: [/home/mblatt/src/dune/opm/opm-simulators/opm/simulators/timestepping/AdaptiveTimeSteppingEbos.hpp:586] Solver failed to converge after cutting timestep 11 times.
```
to
```
Simulation aborted: Solver failed to converge after cutting timestep 11 times.
```

which seems more user-friendly.
2023-07-12 16:18:29 +02:00
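A simplified sketch of the reporting pattern behind this change (the TimeSteppingBreakdown name comes from PR #4756; the exception definition and surrounding code here are stand-ins, not the actual AdaptiveTimeSteppingEbos code): catch the expected exception near the top level and print only a short, user-facing message.
```cpp
#include <cstdlib>
#include <iostream>
#include <stdexcept>

// Stand-in definition for illustration; only the type name is taken
// from the PR.
struct TimeSteppingBreakdown : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

int main()
{
    try {
        // Stand-in for the adaptive time stepping loop.
        throw TimeSteppingBreakdown(
            "Solver failed to converge after cutting timestep 11 times.");
    }
    catch (const TimeSteppingBreakdown& e) {
        // Short, user-facing report instead of a file/line-prefixed trace.
        std::cerr << "Simulation aborted: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
}
```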
Bård Skaflestad
9f87301ff4
Merge pull request #4695 from bska/wbp-declare-support
Declare Support for WPAVE/WBPn
2023-07-11 15:06:29 +02:00
Bård Skaflestad
7b880727b5 Declare Support for WPAVE/WBPn
We emit a warning if the model uses connection flag 'ALL', but
continue the run.  This behaviour is still being debated and we
may decide to halt the run in this situation.
2023-07-11 11:29:08 +02:00
Bård Skaflestad
aa988a88a9
Merge pull request #4694 from bska/wbp-compute-values
Hook New WBPn Calculation Up to Well Model
2023-07-11 11:25:28 +02:00
Bård Skaflestad
7f89276fe8 Hook New WBPn Calculation Up to Well Model
This commit activates the support for calculating WBPn summary
result values per well in parallel.  To effect the calculation we
add two new data members to BlackoilWellModelGeneric:

  - conn_idx_map_:
    Maps a well's connection index (0..getConnections().size() - 1) to
    connections on the current rank.  Its local() function returns
    negative one (-1) if the connection is not on the current rank,
    and a non-negative local index otherwise.  The global() function
    maps well connections on the current rank to the global connection
    ID for each well; effectively the reverse of local().  Finally,
    the open() function maps well connections on the current rank to
    open/flowing connections on the current rank, returning negative
    one (-1) if the connection is not flowing.

  - wbpCalculationService:
    Parallel collection of WBPn calculation objects that knows how
    to exchange source and result information between all ranks in a
    communicator.  Also handles distributed wells.

We furthermore need a way to compute connection-level fluid mixture
density values.  For the standard well class we add a way to access
the StandardWellConnection's 'perf_densities_' values.  However,
since these are defined for open/flowing connections only, this
means we're not able to fully meet the requirements of the

  WELL/ALL

WPAVE depth correction procedure for standard wells.  The
multi-segmented well type, on the other hand, uses the fluid mixture
density in the associated well segment and is therefore well defined
for ALL connections.  OPEN well connections are supported for both
well types.
2023-07-10 13:42:46 +02:00
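A hypothetical sketch of the contract described for conn_idx_map_ (the real implementation in BlackoilWellModelGeneric is more involved, and the indexing details here are simplified):
```cpp
// Sketch only: mirrors the local()/global()/open() contract from the
// commit message, not the actual data member.
#include <vector>

class ConnectionIndexMap
{
public:
    // connIdx runs over 0 .. getConnections().size() - 1 for the well.
    int local(int connIdx) const   { return local_[connIdx]; }
    int open(int connIdx) const    { return open_[connIdx]; }
    // localIdx runs over the connections present on the current rank.
    int global(int localIdx) const { return global_[localIdx]; }

private:
    std::vector<int> local_;   // -1 if connection not on this rank
    std::vector<int> open_;    // -1 if connection not open/flowing
    std::vector<int> global_;  // reverse mapping of local()
};
```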
Bård Skaflestad
ff9e6ca18a
Merge pull request #4745 from akva2/filtercake_separate_class
FilterCake: put code in separate class
2023-07-07 16:51:23 +02:00
Arne Morten Kvarving
60b92d02eb WellFilterCake: make stateful 2023-07-07 16:08:42 +02:00
Arne Morten Kvarving
dcf8a444fd changed: put calculation of filter cake multiplier in WellFilterCake 2023-07-07 16:08:20 +02:00
Arne Morten Kvarving
aaeedf4091 put updating of FilterCake multiplier in separate method 2023-07-07 16:08:20 +02:00
Arne Morten Kvarving
1e7ca08702 changed: put handling of filtration particle volume in separate class 2023-07-07 16:08:17 +02:00
Bård Skaflestad
ebde4ee308
Merge pull request #4693 from bska/wbp-parallel-calculator
Add Parallel Calculation Support for WBPn/WPAVE
2023-07-07 15:41:06 +02:00
Bård Skaflestad
95d715b807 Add Parallel Calculation Support for WBPn/WPAVE
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class.  This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.

We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information from PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
2023-07-07 15:01:05 +02:00
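The core parallel step can be illustrated roughly as follows (a sketch only, not the actual PAvgCalculator/ParallelWBPCalculation code): each rank accumulates weighted pressure contributions from the connections it owns, and a single MPI reduction forms the well-wide sums from which a WBPn-style average is computed.
```cpp
// Illustrative accumulator for a distributed well's pressure average.
#include <mpi.h>

struct WBPAccumulator
{
    double weightedPressure = 0.0;
    double totalWeight = 0.0;

    // Called for each connection owned by this rank.
    void addLocal(double pressure, double weight)
    {
        weightedPressure += weight * pressure;
        totalWeight += weight;
    }

    // Aggregate over all ranks and form the weighted average.
    double globalAverage(MPI_Comm comm) const
    {
        double local[2] = { weightedPressure, totalWeight };
        double global[2] = { 0.0, 0.0 };
        MPI_Allreduce(local, global, 2, MPI_DOUBLE, MPI_SUM, comm);
        return (global[1] > 0.0) ? global[0] / global[1] : 0.0;
    }
};
```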
Kai Bao
57532195da
Merge pull request #4669 from steink/explicit_vfp_fallback
Include fallback to explicit vfp lookup for tiny rates - testing
2023-07-07 14:55:56 +02:00
Stein Krogstad
77397b0e28 Add tuning for WGRUPCON regression tests 2023-07-07 13:19:38 +02:00
Stein Krogstad
b1c11f6d88 Move function to WellInterfaceGeneric 2023-07-07 13:13:43 +02:00
Stein Krogstad
252d08f1bd No need to loop over phases here 2023-07-07 13:13:43 +02:00
Stein Krogstad
2f8d210896 Also do explicit fallback for double-interp 2023-07-07 13:13:43 +02:00