- use imported target for linking
- use separate damaris cmake script
- handle HAVE_DAMARIS config variable in the usual way
fixing issues when the user does not provide an outputDir via the command line
avoid adding Damaris's command-line options when Damaris is not available
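A rough sketch of the CMake side of the items above, assuming the Damaris installation provides a package config and an imported target; the names `Damaris` and `damaris::damaris` are assumptions and may differ from what is actually exported:
```
# Hypothetical package and target names; only the overall pattern follows
# the items above.
find_package(Damaris)
if(Damaris_FOUND)
  set(HAVE_DAMARIS 1)  # ends up in config.h like the other HAVE_* switches
  target_link_libraries(opmsimulators damaris::damaris)  # link the imported target
endif()
```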
Damaris initialization is added after InitMpi but before starting the simulation.
Damaris will dedicate a separate core to parallel writing and leave the remaining
cores to the simulator. The main changes are in main, where start_damaris is
called, and in the eclwriter, where we use Damaris to output PRESSURE. To test
Damaris one can use --enable-damaris-output=true, and to use parallel HDF5 one
can use --enable-async-damaris-output=true (false is the default choice).
This is only instantiated for two-phase gas/oil and for three-phase blackoil.
Runtime safeguards have been added to avoid the mistake of running with
a simulator combination that silently ignores DIFFUSE.
Convert the Python opm package from a regular package to a namespace
package such that opm-simulators and opm-common can contribute to the
package from different filesystem paths. In this way, the two packages
opm.simulators and opm.io (in opm-common) can have different parent
filesystem paths.
Kernel files are located in opm/simulators/linalg/bda/opencl/kernels.
CMake combines them into ${PROJECT_BINARY_DIR}/clSources.cpp, which
becomes part of the library.
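A minimal sketch of how such a concatenation step can be done in CMake; the variable names and the `clSources.cpp.in` template are illustrative, not necessarily the ones used by the build system:
```
# Collect all kernel files and paste their contents into one variable.
file(GLOB CL_KERNELS ${PROJECT_SOURCE_DIR}/opm/simulators/linalg/bda/opencl/kernels/*.cl)
set(CL_SOURCES "")
foreach(kernel ${CL_KERNELS})
  file(READ ${kernel} kernel_text)
  string(APPEND CL_SOURCES "${kernel_text}\n")
endforeach()
# clSources.cpp.in is assumed to wrap @CL_SOURCES@ in a C++ string literal.
configure_file(clSources.cpp.in ${PROJECT_BINARY_DIR}/clSources.cpp @ONLY)
```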
The class ISTLSolverEbos has all the features of the removed class and
is not much more complex. flow_blackoil_dunecpr is the only program
using it, and is itself redundant.
this is very convenient during development.
we can then remove the FLOW_BLACKOIL_ONLY option, as it is no longer
needed: use the flow_blackoil binary instead. However, we need to keep
this support in Main.hpp because the Python bindings rely on it.
We still request Standard version 1.2 only.
We need to use KernelFunctor instead of make_kernel.
In addition, cl::Sources now works on std::string and no longer
supports std::pair<const char*, ::size_t>.
Unfortunately, we cannot use the imported targets. They add some compile
parameters via generator expressions based on the CXX_COMPILER_ID.
While we use the system CXX compiler for most of the code, some CUDA
code is compiled with nvcc, which, at least for some versions, does not
support -Wno-catch-value (which gets passed as a normal compiler
option).
There is no AMGCL_INCLUDE_DIRS when using find_package. We now query
the target amgcl::amgcl for INTERFACE_INCLUDE_DIRECTORIES and store the
result in AMGCL_INCLUDE_DIRS.
Note that we cannot link the amgcl::amgcl target to libopmsimulators, as
this sets the -fopenmp flag for all the source files and makes
compilation with nvcc fail.
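The query could look roughly like this sketch; only the standard `get_target_property` command and the `INTERFACE_INCLUDE_DIRECTORIES` property are assumed:
```
find_package(amgcl)
if(TARGET amgcl::amgcl)
  # Store the target's include directories in the variable the rest of the
  # build system expects.
  get_target_property(AMGCL_INCLUDE_DIRS amgcl::amgcl INTERFACE_INCLUDE_DIRECTORIES)
endif()
```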
building a whole simulator for this, and then not even
running a test for it, seems rather excessive. if a test for
index-conformance is wanted, a better approach should be taken.
Fixes:
```
CMake Error at CMakeLists.txt:458 (target_link_libraries):
  The keyword signature for target_link_libraries has already been used with
  the target "opmsimulators". All uses of target_link_libraries with a
  target must be either all-keyword or all-plain.
  The uses of the keyword signature are here:
   * /var/lib/jenkins/workspace/opm-common-PR-builder/mpi/install/share/opm/cmake/Modules/OpmCompile.cmake:61 (target_link_libraries)
-- Configuring incomplete, errors occurred!
```
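For illustration, the CMake rule behind the error: once a target has been linked with the keyword signature, every later `target_link_libraries` call for that target must use it as well (library names here are only examples):
```
target_link_libraries(opmsimulators PUBLIC opmcommon)  # keyword signature
target_link_libraries(opmsimulators opmgrid)           # plain signature -> CMake error
```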
We actually already require at least CMake 2.8.12 due to the embedded
pybind11 (some of its tests even require 3.0). Anyway, as Ubuntu LTS
ships 3.10.2, I doubt that anything older is tested by us.
Also, tentative changes to compile the FPGA library from a different module: this part needs to be revised because it assumes a fixed path for the OPM/FPGA module.
Modified CMakeLists_files.cmake to remove files moved to the OPM/FPGA module.
If fmtlib is present on the system, we use that one in normal mode
(not header-only). Otherwise we fall back to the embedded one in
header-only mode.
Searching for the library is done in opm-common.
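A rough sketch of that fallback logic; the actual `find_package` call lives in opm-common and the embedded include path is illustrative:
```
find_package(fmt QUIET)
if(fmt_FOUND)
  # System fmt in normal (compiled) mode.
  target_link_libraries(opmsimulators fmt::fmt)
else()
  # Embedded copy, header-only.
  target_compile_definitions(opmsimulators PUBLIC FMT_HEADER_ONLY)
  target_include_directories(opmsimulators SYSTEM PUBLIC ${PROJECT_SOURCE_DIR}/external/fmtlib/include)
endif()
```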
The executable is named flow_distribute_z and uses external
load-balancing information. It can be used to test the distributed
standard wells on SPE9 with 4 or more processes.
due to bugs in openmpi on bionic, this test fails to execute
properly in pbuilder environments. instead of rebuilding openmpi
without dynamic loading (which is the suggested fix) and potentially
breaking users' systems, this is a non-intrusive workaround to be used
for packaging.
also add an explicit option for python support to make it visible in
cmake frontends.
Currently the simulator creates the polyhedral grid from an eclGrid from opm-common.
TODO
- make it possible to create the grid directly from DGF or MRST format
- fix issue on Norne.
At least on Debian 10 the standard C++ compiler is g++-8, but CUDA
only supports g++-7, and our CMake test for CUDA reported an error for
that case/combination, which is quite annoying.
The reason was that check_language(CUDA) did not honor the
CMAKE_CUDA_FLAGS variable and always used the default g++-7, but
enable_language(CUDA) did honor it. As we do set the underlying host
compiler, the former reported that CUDA is available while the latter
marked the CUDA compiler as broken.
With this commit we work around this by setting the environment
variable ENV{CUDAHOSTCXX}, which nvcc will use. Hence we now only try
to enable CUDA if it is compatible with the C++ compiler.
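A condensed sketch of the workaround (error handling omitted):
```
# Tell nvcc which host compiler to use before probing for CUDA support.
set(ENV{CUDAHOSTCXX} ${CMAKE_CXX_COMPILER})
include(CheckLanguage)
check_language(CUDA)
if(CMAKE_CUDA_COMPILER)
  enable_language(CUDA)
endif()
```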
It seems like the VERSION_GREATER_EQUAL operator for boolean
expressions was only introduced after CMake 3.6, and hence the current
check for whether to activate CUDA is broken in version 3.6 and below.
This PR fixes this by using VERSION_GREATER.
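A sketch of the change; the exact version threshold used in the real check is not reproduced here:
```
# Old form, not understood by CMake <= 3.6:
#   if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.8)
if(CMAKE_VERSION VERSION_GREATER 3.7.99)
  # ... try to activate CUDA ...
endif()
```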
Closes #2375.
We experienced weird linker errors when using a host compiler version
for compilation that was not supported by the nvcc used to compile the
CUDA code:
```
[ 15%] Linking CXX executable bin/test_timer
/usr/bin/ld: /home/mblatt/src/dune/opm-2.6/opm-common/opm-seq/lib/libopmcommon.a(Parser.cpp.o): in function `Opm::(anonymous namespace)::file& std::vector<Opm::(anonymous namespace)::file, std::allocator<Opm::(anonymous namespace)::file> >::emplace_back<std::filesystem::__cxx11::path&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&>(std::filesystem::__cxx11::path&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&)':
Parser.cpp:(.text+0x1096): undefined reference to `std::filesystem::__cxx11::path::_M_split_cmpts()'
/usr/bin/ld: Parser.cpp:(.text+0x10ad): undefined reference to `std::filesystem::__cxx11::path::_M_split_cmpts()'
/usr/bin/ld: /home/mblatt/src/dune/opm-2.6/opm-common/opm-seq/lib/libopmcommon.a(Parser.cpp.o): in function `Opm::(anonymous namespace)::ParserState::loadFile(std::filesystem::__cxx11::path const&)':
Parser.cpp:(.text+0x23a1): undefined reference to `std::filesystem::canonical(std::filesystem::__cxx11::path const&)'
/usr/bin/ld: Parser.cpp:(.text+0x24e0): undefined reference to
`std::filesystem::__cxx11::path::_M_split_cmpts()'
```
The reason turned out to be that the library path was built up from
paths of the old (g++-7) compiler used by nvcc and of the actual
(newer) compiler g++-8. This completely messed up the linker paths for
CMake.
To detect this situation already when running CMake, we have resorted
to first setting CMAKE_CUDA_FLAGS to force nvcc to use the host
compiler and to activating CUDA (if available) before calling
`find_package(CUDA)`. If the host compiler is not supported, CMake will
error out during `enable_language(CUDA)`.
Note that we still use (deprecated) FindCUDA later to determine the
libraries to link to.
The user has the option to either deactivate CUDA by setting
`-DCMAKE_DISABLE_FIND_PACKAGE_CUDA=ON` or to use a compiler supported
by nvcc (setting `-DCMAKE_CXX_COMPILER=compiler`).
Additionally, we do not try to activate CUDA if the CMake version is
< 3.8. Please note that previously CMake would have errored out here
anyway, since we used the unsupported `enable_language(CUDA)` even in
this case.
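A condensed sketch of that detection order; flag handling is simplified and the "if available" check is omitted:
```
if(NOT CMAKE_VERSION VERSION_LESS 3.8)
  # Force nvcc to use the configured host compiler.
  set(CMAKE_CUDA_FLAGS "-ccbin ${CMAKE_CXX_COMPILER}")
  enable_language(CUDA)  # errors out here if nvcc cannot use the host compiler
endif()
# The (deprecated) FindCUDA module is still used afterwards to determine the
# libraries to link to.
find_package(CUDA)
```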
Closes #2363.
`mebos` works similarly to `flow`, but in contrast to `flow`, `mebos`
only creates the deck in the common code path, whilst the
'EclipseState' and the other higher-level parser objects are always
created internally by the vanguard. this approach avoids code
duplication and the worst effects of parser API creep.
to avoid having to compile non-trivial compile units multiple times,
the actual code of the variants is moved into `ebos_$VARIANT.{hh,cc}`
files and each variant's compile units are put into a small static
library, whilst the entry points of said libraries are invoked by
either the multiplexed or the respective specialized simulator's
`main()`. This is also somewhat similar to how `flow` works, with the
difference that `mebos` uses the blackoil variant to determine the
parameters it needs to know for parsing the deck instead of
introducing a "fake" type tag for this. The rationale is to reduce
compile time compared to the "fake type tag" approach and, to a lesser
extent, to avoid unnecessary copy-and-pasting of code. In particular,
this means that for the vast majority of cases, only one place needs
to be changed in the code for all `ebos` variants if, for example, the
parser API requires further objects in the future.
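A hypothetical sketch of the per-variant layout; the variant list and file names are only examples:
```
foreach(variant blackoil oilwater gasoil)
  # One small static library per variant, built from ebos_$VARIANT.cc.
  add_library(ebos_lib_${variant} STATIC ebos_${variant}.cc)
endforeach()
# The multiplexed simulator links against all variant libraries and
# dispatches to the appropriate entry point.
add_executable(mebos mebos.cc)
target_link_libraries(mebos ebos_lib_blackoil ebos_lib_oilwater ebos_lib_gasoil)
```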
for some reason, this yields quite different results for norne than
the default variant, e.g. when comparing PRESSURE, we get
```
> compareECL -k PRESSURE -t UNRST ebos/NORNE_ATW2013 ebos_altidx/NORNE_ATW2013 1 1e-4
Comparing 'ebos/NORNE_ATW2013' to 'ebos_altidx/NORNE_ATW2013'.
Comparing PRESSURE...
Occurrence in first file = 9
Occurrence in second file = 9
Value index = 0
(first value, second value) = (254.195, 253.191)
Program threw an exception: [/home/and/src/opm-common/build-cmake/fake-src/examples/test_util/EclRegressionTest.cpp:161] Deviations exceed tolerances.
The absolute deviation is 1.00311, and the tolerance limit is 1.
The relative deviation is 0.00394624, and the tolerance limit is 0.0001.
```
IMO this is a bug, but the reasons for it are currently unknown.
this simply excludes the disabled simulators from `make all` while
`make flow` will continue to work even if the cmake variable
`BUILD_FLOW` was set to `OFF`. This requires a small patch for
opm-common.
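A sketch of the mechanism, assuming a target named `flow` and the `BUILD_FLOW` option mentioned above:
```
if(NOT BUILD_FLOW)
  # Keep the target available for `make flow`, but drop it from `make all`.
  set_target_properties(flow PROPERTIES EXCLUDE_FROM_ALL TRUE)
endif()
```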
these variants should cover most of the common use cases. That said,
there are no plans to provide simulators for combinations of blackoil
extensions or a "multiplexing" simulator like `flow`: If someone is
interested in e.g., an oil-water simulator with polymer and energy
enabled, a separate self-compiled executable should be added locally.
for some reason, libraries produced for a module are not linked to the
executables of the module by the default build system. so far this did
not matter for `ebos` but with this PR, it starts using stuff from
`libopmsimulators`...
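so the module library has to be linked explicitly, roughly like this sketch (target names follow the text):
```
# Link the module's own library to the module's executable.
target_link_libraries(ebos opmsimulators)
```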