We still request only OpenCL standard version 1.2.
We need to use KernelFunctor instead of make_kernel.
In addition, cl::Program::Sources now works on std::string and
does not support std::pair<const char*, size_t> anymore.
Unfortunately, we cannot use the imported targets. They add some compile
parameters via generator expressions based on CXX_COMPILER_ID.
While we use the system C++ compiler for most of the code, some
CUDA code is compiled with nvcc, which at least for some versions does
not support -Wno-catch-value (which gets passed as a normal compiler
option).
There is no AMGCL_INCLUDE_DIRS when using find_package. We now query
the target amgcl::amgcl for INTERFACE_INCLUDE_DIRECTORIES and store the
result in AMGCL_INCLUDE_DIRS.
Note that we cannot link the amgcl::amgcl target to libopmsimulators, as
this would set the -fopenmp flag for all source files and make
compilation with nvcc fail.
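A minimal sketch of that query (assuming the amgcl package was already found and provides the imported target):
```
find_package(amgcl)
# read the include paths off the imported target instead of linking it
get_target_property(AMGCL_INCLUDE_DIRS amgcl::amgcl
                    INTERFACE_INCLUDE_DIRECTORIES)
include_directories(${AMGCL_INCLUDE_DIRS})
```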
building a whole simulator for this, and then not even
running a test for it, seems rather excessive. if a test for
index-conformance is wanted, a better approach should be taken.
Fixes:
```
CMake Error at CMakeLists.txt:458 (target_link_libraries):
  The keyword signature for target_link_libraries has already been used with
  the target "opmsimulators". All uses of target_link_libraries with a
  target must be either all-keyword or all-plain.

  The uses of the keyword signature are here:

   * /var/lib/jenkins/workspace/opm-common-PR-builder/mpi/install/share/opm/cmake/Modules/OpmCompile.cmake:61 (target_link_libraries)

-- Configuring incomplete, errors occurred!
```
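For context, CMake raises this error when the plain and keyword signatures of target_link_libraries are mixed on one target, roughly like this (the library names here are made up):
```
# keyword signature (as used in OpmCompile.cmake)
target_link_libraries(opmsimulators PUBLIC somelib)
# plain signature on the same target -> CMake errors out
target_link_libraries(opmsimulators otherlib)
```
The fix is to use one signature consistently for the target.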
We actually already require at least CMake 2.8.12 due to the embedded
pybind11 (some of its tests even require 3.0). Anyway, as Ubuntu LTS ships
CMake 3.10.2, I doubt that anything less is tested by us.
Also, tentative changes to compile the FPGA library from a different module: this part needs to be revised because it assumes a fixed path for the OPM/FPGA module.
Modified CMakeLists_files.cmake to remove files moved to the OPM/FPGA module.
If fmtlib is present on the system we use that one
in normal mode (not header-only). Otherwise we
fall back to the embedded one in header-only mode.
Searching for the library is done in opm-common.
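The fallback could look roughly like this (a sketch; the actual search lives in opm-common, and the embedded include path is an assumption):
```
find_package(fmt QUIET)
if(fmt_FOUND)
  # use the system fmtlib in normal (compiled) mode
  target_link_libraries(opmsimulators PUBLIC fmt::fmt)
else()
  # fall back to the embedded copy in header-only mode
  target_compile_definitions(opmsimulators PUBLIC FMT_HEADER_ONLY)
  target_include_directories(opmsimulators SYSTEM PUBLIC
                             ${PROJECT_SOURCE_DIR}/external/fmtlib/include)
endif()
```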
The executable is named flow_distribute_z and uses the external
load balancing information. It can be used to test the distributed
standard wells on SPE9 with 4 or more processes.
due to bugs in the openmpi on bionic, this test fails to
execute properly in pbuilder environments. instead
of rebuilding openmpi without dynamic loading
(which is the suggested fix) and potentially breaking users'
systems, this is a non-intrusive workaround to be used
for packaging.
also add an explicit option for python support to make it
visible in cmake frontends.
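the new switch is presumably something along these lines (option name and default are assumptions):
```
option(OPM_ENABLE_PYTHON "Build the Python bindings" OFF)
```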
Currently the simulator creates the polyhedral grid from an eclGrid from opm-common.
TODO:
- make it possible to create the grid directly from DGF or MRST format
- fix issue on norne.
At least on Debian 10 the standard C++ compiler is g++-8,
but CUDA only supports g++-7, and our test for CUDA in cmake
errored out for that combination, which is quite
annoying.
The reason was that check_language(CUDA) did not honor
the CMAKE_CUDA_FLAGS variable and always used the default g++-7,
but enable_language(CUDA) did honor it. As we do set the underlying
host compiler, the former reported that CUDA is available while the
latter marked the CUDA compiler as broken.
With this commit we work around this by setting the environment
variable ENV{CUDAHOSTCXX}, which nvcc will use. Hence we now only try
to enable CUDA if it is compatible with the C++ compiler.
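A condensed sketch of the workaround (assuming the host compiler is the one CMake already picked):
```
include(CheckLanguage)
# nvcc picks its host compiler up from this environment variable,
# so check_language() and enable_language() now agree
set(ENV{CUDAHOSTCXX} ${CMAKE_CXX_COMPILER})
check_language(CUDA)
if(CMAKE_CUDA_COMPILER)
  enable_language(CUDA)
endif()
```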
It seems like the VERSION_GREATER_EQUAL operator for version
comparisons was introduced after CMake 3.6, and hence the current
check of whether to activate CUDA is broken in version 3.6 and
below.
This PR fixes this by using VERSION_GREATER.
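The rewrite is of this kind (assuming the threshold is CMake 3.8, as for the CUDA activation):
```
# VERSION_GREATER_EQUAL needs CMake >= 3.7, so test "greater than 3.7.99"
# instead of "greater or equal 3.8"
if(CMAKE_VERSION VERSION_GREATER 3.7.99)
  # ... activate CUDA ...
endif()
```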
Closes #2375.
We experienced weird linker errors when using a host compiler version for
compilation that was not supported by the nvcc used to compile the
cuda code:
```
[ 15%] Linking CXX executable bin/test_timer
/usr/bin/ld: /home/mblatt/src/dune/opm-2.6/opm-common/opm-seq/lib/libopmcommon.a(Parser.cpp.o): in function `Opm::(anonymous namespace)::file& std::vector<Opm::(anonymous namespace)::file, std::allocator<Opm::(anonymous namespace)::file> >::emplace_back<std::filesystem::__cxx11::path&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&>(std::filesystem::__cxx11::path&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&)':
Parser.cpp:(.text+0x1096): undefined reference to `std::filesystem::__cxx11::path::_M_split_cmpts()'
/usr/bin/ld: Parser.cpp:(.text+0x10ad): undefined reference to `std::filesystem::__cxx11::path::_M_split_cmpts()'
/usr/bin/ld: /home/mblatt/src/dune/opm-2.6/opm-common/opm-seq/lib/libopmcommon.a(Parser.cpp.o): in function `Opm::(anonymous namespace)::ParserState::loadFile(std::filesystem::__cxx11::path const&)':
Parser.cpp:(.text+0x23a1): undefined reference to `std::filesystem::canonical(std::filesystem::__cxx11::path const&)'
/usr/bin/ld: Parser.cpp:(.text+0x24e0): undefined reference to
`std::filesystem::__cxx11::path::_M_split_cmpts()'
```
The reason turned out to be that the library path was built up from
paths of the old (g++-7) compiler used by nvcc and of the actual (newer)
compiler g++-8. This completely messed up the linker paths for CMake.
To detect this situation already when running cmake, we have resorted
to first setting CMAKE_CUDA_FLAGS to force nvcc to use
the host compiler, and to activating CUDA (if available) before calling
`find_package(CUDA)`. If the host compiler is not supported, CMake will
error out during `enable_language(CUDA)`.
Note that we still use the (deprecated) FindCUDA later to determine the
libraries to link to.
The user has the option to either deactivate CUDA by setting
`-DCMAKE_DISABLE_FIND_PACKAGE_CUDA=ON` or to use a compiler supported
by nvcc (setting `-DCMAKE_CXX_COMPILER=compiler`).
Additionally, we do not try to activate CUDA if the CMake version is <
3.8. Please note that previously CMake would have errored out here
anyway, since we used the unsupported `enable_language(CUDA)` even in
this case.
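Put together, the detection order described above looks roughly like this (a sketch; the exact -ccbin flag syntax for selecting nvcc's host compiler is an assumption):
```
if(CMAKE_VERSION VERSION_GREATER 3.7.99)
  # force nvcc to use the same host compiler as the rest of the build
  set(CMAKE_CUDA_FLAGS "-ccbin ${CMAKE_CXX_COMPILER}")
  include(CheckLanguage)
  check_language(CUDA)
  if(CMAKE_CUDA_COMPILER)
    enable_language(CUDA)  # errors out here if the host compiler is unsupported
  endif()
endif()
find_package(CUDA)  # still used (deprecated) to determine the link libraries
```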
Closes #2363.
`mebos` works similarly to `flow`, but in contrast to `flow`, `mebos`
only creates the deck in the common code path, whilst the
'EclipseState' and the other higher-level parser objects are always
created internally by the vanguard. this approach avoids code
duplication and the worst effects of parser API creep.
to avoid having to compile non-trivial compile units multiple times,
the actual code of the variants is moved into `ebos_$VARIANT.{hh,cc}`
files and the respective compile units are each put into a small
static library, whilst the main functions of said libraries are invoked
by either the multiplexed or the respective specialized simulator's
`main()`. This is also somewhat similar to how `flow` works, with the
difference that `mebos` uses the blackoil variant to determine the
parameters it needs to know for parsing the deck instead of
introducing a "fake" type tag for this. The rationale is to reduce
compile time compared to the "fake type tag" approach and -- to a
lesser extent -- to avoid unnecessary copy-and-pasting of code. In
particular, this means that for the vast majority of cases, only one
place needs to be changed in the code for all `ebos` variants if, for
example, the parser API requires further objects in the future.
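in build-system terms, the layout is roughly the following (a sketch with only two variants; target and file names are guesses based on the `ebos_$VARIANT.{hh,cc}` scheme):
```
# one small static library per variant
add_library(ebos_blackoil STATIC ebos_blackoil.cc)
add_library(ebos_gasoil STATIC ebos_gasoil.cc)
# the multiplexed simulator invokes the variants' main functions
add_executable(mebos ebos_main.cc)
target_link_libraries(mebos ebos_blackoil ebos_gasoil)
```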
for some reason, this yields quite different results for norne than
the default variant, e.g. when comparing PRESSURE, we get
```
> compareECL -k PRESSURE -t UNRST ebos/NORNE_ATW2013 ebos_altidx/NORNE_ATW2013 1 1e-4
Comparing 'ebos/NORNE_ATW2013' to 'ebos_altidx/NORNE_ATW2013'.
Comparing PRESSURE...
Occurrence in first file = 9
Occurrence in second file = 9
Value index = 0
(first value, second value) = (254.195, 253.191)
Program threw an exception: [/home/and/src/opm-common/build-cmake/fake-src/examples/test_util/EclRegressionTest.cpp:161] Deviations exceed tolerances.
The absolute deviation is 1.00311, and the tolerance limit is 1.
The relative deviation is 0.00394624, and the tolerance limit is 0.0001.
```
IMO this is a bug, but the reasons for it are currently unknown.
this simply excludes the disabled simulators from `make all` while
`make flow` will continue to work even if the cmake variable
`BUILD_FLOW` was set to `OFF`. This requires a small patch for
opm-common.
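the standard mechanism for this is EXCLUDE_FROM_ALL, presumably applied along these lines (a sketch, not the actual patch):
```
if(NOT BUILD_FLOW)
  # keep the target defined so `make flow` still works,
  # but drop it from the default `make all`
  set_target_properties(flow PROPERTIES EXCLUDE_FROM_ALL TRUE)
endif()
```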
these variants should cover most of the common use cases. That said,
there are no plans to provide simulators for combinations of blackoil
extensions or a "multiplexing" simulator like `flow`: If someone is
interested in, e.g., an oil-water simulator with polymer and energy
enabled, a separate self-compiled executable should be added locally.
for some reason, libraries produced for a module are not linked to the
executables of the module by the default build system. so far this did
not matter for `ebos` but with this PR, it starts using stuff from
`libopmsimulators`...
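the missing link is presumably added explicitly, along the lines of (target names assumed):
```
# by default the module's executables were not linked against its library
target_link_libraries(ebos opmsimulators)
```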
so far, the actual specializations of the simulator were compiled into
the `libopmsimulators` library and the build of the glue code
(`flow.cpp`) thus needed to be deferred until the library was fully
built. Since the compilation of the glue code requires a full property
hierarchy for handling command line parameters, this arrangement
significantly increases the build time for systems with a sufficient
number of parallel build processes. ("sufficient" here means 8 or more
threads, i.e., a quadcore system with hyperthreading is sufficient
provided that it has enough main memory.)
the new approach is not to include these objects in
`libopmsimulators`, but to directly deal with them in the `flow`
binary. this allows all of them and the glue code to be compiled in
parallel.
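concretely, the variant compile units now live in the `flow` target itself (file names as in the build log below; the list is abbreviated):
```
add_executable(flow
  flow/flow.cpp
  flow/flow_ebos_blackoil.cpp
  flow/flow_ebos_gasoil.cpp
  flow/flow_ebos_oilwater.cpp
  # ... remaining variants ...
)
```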
compilation time on my machine before this change:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
Scanning dependencies of target opmsimulators
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_gasoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_blackoil.cpp.o
[ 2%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_solvent.cpp.o
[ 4%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_polymer.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_energy.cpp.o
[ 6%] Building CXX object CMakeFiles/opmsimulators.dir/opm/simulators/flow_ebos_oilwater_polymer.cpp.o
[ 6%] Linking CXX static library lib/libopmsimulators.a
[ 97%] Built target opmsimulators
Scanning dependencies of target flow
[100%] Building CXX object CMakeFiles/flow.dir/examples/flow.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m45.692s
user 8m47.195s
sys 0m11.533s
```
after:
```
> touch ../opm/autodiff/BlackoilModelEbos.hpp; time make -j32 flow 2> /dev/null
[ 91%] Built target opmsimulators
Scanning dependencies of target flow
[ 93%] Building CXX object CMakeFiles/flow.dir/flow/flow.cpp.o
[ 95%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_gasoil.cpp.o
[ 97%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_polymer.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_oilwater.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_solvent.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_blackoil.cpp.o
[100%] Building CXX object CMakeFiles/flow.dir/flow/flow_ebos_energy.cpp.o
[100%] Linking CXX executable bin/flow
[100%] Built target flow
real 1m21.597s
user 8m49.476s
sys 0m10.973s
```
(this corresponds to a ~20% reduction of the time spent waiting for
the compiler.)
Previously, QUIET was passed to the find_package call for opm-common, which would
not even print a message if there was a problem. Now cmake
will fail if OPM_COMMON_ROOT is not set and neither guessing
opm-common_DIR nor searching the default locations succeeds.
If sibling search is activated and opm-common_DIR is not set, then
we try to determine the build directory layout from ${PROJECT_BINARY_DIR}.
The following two possibilities are supported:
+ <modules-build-dir>/<project-name>
+ <project-name>/<build-dir>
where <project-name> is the case-sensitive module name (in this case
opm-common).
This results in the following search precedence:
1. User-set opm-common_DIR
2. Sibling directory search (if the directories exist)
3. Default (system) CMake search path
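A sketch of the sibling guess (directory handling simplified; helper variable names are made up, and the precedence follows from only setting opm-common_DIR while it is still unset):
```
if(NOT opm-common_DIR)
  get_filename_component(_parent ${PROJECT_BINARY_DIR} DIRECTORY)
  get_filename_component(_build ${PROJECT_BINARY_DIR} NAME)
  if(EXISTS ${_parent}/opm-common)
    # layout 1: <modules-build-dir>/<project-name>
    set(opm-common_DIR ${_parent}/opm-common)
  elseif(EXISTS ${PROJECT_SOURCE_DIR}/../opm-common/${_build})
    # layout 2: <project-name>/<build-dir>
    set(opm-common_DIR ${PROJECT_SOURCE_DIR}/../opm-common/${_build})
  endif()
endif()
find_package(opm-common REQUIRED)  # falls through to the system search path
```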
Somehow project() was never called in the OPM modules. But the CMake documentation
actually says:
"The top-level ``CMakeLists.txt`` file for a project must contain a
literal, direct call to the ``project()`` command; loading one
through the ``include()`` command is not sufficient. If no such
call exists CMake will implicitly add one to the top that enables the
default languages (``C`` and ``CXX``).
"
Without it some variables (like CMAKE_PROJECT_NAME) are not correctly defined.
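So the top-level CMakeLists.txt now gets a literal call of this form (project name and languages here are illustrative):
```
# must be a literal, direct call, not loaded via include()
project(opm-simulators C CXX)
```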
`flow_ebos` should now be capable of doing everything that
`flow_legacy` can, and recently all testing seems to have been centered
on `flow_ebos`. note that `flow_ebos` does not yet support some more
advanced/exotic features like MPI, solvents, polymer, etc., but
neither does the plain `flow_legacy` binary. until `flow_ebos`
supports these features, the specialized simulators thus continue to
use the legacy code paths.
* origin/master:
Do not throw for unrecognized file when merging log files.
Do not populate cellData but issue a warning in parallel.
Removed ternary operator in inline initialization.
Correctly mark transfer of ownership for output writer
Indent nested #if
Remove Solution.sdc assignment
Cater for variable name change in BCRSMatrix of DUNE 2.5
Fix using local active cells for writing eclipse files in parallel.
add restart test for SPE1CASE2_ACTNUM
rename the 'flow' binary to 'flow_legacy' and set a symbolic link
Added ctest for restart files
it uses ebos for linearization of the mass balance equations and the
current flow code from opm-simulators for all the rest. currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not optimal: on SPE9, converting from and to the legacy
data structures takes about a third of the time needed for the actual mass
balance assembly. nevertheless `flow_ebos` is almost as fast as plain
`flow` for SPE9. (for Norne `flow_ebos` is about 15% slower, even
though the results match quite closely; the reason is that it
requires more iterations, though it is unclear why.)