These errors are usually caused by an extra trailing semicolon, which
is easy to put in for symmetry with the other lines. Give a warning on
this, but proceed with the build as usual afterwards.
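A sketch of such a check, with made-up variable names:

    # An empty list entry usually means a stray semicolon: warn about
    # it, collapse it away, and carry on with the build.
    if("${${module}_DEPENDS}" MATCHES ";;" OR "${${module}_DEPENDS}" MATCHES ";$")
      message(WARNING "${module}: ignoring empty dependency (stray semicolon?)")
      string(REGEX REPLACE ";+" ";" ${module}_DEPENDS "${${module}_DEPENDS}")
      string(REGEX REPLACE ";$" "" ${module}_DEPENDS "${${module}_DEPENDS}")
    endif()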
An extra semicolon had snuck in here, which caused the project to
depend on a module called "", which of course failed further down
the line.
This error was previously undetected because we didn't have anything
that depended on opm-upscaling.
If we are building in a sub-directory of the source tree, guess that a
dependent module is built the same way, and that the directory given
is for the suite, not for the project alone.
Allow specifying e.g. --with-dune=/usr, so that the _ROOT variable is
set to e.g. /usr/dune-common, but backtrack to the parent directory
and search directly in the suite directory if we don't find anything
there.
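Roughly, with made-up variable names:

    # Try the conventional sub-directory first, then fall back to the
    # suite directory itself if nothing is found there.
    set(dune-common_ROOT "${with_dune}/dune-common")
    if(NOT EXISTS "${dune-common_ROOT}")
      set(dune-common_ROOT "${with_dune}")
    endif()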
The standard argument handling reports which item failed
(_INCLUDE_DIR, _LIBRARY and HAVE_), but it doesn't provide information
about the root that was used to search for these. Given that we now
automatically search a lot of other places, the _ROOT variables should
be reported before the error is raised.
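Something along these lines, with SuperLU as a stand-in example:

    # Print the root that was searched before the standard handler
    # aborts, so the error message becomes actionable.
    if(NOT SuperLU_LIBRARY OR NOT SuperLU_INCLUDE_DIR)
      message(STATUS "SuperLU_ROOT was \"${SuperLU_ROOT}\"")
    endif()
    include(FindPackageHandleStandardArgs)
    find_package_handle_standard_args(SuperLU DEFAULT_MSG
      SuperLU_LIBRARY SuperLU_INCLUDE_DIR)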
Instead of having to specify each and every project in a suite such as
DUNE or OPM, allow a root directory for the entire suite to be specified,
and then assume that each module is located in a sub-directory of it.
Each individual project can still have its path explicitly specified,
if a special build of it is required.
The suite variable itself will be written to cache by being specified
on the command line. Each individual project directory will NOT be
written to cache, because then we cannot get rid of it by specifying a
new suite directory.
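A sketch of the intended mechanics (the variable names are assumptions):

    # The suite root arrives via -DDUNE_ROOT=... and is thus cached by
    # CMake itself; the per-module path is derived as a regular
    # variable, so pointing DUNE_ROOT elsewhere takes effect next run.
    if(DUNE_ROOT AND NOT dune-common_ROOT)
      set(dune-common_ROOT "${DUNE_ROOT}/dune-common")   # deliberately NOT CACHE
    endif()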
When passing libraries to gcc/ld, the search path and library name must
be specified separately. This is already done when writing a libtool .la
and should be done in the pkg-config .pc file too.
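The gist, sketched for a single library variable:

    # Split a full library path into the separate -L/-l form that the
    # Libs: field of a .pc file expects.
    get_filename_component(_lib_dir  "${SUPERLU_LIBRARY}" PATH)
    get_filename_component(_lib_name "${SUPERLU_LIBRARY}" NAME_WE)
    string(REGEX REPLACE "^lib" "" _lib_name "${_lib_name}")
    set(_pc_libs "-L${_lib_dir} -l${_lib_name}")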
If there is a sibling directory with the name of the module we are
searching for, it is probably part of the same suite and is the
version we intend to use, in preference to the system version.
This behaviour can be altered with the option SIBLING_SEARCH.
Note, however, that if the sibling is a source tree, it cannot guess
which subdir the build tree is in. You will then probably end up
with headers from the sibling and libraries from the system!
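The search itself is along these lines (a sketch, not the actual code):

    option(SIBLING_SEARCH "Search sibling directories for modules" ON)
    if(SIBLING_SEARCH)
      # Prefer a sibling checkout of the module over the system paths.
      get_filename_component(_parent "${PROJECT_SOURCE_DIR}" PATH)
      set(_search_roots "${_parent}/${module}" ${_search_roots})
    endif()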
Commit b6cdc06b introduced heuristics to look in the parent directory
for header files alone, while leaving the path for binary files
untouched. This is much better than adjusting the path, because this
way one does not risk confusing two build directories.
If we give an explicit directory path, it is because we want a special
version to be used instead of the system version; if there are any
problems with that, we should know up front instead of silently
starting to use the system version again!
When writing the variable HAVE_SUPERLU to the config.h file, an empty
string will be interpreted as "not defined". Thus, we can use both
#if and #ifdef preprocessor directives to check for it. If we use zero,
then we must be careful to always use #if, never #ifdef.
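Illustrated with a #cmakedefine template (a sketch, not necessarily the
exact mechanism used):

    # config.h.in contains:  #cmakedefine HAVE_SUPERLU 1
    set(HAVE_SUPERLU "")   # empty/falsy: no #define is emitted, so both
                           # "#if HAVE_SUPERLU" and "#ifdef HAVE_SUPERLU" are false
    configure_file(config.h.in config.h)
    # By contrast, "#cmakedefine01 HAVE_SUPERLU" always emits 0 or 1,
    # which makes "#ifdef HAVE_SUPERLU" true even when the value is 0.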
If the user has given a path to the module, then the system paths should
not be searched, as these may contain an old and outdated version. We
don't necessarily want that just because there was a problem with our
own installation!
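In find-module terms, something like:

    # With an explicit root, restrict the search to it so that a
    # breakage surfaces immediately instead of a silent fallback to
    # the system version.
    if(SuperLU_ROOT)
      find_library(SuperLU_LIBRARY NAMES superlu
                   PATHS "${SuperLU_ROOT}/lib" NO_DEFAULT_PATH)
    else()
      find_library(SuperLU_LIBRARY NAMES superlu)
    endif()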
If the dependencies have files whose relative paths are ambiguous with
our own, we want our version to be the first candidate for inclusion.
This is a variant of the reason why we always include the build
directory first (and still do).
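That is, roughly:

    # Put our own build (and source) directory ahead of any dependency
    # include paths, so ambiguous relative includes resolve to our files.
    include_directories(BEFORE "${PROJECT_BINARY_DIR}" "${PROJECT_SOURCE_DIR}")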
This change is similar to commit 89be4e14: after find_package_append_to
changed from a function to a macro, in order to pick up the configuration
not only from the module itself but also from everything it pulled in,
the variable MODULE is overwritten (the lower-case variable module is a
parameter, so it is substituted in the source body). Thus, the test at
the end is not whether *this* module was found, but whether its last
dependency was! This made the build crash in some projects but not in
others.
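The mechanism in miniature (the names are illustrative, not the actual
code):

    # In a macro, "module" is substituted textually, but MODULE is an
    # ordinary variable shared with the caller.
    macro(find_and_append module)
      string(TOUPPER "${module}" MODULE)   # clobbers the caller's MODULE
      # ... recursing into dependencies overwrites MODULE again, so a
      # final test of ${MODULE}_FOUND checks the *last* dependency.
    endmacro()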
Some libraries require more information than what is present in the
xxx-config.cmake file; e.g., the caller must know whether HAVE_TUPLE
was available (and presumably used) when dune-common was compiled, and
put this in its own config.h file.
Code to take care of these variables must therefore live in the client
configuration. The same code is already used to handle the autotools
version, namely the find module, so a practical solution is to just
revert to that in both cases.
The previous implementation was a function, which, although OK from an
implementation standpoint -- the local variables don't pollute the
global namespace -- would not allow variables set in indirect
dependencies to bubble up to the main module. This is a problem for
modules which depend on configuration variables being present.
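The scoping difference in a nutshell:

    function(probe_as_function)
      set(HAVE_FEATURE 1)    # local: invisible to the caller
    endfunction()
    macro(probe_as_macro)
      set(HAVE_FEATURE 1)    # runs in the caller's scope: bubbles up
    endmacro()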
Originally, I added FindSuperLU in uppercase since the variables it
returned had that case. The scripts are now patched to search for the
uppercase variant amongst the variables as well, so the module name can
retain its original case (and for compatibility with DUNE).
For compatibility with packages that believe that every variable
should be in uppercase, we try this variant when adding relevant
variables to the project.
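Sketched, for the _FOUND variable:

    # Also consult the uppercased spelling when importing a module's
    # result variables.
    string(TOUPPER "${module}" _MODULE)
    if(DEFINED ${_MODULE}_FOUND AND NOT DEFINED ${module}_FOUND)
      set(${module}_FOUND "${${_MODULE}_FOUND}")
    endif()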
A substring can of course not be longer than the full string. This also
fixes problems with CMake versions that don't handle out-of-range
parameters to the SUBSTRING sub-command.
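Hence a guard roughly like this:

    # Clamp before calling SUBSTRING, since some CMake versions error
    # out on out-of-range parameters.
    string(LENGTH "${haystack}" _len)
    string(LENGTH "${prefix}" _pre_len)
    if(_pre_len GREATER _len)
      set(_matches FALSE)    # a substring cannot be longer than the string
    else()
      string(SUBSTRING "${haystack}" 0 ${_pre_len} _head)
      string(COMPARE EQUAL "${_head}" "${prefix}" _matches)
    endif()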
If we are on a 64-bit machine, there is no point in searching lib32,
and vice versa. Quite the opposite: it can only end badly if a library
is actually found in the wrong architecture directory.
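For example:

    # Only add the library sub-directory that matches the pointer size
    # of the current architecture.
    if(CMAKE_SIZEOF_VOID_P EQUAL 8)
      list(APPEND _lib_dirs "${_root}/lib64")
    else()
      list(APPEND _lib_dirs "${_root}/lib32")
    endif()
    list(APPEND _lib_dirs "${_root}/lib")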
dunecontrol checks for a dune.module file to regard a directory as
containing the module. If we put this file in a sub-directory of the
source, it will get confused, so we shouldn't. There shouldn't be any
conflicting use-cases, as one cannot have several modules in the sub-
directory of one source (!?).
Instead of having a separate flag named after each module, use the
available standard option. Mixing shared objects and static libraries
in the same build is not a very realistic scenario anyway.
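Assuming the standard option in question is BUILD_SHARED_LIBS, the idea
is simply (the target name is hypothetical):

    # One global switch instead of a per-module ${module}_SHARED flag;
    # add_library() then builds shared or static from BUILD_SHARED_LIBS.
    option(BUILD_SHARED_LIBS "Build shared instead of static libraries" OFF)
    add_library(opmcore ${opmcore_SOURCES})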
This enables us to call configure without actually having a particular
module; the script may then be used on a group level.
This duplicates functionality from the old autotools implementation,
in case any user code needs it; it is not needed to build the OPM
modules themselves.