Currently, only one IPA environment is tested within Docker
containers. This is not efficient: an Azure agent provides 6 GB of
physical memory and 13 GB of total memory (Feb 2020), while the CPU
is limited to 2 cores.
The following examples are for 'master-only' topologies.
Let's assume that only one member of the GitHub repo runs CI at a
time; this allows using the full capacity of Azure.
Concurrency results for TestInstallMaster:
----------------------------------------------------
| concurrency (tests per job) | time (min) / tests |
----------------------------------------------------
|              5              |       40 / 5       |
|              4              |       34 / 4       |
|              3              |       25 / 3       |
|              2              |       19 / 2       |
|              1              |       17 / 1       |
----------------------------------------------------
The results confirm the 2-core limitation. So, as long as the number
of test tasks does not exceed the maximum number of parallel Azure
jobs (10), the proposed method does not save time, but it reduces the
number of jobs used by up to 2 times (e.g. 19 min at concurrency 2 vs
17 min at concurrency 1). In other words, in this case CI could run
2x as many tests with the same job quota.
But what if CI is triggered by several PRs, or the number of test
tasks exceeds 10? For example, suppose there are 20 tests to be run.
Concurrency results for TestInstallMaster with 20 input test tasks:
---------------------------------------------------------------------
| concurrency (tests per job) | time (min) | jobs used | jobs free |
---------------------------------------------------------------------
|              5              |     40     |     4     |     6     |
|              4              |     34     |     5     |     5     |
|              3              |     25     |     7     |     3     |
|              2              |     19     |    10     |     0     |
|              1              |     34     |    20     |     0     |
---------------------------------------------------------------------
So, in this case the optimal concurrency is 4, since it allows two CI
runs (20 tasks each) to go simultaneously within the 10 parallel jobs
and to get results in 34 minutes for both. In other words, two people
could trigger CI from their PRs without waiting for each other.
New Azure IPA tests workflow:
+ 1) the generate-matrix.py script generates JSON from the user's YAML [0]
  2) Azure generates jobs using the Matrix strategy
  3) each job is run in parallel (up to 10) within its own VM (Ubuntu-18.04):
     a) downloads the prepared Docker container image (artifact, built in the
        Build job) from the Azure cloud and loads it into the local image pool
+    b) GNU 'parallel' launches the IPA environments in parallel (see the
        sketch below):
+       1) docker-compose creates the Docker environment with the required
           number of replicas and/or clients
+       2) the setup_containers.py script applies the needed container
           changes (DNS, SSH, etc.)
+       3) IPA tests are launched on the tests controller
     c) publishes test results in JUnit format to provide comprehensive test
        reporting and analytics via the Azure Web UI [1]
     d) publishes regular system logs as artifacts
[0]: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml
[1]: https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=yaml
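For illustration, step 3b) could look roughly like the sketch below. This is
a hedged sketch only: the environment names, the compose project layout and
the setup_containers.py invocation are assumptions, not the actual scripts.

    # launch one IPA environment identified by its compose project name
    run_ipa_env() {
        name="$1"
        # 1) master plus the required number of replicas/clients
        docker-compose -p "$name" up -d --scale replica=1 --scale client=1
        # 2) DNS, SSH and other in-container tweaks
        python3 setup_containers.py "$name"
        # 3) run the tests from the controller container
        docker exec "${name}_master_1" ipa-run-tests test_integration/test_installation.py
    }
    export -f run_ipa_env
    # GNU parallel runs the environments concurrently, e.g. 4 at a time
    parallel -j 4 run_ipa_env ::: env1 env2 env3 env4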
Fixes: https://pagure.io/freeipa/issue/8202
Signed-off-by: Stanislav Levin <slev@altlinux.org>
Reviewed-By: Alexander Bokovoy <abokovoy@redhat.com>
Azure provides Microsoft-hosted agents with generous resources [0].
As of Feb 2020:
- (Linux only) Run steps in a cgroup that offers 6 GB of physical memory and
  13 GB of total memory
- Provide at least 10 GB of storage for your source and build outputs.
This is enough to set up IPA environments consisting of not only a master but
also replicas and clients, and thus to run IPA integration tests.
New Azure IPA tests workflow:
+ 1) Azure generates jobs using the Matrix strategy
  2) each job is run in parallel (up to 10) within its own VM (Ubuntu-18.04):
     a) downloads the prepared Docker container image (artifact, built in the
        Build job) from the Azure cloud and loads it into the local image pool
+    b) docker-compose creates the Docker environment with the required
        number of replicas and/or clients (see the sketch below)
+    c) the setup_containers.py script applies the needed container changes
        (DNS, SSH, etc.)
+    d) IPA tests are launched on the tests controller
     e) publishes test results in JUnit format to provide comprehensive test
        reporting and analytics via the Azure Web UI [1]
     f) publishes regular system logs as artifacts
[0] https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops
[1] https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=yaml
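For example, step 2b) boils down to something like this (a sketch; the
compose file path and the 'replica'/'client' service names are assumptions):

    # bring up a master plus the requested replicas/clients
    docker-compose -f ipatests/azure/docker-compose.yml up -d \
        --scale replica=2 --scale client=1
    # then step 2c) adjusts DNS, SSH, etc. inside the started containers
    python3 setup_containers.py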
Fixes: https://pagure.io/freeipa/issue/8202
Signed-off-by: Stanislav Levin <slev@altlinux.org>
Reviewed-By: Alexander Bokovoy <abokovoy@redhat.com>
The unit tests' execution time within Azure Pipelines (AP) is not
balanced. One test job (Base) takes ~13 min, while another (XMLRPC)
takes ~28 min. Fortunately, AP supports slicing:
> An agent job can be used to run a suite of tests in parallel. For
example, you can run a large suite of 1000 tests on a single agent.
Or, you can use two agents and run 500 tests on each one in parallel.
To leverage slicing, the tasks in the job should be smart enough to
understand the slice they belong to.
> The step that runs the tests in a job needs to know which test slice
should be run. The variables System.JobPositionInPhase and
System.TotalJobsInPhase can be used for this purpose.
Thus, to support this, pytest should know how to split the test suite
into groups (slices). For this, a new internal pytest plugin was added.
About the plugin:
- Tests within a slice are grouped by test module because not all of the
  tests within a module are independent of each other.
- Slices are balanced by the number of tests within each test module.
- To run a module within its own environment, there is a dedicated-slice
  option (this can help with extremely slow tests).
Examples:
- To split the `test_cmdline` tests into 2 slices and run the first one:
  ipa-run-tests --slices=2 --slice-num=1 test_cmdline
- To split the tests into 2 slices, move one module out into its own slice,
  and run the second slice:
  ipa-run-tests --slices=2 --slice-dedicated=test_cmdline/test_cli.py \
      --slice-num=2 test_cmdline
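Within an Azure job, the slice options can then be wired to the predefined
variables mentioned above (a sketch; the exact invocation in the test
scripts may differ):

    # Azure exposes pipeline variables as environment variables, e.g.
    # System.TotalJobsInPhase -> SYSTEM_TOTALJOBSINPHASE
    ipa-run-tests --slices="${SYSTEM_TOTALJOBSINPHASE:-1}" \
                  --slice-num="${SYSTEM_JOBPOSITIONINPHASE:-1}" \
                  test_cmdline test_ipalib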
Fixes: https://pagure.io/freeipa/issue/8008
Signed-off-by: Stanislav Levin <slev@altlinux.org>
Reviewed-By: Alexander Bokovoy <abokovoy@redhat.com>
By default, only the `last` exit code returned from an Azure script step
is checked and, if non-zero, treated as a step failure. Luckily, for
Linux, `script` is a shortcut for Bash, hence the errexit (-e) option can
be applied. Azure Pipelines, however, doesn't set it by default:
https://github.com/microsoft/azure-pipelines-agent/issues/1803
For a multiline script this is a problem, unless designed otherwise.
Some benefits of checking the result of each subcommand:
- preventing subsequent issues (broken packages, container images, etc.)
- saving time (subsequent steps will not run)
- better diagnostics (it shows which part of the script failed)
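A minimal sketch of such a multiline step (the commands are placeholders):

    set -e                 # fail the step at the first failing command
    prepare_environment    # placeholder: e.g. install build dependencies
    build_artifacts        # placeholder: without 'set -e' a failure here
    run_checks             # would be masked by a successful last command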
Signed-off-by: Stanislav Levin <slev@altlinux.org>
Reviewed-By: Alexander Bokovoy <abokovoy@redhat.com>
Rewrite templates to make test job declarations simpler and easier to
work with.
A test job template can be instantiated this way:
- template: templates/test-jobs.yml
  parameters:
    jobName: Base
    jobTitle: Base tests
    testsToRun:
    - test_cmdline
    - test_install
    - test_ipaclient
    - test_ipalib
    - test_ipaplatform
    - test_ipapython
    testsToIgnore:
    - test_integration
    - test_webui
    - test_ipapython/test_keyring.py
    taskToRun: run-tests
Both 'testsToRun' and 'testsToIgnore' accept arrays of test matches.
Wildcards are also supported:
....
    testsToRun:
    - test_xmlrpc/test_hbac*
....
'taskToRun' specifies a script ipatests/azure/azure-${taskToRun}.sh that
will be executed in the test environment to actually start the tests.
The 'testsToRun' and 'testsToIgnore' parameters define the
TESTS_TO_{RUN,IGNORE} variables that are set in the environment of the
test script. These variables contain the entries from the parameters,
separated by spaces.
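For illustration, a task script could consume these variables roughly like
this (a sketch only; it assumes ipa-run-tests forwards pytest's --ignore
option and that paths are relative to ipatests/, the real azure-run-tests.sh
may differ):

    # entries are space-separated, so plain word splitting is enough
    ignore_opts=""
    for t in $TESTS_TO_IGNORE; do
        ignore_opts="$ignore_opts --ignore=ipatests/$t"
    done
    ipa-run-tests $ignore_opts $TESTS_TO_RUN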
Reviewed-By: Christian Heimes <cheimes@redhat.com>
Sets up a pipeline to run the FreeIPA build and tests in Azure Pipelines.
Azure Pipelines provides 10 free parallel runners for open source projects.
Use them to run the following jobs:
- Build: build RPMs and a Fedora 30 container with them
- Lint: run linting of the source code
- Tox: run py36, pypi, and pylint tests using Tox
- Web UI unit tests: run Web UI unit tests with Grunt/QUnit/PhantomJS
- XMLRPC tests: install a FreeIPA server and run XMLRPC tests against it
All jobs run in Fedora 30 containers. Build, Lint, Tox, and Web UI unit
tests run inside the f30/fedora-toolbox container. The Build job generates
a container with pre-installed FreeIPA packages based on the official
fedora:30 container. All containers are picked up from
registry.fedoraproject.org.
Artifacts from the Build job are pushed to the pipeline storage and reused
in the XMLRPC tests. They are also available for download in the 'Summary'
tab.
XUnit and QUnit output from the tests that produce it is reported in the
'Tests' tab.
Logs from individual steps of each job are available for review in the
'Logs' tab. They can also be downloaded.
Reviewed-By: Christian Heimes <cheimes@redhat.com>
Reviewed-By: Rob Crittenden <rcritten@redhat.com>