Compare commits


414 Commits

Author SHA1 Message Date
Julien Fontanet
d274c34b6b fix(xapi): support ISO 8601 datetime format
Fixes #6082
2022-01-13 15:23:59 +01:00
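As a rough illustration of what such a fix entails (the function name and exact formats here are assumptions, not the actual `@xen-orchestra/xapi` code), a parser can normalize XAPI's compact datetime form before handing either variant to `Date.parse`:

```javascript
// Accept both the compact XAPI form (20220113T14:23:59Z) and the
// extended ISO 8601 form (2022-01-13T14:23:59Z).
function parseXapiDateTime(str) {
  // insert the dashes the extended form expects; leaves extended input untouched
  const normalized = str.replace(/^(\d{4})(\d{2})(\d{2})T/, '$1-$2-$3T')
  const timestamp = Date.parse(normalized)
  if (Number.isNaN(timestamp)) {
    throw new RangeError(`unsupported datetime format: ${str}`)
  }
  return timestamp / 1000 // seconds since epoch
}
```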
Florent BEAUCHAMP
b78a946458 feat(proxy): implement reverse proxies (#6072) 2022-01-13 14:54:10 +01:00
Julien Fontanet
e8a5694d51 feat(backups/_cleanVm): clean orphan mergeState (#6087)
Fixes zammad#4778
2022-01-13 10:41:39 +01:00
Julien Fontanet
514fa72ee2 fix(package.json/jest): vhd-lib no longer has a build step
Introduced by 3a74c71f1
2022-01-12 22:50:49 +01:00
Julien Fontanet
e9ca13aa12 fix(backups/cleanVm): handle zstd-compressed XVAs
Related to zammad#4300
2022-01-12 11:31:09 +01:00
Julien Fontanet
57f1ec6716 chore(backups/_cleanVm/listVhds): make vhds directly a Set 2022-01-11 15:31:56 +01:00
Julien Fontanet
02e32cc9b9 chore(backups/_cleanVm/listVhds): minor simplification
This also removes the incorrect handling of an optional dir in `INTERRUPTED_VHDS_REG`.
2022-01-11 15:09:18 +01:00
Julien Fontanet
902abd5d94 chore: update deps 2022-01-06 13:59:31 +01:00
Julien Fontanet
53380802ec feat(xo-server): limit VM migration concurrency (#6076)
Related to #6065
2022-01-06 09:32:42 +01:00
Julien Fontanet
af5d8d02b6 feat: release 5.66.2 2022-01-05 11:30:29 +01:00
Julien Fontanet
7abba76f03 feat(CHANGELOG): integrate released changes 2022-01-05 10:36:05 +01:00
Julien Fontanet
79b22057d9 feat(xo-web): 5.91.2 2022-01-05 10:34:30 +01:00
Julien Fontanet
366daef718 feat(xo-server): 5.86.3 2022-01-05 10:33:30 +01:00
Julien Fontanet
a5ff0ba799 feat(@xen-orchestra/proxy): 0.17.3 2022-01-05 10:32:42 +01:00
Julien Fontanet
c2c6febb88 feat(@xen-orchestra/backups): 0.18.3 2022-01-05 10:18:02 +01:00
Julien Fontanet
f119c72a7f feat(xo-vmdk-to-vhd): 2.0.3 2022-01-05 10:16:47 +01:00
Julien Fontanet
8aee897d23 feat(vhd-lib): 3.0.0 2022-01-05 10:15:45 +01:00
Florent BEAUCHAMP
729db5c662 fix(backups): race condition in checkBaseVdi preventing delta backup (#6075)
Fixes zammad#4751, zammad#4729, zammad#4665 and zammad#4300
2022-01-05 09:58:06 +01:00
Julien Fontanet
61c46df7bf chore(xo-server): dont pass (unused) httpServer to app 2022-01-03 16:04:18 +01:00
Julien Fontanet
9b1a04338d chore(xo-server): attach express before creating app 2022-01-03 15:46:30 +01:00
Julien Fontanet
d307134d22 chore(xapi/_assertHealthyVdiChain): clearer warnings in case of missing VDI 2021-12-28 18:14:32 +01:00
Julien Fontanet
5bc44363f9 fix(xo-vmdk-to-vhd): fix createReadableSparseStream import
Introduced by 3a74c71f1

Fixes #6068
2021-12-23 23:40:58 +01:00
Julien Fontanet
68c4fac3ab chore: update deps 2021-12-23 13:25:48 +01:00
Julien Fontanet
6ad9245019 feat: release 5.66.1 2021-12-23 13:25:08 +01:00
Julien Fontanet
763cf771fb feat(CHANGELOG): integrate released changes 2021-12-23 12:18:50 +01:00
Julien Fontanet
3160b08637 feat(xo-web): 5.91.1 2021-12-23 12:18:14 +01:00
Julien Fontanet
f8949958a3 feat(xo-server): 5.86.2 2021-12-23 12:17:54 +01:00
Julien Fontanet
8b7ac07d2d feat(@xen-orchestra/proxy): 0.17.2 2021-12-23 12:17:25 +01:00
Julien Fontanet
044df9adba feat(@xen-orchestra/backups): 0.18.2 2021-12-23 12:16:53 +01:00
Julien Fontanet
040139f4cc fix(backups/cleanVm): computeVhdSize can return undefined 2021-12-23 12:09:11 +01:00
Julien Fontanet
7b73bb9df0 chore: format with Prettier 2021-12-23 12:06:11 +01:00
Julien Fontanet
24c8370daa fix(xo-server-test): add missing ESLint config 2021-12-23 11:58:14 +01:00
Julien Fontanet
029c4921d7 fix(backups/RemoteAdapter#isMergeableParent): #useVhdDirectory is a function (#6070)
Fixes zammad#4646
Fixes https://xcp-ng.org/forum/topic/5371/delta-backup-changes-in-5-66

Introduced by 5d605d1bd
2021-12-23 11:57:51 +01:00
Julien Fontanet
3a74c71f1a chore(vhd-lib): remove build step
BREAKING:
- removes `dist/` in the path of sub-modules
- requires Node >=12
2021-12-23 10:31:29 +01:00
Julien Fontanet
6022a1bbaa feat(normalize-packages): delete unused Babel configs 2021-12-23 09:26:00 +01:00
Julien Fontanet
4e88c993f7 chore: update dev deps 2021-12-22 11:07:25 +01:00
Julien Fontanet
c9a61f467c fix(xo-web/Dashboard/Health): handle no default_SR
Fixes zammad#4640

Introduced by 7bacd781c
2021-12-22 10:33:18 +01:00
Julien Fontanet
e6a5f42f63 feat: release 5.66.0 2021-12-21 18:00:39 +01:00
Julien Fontanet
a373823eea feat(xo-server): 5.86.1 2021-12-21 17:58:02 +01:00
Julien Fontanet
b5e010eac8 feat(@xen-orchestra/proxy): 0.17.1 2021-12-21 17:57:47 +01:00
Julien Fontanet
50ffe58655 feat(@xen-orchestra/backups): 0.18.1 2021-12-21 17:56:55 +01:00
Julien Fontanet
07eb3b59b3 feat(@xen-orchestra/mixins): 0.1.2 2021-12-21 17:56:52 +01:00
Julien Fontanet
5177b5e142 chore(backups/RemoteAdapter): remove default value for vhdDirectoryCompression
Introduced by 3c984e21c
2021-12-21 17:51:23 +01:00
Julien Fontanet
3c984e21cd fix({proxy,xo-server}): add backup.vhdDirectoryCompression setting
Introduced by 5d605d1bd
2021-12-21 17:49:43 +01:00
Julien Fontanet
aa2b27e22b fix(mixins/Config#get): fix missing entry error message 2021-12-21 17:37:07 +01:00
Julien Fontanet
14a7f00c90 chore(CHANGELOG): remove non-breakable spaces 2021-12-21 17:31:51 +01:00
Julien Fontanet
56f98601bd feat(CHANGELOG): integrate released changes 2021-12-21 17:24:19 +01:00
Julien Fontanet
027a8c675e feat(@xen-orchestra/proxy): 0.17.0 2021-12-21 17:22:29 +01:00
Julien Fontanet
bdaba9a767 feat(xo-server): 5.86.0 2021-12-21 17:22:07 +01:00
Julien Fontanet
4e9090f60d feat(@xen-orchestra/backups): 0.18.0 2021-12-21 17:21:37 +01:00
Julien Fontanet
73b445d371 feat(xo-vmdk-to-vhd): 2.0.2 2021-12-21 17:21:10 +01:00
Julien Fontanet
75bfc283af feat(vhd-lib): 2.1.0 2021-12-21 17:20:36 +01:00
Julien Fontanet
727de19b89 feat(@xen-orchestra/xapi): 0.8.5 2021-12-21 17:20:06 +01:00
Florent BEAUCHAMP
5d605d1bd7 feat(backups): compress VHDs on S3 (#5932) 2021-12-21 17:18:27 +01:00
Julien Fontanet
ffdd1dfd6f fix(xo-vmdk-to-vhd): avoid requiring whole vhd-lib
This library is used in the browser and a lot of parts of `vhd-lib` are not intended to be used in (or bundled for) the browser.
2021-12-21 17:10:33 +01:00
Julien Fontanet
d45418eb29 fix(backups/cleanVm): metadata.vhds is an object, not an array
Introduced by 93069159d
2021-12-21 16:23:03 +01:00
Julien Fontanet
6ccc9d1ade fix(xapi/VM_create): support NVRAM field (#6062)
Fixes #6054
Fixes https://xcp-ng.org/forum/topic/5319/bug-uefi-boot-parameters-not-preserved-with-delta-backups
2021-12-20 16:30:41 +01:00
Florent BEAUCHAMP
93069159dd fix(backups/cleanVm): don't warn on size change due to merged VHDs (#6010) 2021-12-20 14:57:54 +01:00
Julien Fontanet
8c4780131f feat: release 5.65.3 2021-12-20 10:50:51 +01:00
Julien Fontanet
02ae8bceda fix(backups/cleanVm): dont fail on broken metadata 2021-12-20 09:49:27 +01:00
Julien Fontanet
bb10bbc945 chore(backups/cleanVm): remove deleted files from jsons 2021-12-20 09:46:09 +01:00
Florent BEAUCHAMP
478d88e97f fix(fs/s3#_rmtree): infinite loop (#6067) 2021-12-17 16:01:57 +01:00
Florent BEAUCHAMP
6fb397a729 fix(vhd-lib): parseVhdStream int overflow when rebuilding the bat (#6066)
The BAT should contain sector addresses, not byte addresses.

We were not really rebuilding the BAT: the data read from the old BAT was written as-is into the new one.
2021-12-17 14:28:48 +01:00
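A minimal sketch of this bug class (names hypothetical, not the actual `parseVhdStream` code): a BAT entry is an unsigned 32-bit *sector* address, so writing byte addresses overflows 32 bits as soon as offsets pass 4 GiB, while sector addresses stay valid up to 2 TiB:

```javascript
const SECTOR_SIZE = 512
const UINT32_MAX = 2 ** 32 - 1

// Convert a byte offset into the sector address a BAT entry expects.
function toBatEntry(byteOffset) {
  const sectorAddress = byteOffset / SECTOR_SIZE
  if (!Number.isInteger(sectorAddress) || sectorAddress > UINT32_MAX) {
    throw new RangeError(`not a valid BAT sector address: ${byteOffset}`)
  }
  return sectorAddress
}
```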
Julien Fontanet
18dae34778 feat(vhd-lib/parseVhdStream): new public method (#6063)
Extracted from `createVhdDirectoryFromStream`

Co-authored-by: Florent Beauchamp <flo850@free.fr>
2021-12-17 10:08:29 +01:00
Julien Fontanet
243566e936 fix(xen-api): named import for @vates/coalesce-calls
Introduced by 87f4fd675
2021-12-16 14:00:49 +01:00
Julien Fontanet
87f4fd675d fix(xen-api): fix coalesceCalls
Introduced by dec6b59a9
2021-12-16 13:26:31 +01:00
Julien Fontanet
dec6b59a9f chore(xen-api): use @vates/coalesce-calls 2021-12-16 12:03:07 +01:00
Rajaa.BARHTAOUI
e51baedf7f feat: technical release (#6060) 2021-12-16 12:01:57 +01:00
Julien Fontanet
530da14e24 feat(@vates/decorate-with): 1.0.0 2021-12-16 11:49:29 +01:00
Julien Fontanet
02da7c272f feat(decorate-with): perInstance helper 2021-12-16 11:48:48 +01:00
Pierre Donias
a07c5418e9 feat(xo-server,xo-web): disable HA during Rolling Pool Update (#6057)
See #5711
2021-12-16 10:29:13 +01:00
Mathieu
c080db814b feat(xo-web/home/backed up VMs): filter out VMs in disabled backup jobs (#6037)
See xoa-support#4294
2021-12-16 10:06:45 +01:00
Julien Fontanet
3abe13c006 chore(backups/RemoteAdapter#deleteVmBackups): report unsupported backup modes
It was removed in 7e302fd1c
2021-12-16 10:05:08 +01:00
Julien Fontanet
fb331c0a2c fix(backups/RemoteAdapter#deleteVmBackups): dont delete undefined
Fixes https://xcp-ng.org/forum/topic/5331/backup-smart-mode-broken/6
Introduced by 7e302fd1c
2021-12-16 10:03:16 +01:00
Julien Fontanet
19ea78afc5 fix(xo-server): fix job matching for smart mode
Fixes https://xcp-ng.org/forum/topic/5331/backup-smart-mode-broken
Fixes #6058

Introduced by cf9f0da6e

XO VM objects have an `other` field instead of `other_config`.
2021-12-15 23:25:04 +01:00
Julien Fontanet
2096c782e3 feat(xo-server/api): new method backupNg.deleteVmBackups
Related to 7e302fd1c
2021-12-15 17:36:47 +01:00
Julien Fontanet
79a6a8a10c feat(proxy/api): new method backup.deleteVmBackups
Related to 7e302fd1c
2021-12-15 17:27:08 +01:00
Julien Fontanet
5a933bad93 fix(vhd-lib/merge): dont fail on invalid state file
Fixes zammad#4227
2021-12-15 16:36:18 +01:00
Julien Fontanet
7e302fd1cb feat(backups/RemoteAdapter): new method deleteVmBackups()
It's usually best to delete multiple backups at once instead of one by one because it allows some optimizations, for instance when merging unused VHDs.

This was already possible in private methods but not exposed in the public API.
2021-12-15 16:34:35 +01:00
Julien Fontanet
cf9f0da6e5 fix(backups): ignore VMs created by current job
See xoa-support#4271
2021-12-14 12:07:51 +01:00
Pierre Donias
10ac23e265 feat(docs/Netbox): specify minimum required permissions (#6047)
See https://xcp-ng.org/forum/topic/5300/
2021-12-14 11:48:19 +01:00
Rajaa.BARHTAOUI
dc2e1cba1f feat(xo-web/pool,VM/advanced): ability to set suspend SR (#6044)
Fixes #4163
2021-12-14 10:13:11 +01:00
Julien Fontanet
7bfd190c22 fix(backups/_VmBackup): no base VM when no base VDIs found
Introduced by 5b188f35b
2021-12-13 17:55:54 +01:00
Manon Mercier
c3bafeb468 fix(docs/xoa): must reboot after changing password (#6056)
Co-authored-by: Jon Sands <fohdeesha@gmail.com>
2021-12-13 17:48:14 +01:00
Mathieu
7bacd781cf feat(xo-web/health): display non shared default SRs (#6033)
Fixes #5871
2021-12-13 10:00:24 +01:00
Julien Fontanet
ee005c3679 fix(xo-server-usage-report): csv-stringify usage
Fixes #6053

Introduced by b179dc1d5
2021-12-13 09:39:00 +01:00
Florent BEAUCHAMP
315f54497a fix(vhd-lib/resolveAlias): limit size (#6004)
The current resolution loads the whole file in memory, which can lead to a crash if the alias is malformed (for example a full VHD named `.alias.vhd`).
2021-12-12 14:19:40 +01:00
Julien Fontanet
e30233347b feat: release 5.65.2 2021-12-10 17:19:54 +01:00
Julien Fontanet
56d4a7f01e fix(xo-web/about): show commit iff available & always versions
Fixes #6052
2021-12-10 16:40:26 +01:00
Julien Fontanet
7c110eebd8 feat(CHANGELOG): integrate released changes 2021-12-10 12:06:26 +01:00
Julien Fontanet
39394f8c09 feat(@xen-orchestra/proxy): 0.15.5 2021-12-10 12:04:33 +01:00
Julien Fontanet
3283130dfc feat(xo-server): 5.84.3 2021-12-10 12:04:33 +01:00
Julien Fontanet
3146a591d0 feat(@xen-orchestra/backups): 0.16.2 2021-12-10 12:04:30 +01:00
Julien Fontanet
e478b1ec04 feat(vhd-lib): 2.0.3 2021-12-10 12:04:00 +01:00
Julien Fontanet
7bc4d14f46 feat(@xen-orchestra/fs): 0.19.2 2021-12-10 11:38:20 +01:00
Florent BEAUCHAMP
f3eeeef389 fix(fs/S3#list): should not look into all the file tree (#6048) 2021-12-09 14:06:21 +01:00
Florent BEAUCHAMP
8d69208197 fix(backups/MixinBackupWriter#_cleanVm): always returns an object (#6050) 2021-12-09 10:38:31 +01:00
Julien Fontanet
2c689af1a9 fix(backups): add random suffixes to task files to avoid collisions
See https://xcp-ng.org/forum/post/44661
2021-12-08 18:03:13 +01:00
Julien Fontanet
cb2a34c765 feat(backups/RemoteAdapter#cleanVm): show missing VHDs
Related to investigation on zammad#4156
2021-12-08 11:41:18 +01:00
Julien Fontanet
465c8f9009 feat(xo-web/about): show build commit in sources version (#6045) 2021-12-08 09:57:15 +01:00
Florent BEAUCHAMP
8ea4c1c1fd fix(vhd-lib): output parent locator in VhdAbstract#stream (#6035) 2021-12-07 14:14:39 +01:00
Julien Fontanet
ba0f7df9e8 feat(xapi-explore-sr): 0.4.1 2021-12-06 17:59:27 +01:00
Julien Fontanet
e47dd723b0 fix(xapi-explore-sr): add missing @xen-orchestra/defined dep
Introduced by 2412f8b1e
2021-12-06 17:43:47 +01:00
Julien Fontanet
fca6e2f6bf fix(fs/S3#_list): throw if result is truncated 2021-12-06 14:25:55 +01:00
Florent BEAUCHAMP
faa7ba6f24 fix(vhd-lib, fs): use rmtree and not rmTree (#6041) 2021-12-06 14:25:09 +01:00
Julien Fontanet
fc2dbbe3ee feat: release 5.65.1 2021-12-03 16:42:11 +01:00
Julien Fontanet
cc98b81825 fix(CHANGELOG): incorrect section name Packages to release
Introduced in ae24b10da
2021-12-03 15:29:48 +01:00
Julien Fontanet
eb4a7069d4 feat(CHANGELOG): integrate released changes 2021-12-03 15:27:54 +01:00
Julien Fontanet
4f65d9214e feat(xo-server): 5.84.2 2021-12-03 15:23:59 +01:00
Julien Fontanet
4d3c8ee63c feat(@xen-orchestra/proxy): 0.15.4 2021-12-03 15:23:26 +01:00
Julien Fontanet
e41c1b826a feat(@xen-orchestra/backups): 0.16.1 2021-12-03 15:23:02 +01:00
Julien Fontanet
644bb48135 feat(xo-vmdk-to-vhd): 2.0.1 2021-12-03 15:22:23 +01:00
Julien Fontanet
c9809285f6 feat(vhd-lib): 2.0.2 2021-12-03 15:19:07 +01:00
Julien Fontanet
5704949f4d feat(@vates/compose): 2.1.0 2021-12-03 15:17:47 +01:00
Julien Fontanet
a19e00fbc0 fix(backups/_VmBackup#_selectBaseVm): cant read .uuid of undefined srcVdi (#6034)
See xoa-support#4263

The debug message is now clearer and has correct associated data.
2021-12-03 10:22:17 +01:00
Julien Fontanet
470a9b3e27 chore(decorate-with/README): document usage with @vates/compose 2021-12-02 21:37:25 +01:00
Julien Fontanet
ace31dc566 feat(compose): supports attaching extra params 2021-12-02 21:37:25 +01:00
Julien Fontanet
ed252276cb fix(compose): dont mutate passed functions array 2021-12-02 21:37:25 +01:00
Julien Fontanet
26d0ff3c9a fix(vhd-lib/VhdAbstract#stream): explicitly ignore differencing
Because parentLocator entries handling is broken.
2021-12-02 16:48:19 +01:00
Florent Beauchamp
ff806a3ff9 fix(vhd-lib): use parent locator of root disk in VhdSynthetic 2021-12-02 16:48:19 +01:00
Florent Beauchamp
949b17dee6 fix(vhd-lib): fix footer and header accessor in vhd hierarchy 2021-12-02 16:48:19 +01:00
Florent Beauchamp
b1fdc68623 fix(vhd-lib): platformDataSpace in sectors not bytes 2021-12-02 16:48:19 +01:00
Florent BEAUCHAMP
f502facfd1 fix(backup): createAlias to data instead of circular alias (#6029) 2021-12-02 13:56:20 +01:00
Mathieu
bf0a74d709 fix(xo-web/SortedTable): properly disable collapsed actions (#6023) 2021-12-02 13:48:22 +01:00
Julien Fontanet
7296d98313 fix(backups/RemoteAdapter#_createSyntheticStream): only dispose once
See https://xcp-ng.org/forum/topic/5257/problems-building-from-source/20
2021-12-01 13:24:46 +01:00
Julien Fontanet
30568ced49 fix(vhd-lib/VhdSynthetic): fix parent UUID assert
See https://xcp-ng.org/forum/topic/5257/problems-building-from-source
2021-12-01 12:54:00 +01:00
Julien Fontanet
5e1284a9e0 chore: refresh yarn.lock
Introduced by 03d6e3356 due to extra files in my repo…
2021-12-01 12:33:43 +01:00
Julien Fontanet
27d2de872a chore: update to lint-staged@^12.0.3
See https://xcp-ng.org/forum/topic/5257/problems-building-from-source

Fix missing peer dependency `inquirer`.
2021-12-01 12:19:19 +01:00
Julien Fontanet
03d6e3356b chore: refresh yarn.lock 2021-12-01 12:17:52 +01:00
Julien Fontanet
ca8baa62fb fix(xo-vmdk-to-vhd): remove duplicate promise-toolbox dep
See https://xcp-ng.org/forum/topic/5257/problems-building-from-source
2021-12-01 12:17:25 +01:00
Florent BEAUCHAMP
2f607357c6 feat: release 5.65 (#6028) 2021-11-30 17:45:31 +01:00
Julien Fontanet
2de80f7aff feat(xo-server): 5.84.1 2021-11-30 17:04:37 +01:00
Julien Fontanet
386058ed88 chore(CHANGELOG): update vhd-lib version
Introduced by 033fa9e067
2021-11-30 17:04:25 +01:00
Julien Fontanet
033fa9e067 feat(vhd-lib): 2.0.1 2021-11-30 17:00:49 +01:00
Julien Fontanet
e8104420b5 fix(vhd-lib): add missing @vates/async-each dep
Introduced by 56c3d70149
2021-11-30 16:59:01 +01:00
Florent BEAUCHAMP
ae24b10da0 feat: technical release (#6025) 2021-11-30 15:45:36 +01:00
Florent BEAUCHAMP
407b05b643 fix(backups): use the full VHD hierarchy for restore (#6027) 2021-11-30 15:27:54 +01:00
Julien Fontanet
79bf8bc9f6 fix(xo-server): add missing complex-matcher dep
Introduced by 65d6dca52
2021-11-30 09:35:10 +01:00
Julien Fontanet
65d6dca52c feat(xo-server/xo.getAllObjects): add complex-matcher support
See https://xcp-ng.org/forum/topic/5238/xo-cli-command-list-vms-which-ha-snapshots
2021-11-29 19:00:44 +01:00
Julien Fontanet
66eeefbd7b feat(xo-server/vm.set): suspendSr support
See #4163
2021-11-29 14:44:02 +01:00
Mathieu
c10bbcde00 feat(xo-web,xo-server/snapshot): ability to export snapshot memory (#6015)
See xoa-support#4113
2021-11-29 14:08:02 +01:00
Julien Fontanet
fe69928bcc feat(xo-server/pool.set): suspendSr support
See #4163
2021-11-29 10:53:49 +01:00
Florent BEAUCHAMP
3ad8508ea5 feat(vhd-lib/VhdDirectory#_writeChunk): use outputFile (#6019)
This is much faster than manually creating parent directories.
2021-11-29 09:52:47 +01:00
Florent BEAUCHAMP
1f1ae759e0 feat(fs): use keepalive for queries to s3 (#6018) 2021-11-27 10:10:19 +01:00
Mathieu
6e4bfe8f0f feat(xo-web,xo-server): ability to create a cloud config network template (#5979)
Fixes #5931
2021-11-26 10:28:22 +01:00
Rajaa.BARHTAOUI
6276c48768 fix(xo-server/proxies): remove state cache after the proxy update (#6013) 2021-11-26 10:02:30 +01:00
Julien Fontanet
f6005baf1a feat(vhd-cli info): human format some fields 2021-11-25 18:29:25 +01:00
Julien Fontanet
b62fdbc6a6 feat(vhd-lib/Constants): make disk types and platforms maps
BREAKING
2021-11-25 18:02:26 +01:00
Florent BEAUCHAMP
bbd3d31b6a fix(backups/writeVhd): await outputStream (#6017) 2021-11-25 16:34:21 +01:00
Julien Fontanet
481ac92bf8 fix(backups/RemoteAdapter): dont use .dir suffix (#6016)
An alias can point to any kind of VHD, file or directory.

Also, for now, aliases are only used for VHD directories.
2021-11-25 15:31:25 +01:00
Florent BEAUCHAMP
a2f2b50f57 feat(s3): allow self signed certificate (#5961) 2021-11-25 11:32:08 +01:00
Julien Fontanet
bbab9d0f36 fix(xapi/vm/_assertHealthyVdiChain): ignore unused unmanaged VDIs
Fixes xoa-support#4280
2021-11-25 11:28:50 +01:00
Florent BEAUCHAMP
7f8190056d fix(backups/RemoteAdapter): unused import and path in writeVhd (#6014) 2021-11-25 11:26:59 +01:00
Julien Fontanet
8f4737c5f1 chore: upgrade to jsonrpc-websocket-client@0.7.2 2021-11-25 10:33:39 +01:00
Julien Fontanet
c5adba3c97 fix(xo-lib): upgrade to jsonrpc-websocket-client@^0.7.2
Fix default value for `protocols` option.
2021-11-25 10:28:40 +01:00
Julien Fontanet
d91eb9e396 fix(CHANGELOG.unreleased): fix duplicate package
Introduced by d5f21bc27c
2021-11-25 10:27:01 +01:00
Julien Fontanet
1b47102d6c chore: refresh yarn.lock 2021-11-25 00:06:01 +01:00
Julien Fontanet
cd147f3fc5 feat(xo-cli): 0.12.0 2021-11-25 00:03:21 +01:00
Julien Fontanet
c3acdc8cbd feat(xo-cli register): --allowUnauthorized flag
See https://xcp-ng.org/forum/topic/5226/xo-cli-and-using-self-signed-certificates
2021-11-25 00:02:08 +01:00
Julien Fontanet
c3d755dc7b feat(xo-lib): 0.11.0 2021-11-24 23:59:05 +01:00
Julien Fontanet
6f49c48bd4 feat(xo-lib): upgrade to jsonrpc-websocket-client@0.7.1
Uses the secure protocol (`wss`) by default and contains a fix for the `rejectUnauthorized` option.
2021-11-24 23:55:23 +01:00
Julien Fontanet
446f390b3d feat(xo-lib): allow passing opts to JsonRpcWebSocketClient 2021-11-24 23:53:33 +01:00
Julien Fontanet
966091593a chore(vhd-lib): rename build{Footer,Header} to unpack{Footer,Header}
To make it clearer that it unpacks a binary footer/header to a JS object.
2021-11-24 23:34:23 +01:00
Florent Beauchamp
d5f21bc27c feat(backups): handle the choice of the vhd type to use during backup 2021-11-24 21:08:15 +01:00
Florent Beauchamp
8c3b452c0d feat(backup): DeltaBackupWriter can handle any type of vhd 2021-11-24 21:08:15 +01:00
Florent Beauchamp
9cacb92c2c feat(backups): remoteadapter can delete any type of vhd 2021-11-24 21:08:15 +01:00
Florent Beauchamp
7a1b56db87 feat(backups): checkvhd can handle all vhd types 2021-11-24 21:08:15 +01:00
Florent Beauchamp
56c3d70149 feat(vhd-lib): generate a vhd directory from a vhd stream 2021-11-24 21:08:15 +01:00
Florent Beauchamp
1ec8fcc73f feat(vhd-lib): extract computeSectorsPerBlock, computeBlockBitmapSize and computeSectorOfBitmap to utils 2021-11-24 21:08:15 +01:00
Rajaa.BARHTAOUI
060b16c5ca feat(xo-web/backup/logs): identify XAPI errors (#6001)
See xoa-support#3977
2021-11-24 15:25:27 +01:00
Yannick Achy
0acc52e3e9 fix(docs): move NOBAK from Delta to general concepts (#6012)
Co-authored-by: yannick Achy <yannick.achy@vates.fr>
2021-11-24 09:10:35 +01:00
Florent Beauchamp
a9c2c9b6ba refactor(vhd-lib): move createSyntheticStream to backup, move stream() tests to vhdabstracts 2021-11-23 15:56:25 +01:00
Florent Beauchamp
5b2a6bc56b chore(vhd-lib/createSyntheticStream): based on VhdSynthetic#stream() 2021-11-23 15:56:25 +01:00
Florent Beauchamp
19c8693b62 fix(vhd-lib/VhdSynthetic#readHeaderAndFooter()): root VHD can be dynamic, and check chaining 2021-11-23 15:56:25 +01:00
Florent Beauchamp
c4720e1215 fix(vhd-lib/VhdAbstract#stream()): stream.length should contain blocks 2021-11-23 15:56:25 +01:00
Florent BEAUCHAMP
b6d4c8044c feat(backups/cleanVm) : support VHD dirs and aliases (#6000) 2021-11-22 17:14:29 +01:00
Florent BEAUCHAMP
57dd6ebfba chore(vhd-lib): use openVhd for chain and checkChain (#5997) 2021-11-22 15:50:30 +01:00
Julien Fontanet
c75569f278 feat(proxy/authentication.setToken): API method to change auth token 2021-11-18 18:14:19 +01:00
Julien Fontanet
a8757f9074 chore(proxy/authentication): use private field for auth token
More idiomatic and potentially more secure.
2021-11-18 18:02:26 +01:00
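The idea can be sketched like this (class and method names are hypothetical, not the actual proxy code): with a native private field the token can no longer be read, or leaked via serialization, from outside the class:

```javascript
class Authentication {
  #token // inaccessible outside the class body

  constructor(token) {
    this.#token = token
  }

  validate(token) {
    return token === this.#token
  }
}
```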
Julien Fontanet
f5c3bf72e5 fix(mixins/Config): dont create multiple stop listeners 2021-11-18 16:41:46 +01:00
Florent BEAUCHAMP
d7ee13f98d feat(vhd-lib/merge): use Vhd* classes (#5950) 2021-11-18 11:30:04 +01:00
Julien Fontanet
1f47aa491d fix(xo-server/pool.mergeInto): dont export masterPassword on error
Fixes xoa-support#4265
2021-11-17 22:42:00 +01:00
Julien Fontanet
ffe430758e feat(async-each): run async fn for each item in (async) iterable 2021-11-17 22:27:43 +01:00
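A minimal sketch of what such a helper does (an illustration with a simple concurrency limit, not the actual `@vates/async-each` API):

```javascript
// Run an async fn for each item of a sync or async iterable, keeping at
// most `concurrency` calls in flight at once.
async function asyncEach(iterable, fn, { concurrency = 1 } = {}) {
  const running = new Set()
  for await (const item of iterable) {
    const p = Promise.resolve(fn(item)).finally(() => running.delete(p))
    running.add(p)
    if (running.size >= concurrency) {
      await Promise.race(running)
    }
  }
  await Promise.all(running)
}
```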
Florent BEAUCHAMP
a4bb453401 feat(vhd-lib): add VhdAbstract#{stream,rawContent}() methods (#5992) 2021-11-17 09:16:34 +01:00
Florent BEAUCHAMP
5c8ebce9eb feat(vhd-lib): add vhd synthetic class (#5990) 2021-11-17 09:15:13 +01:00
Julien Fontanet
8b0cee5e6f feat(@xen-orchestra/backups-cli): 0.6.1 2021-11-16 14:26:50 +01:00
Julien Fontanet
e5f4f825b6 fix(xapi): group retry options together
- it does not make sense to only set the delay or the number of tries without the other
- it allows using any option either as a default or in the config without worrying about incompatibilities (e.g. `tries` & `retries`)
2021-11-16 14:26:11 +01:00
Julien Fontanet
b179dc1d56 chore: update dev deps 2021-11-15 23:43:20 +01:00
Julien Fontanet
7281c9505d fix(CHANGELOG.unreleased): new release backups-cli
`vhd-cli@^1` compat was broken by 7ef89d504
2021-11-15 14:46:51 +01:00
Julien Fontanet
4db82f447d fix(xo-web/about): update link to create issue
Related to 71b8e625f

See #5977
2021-11-15 14:22:46 +01:00
Julien Fontanet
834da3d2b4 fix(vhd-lib/VhdAbstract): remove duplicate field declarations
Introduced in c6c3a33dc
2021-11-10 16:04:59 +01:00
Julien Fontanet
c6c3a33dcc feat(vhd-cli/VhdAbstract): make derived values getters
It makes them read-only, makes sure they are always up-to-date with the header, and avoids duplicating their logic.
2021-11-10 15:45:42 +01:00
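The idea can be sketched like this (simplified field names, not the actual `VhdAbstract` code): derived values are recomputed from the header on every access, so they can never go stale and cannot be assigned to:

```javascript
const SECTOR_SIZE = 512

class VhdView {
  constructor(header) {
    this.header = header
  }

  // read-only, always in sync with the header
  get sectorsPerBlock() {
    return this.header.blockSize / SECTOR_SIZE
  }

  get fullBlockSize() {
    // one bit of bitmap per sector of data, rounded up to whole sectors
    const bitmapSectors = Math.ceil(this.sectorsPerBlock / 8 / SECTOR_SIZE)
    return bitmapSectors * SECTOR_SIZE + this.header.blockSize
  }
}
```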
Julien Fontanet
fb720d9b05 fix(docs/xoa): use wget instead of curl
The version of curl installed on XCP-ng 8.2.0 (curl 7.29.0) does not support any of the encryption algorithms available on https://xoa.io.
2021-11-09 19:55:49 +01:00
Florent Beauchamp
547d318e55 fix(vhd-lib): write parent locator before the blocks 2021-11-08 18:03:46 +01:00
Florent Beauchamp
cb5a2c18f2 fix(vhd-lib): ensure block allocation table is written after modifying it in tests 2021-11-08 18:03:46 +01:00
Florent Beauchamp
e01ca3ad07 refactor(vhd-lib): use method from test/utils when possible 2021-11-08 18:03:46 +01:00
Florent Beauchamp
314d193f35 fix(vhd-lib): set platform code when setting unique parent locator 2021-11-08 18:03:46 +01:00
Florent Beauchamp
e0200bb730 refactor(vhd-lib): split tests 2021-11-08 18:03:46 +01:00
Florent BEAUCHAMP
2a3f4a6f97 feat(vhd-lib): handle file alias (#5962) 2021-11-08 14:46:00 +01:00
Nicolas Raynaud
88628bbdc0 chore(xo-vmdk-to-vhd): fix tests (#5981)
Introduced by fdf52a3d59

Follow-up of b00750bfa3
2021-11-07 15:38:45 +01:00
Olivier Lambert
cb7b695a72 feat(docs/netbox): add how to add a custom field in Netbox 3 (#5984) 2021-11-07 13:44:02 +01:00
Julien Fontanet
ae549e2a88 fix(jest): dont use fake timers by default
Introduced by 844efb88d

The upgrade to Jest 27 (15630aee5) revealed this issue.
2021-11-05 13:24:51 +01:00
Julien Fontanet
7f9a970714 fix(log/USAGE): document filter array
Introduced by d3cb31f1a
2021-11-04 10:45:58 +01:00
Julien Fontanet
7661d3372d fix(xen-api/USAGE): add httpProxy option
Introduced by 2412f8b1e
2021-11-04 10:38:22 +01:00
Julien Fontanet
dbb4f34015 chore(xapi/VDI_destroy): decorate with retry.wrap()
- more efficient than creating a function at each call
- better logging
2021-11-03 23:10:58 +01:00
Julien Fontanet
8f15a4c29d feat(ISSUE_TEMPLATE/bug_report): add hypervisor version 2021-11-03 16:55:17 +01:00
Florent BEAUCHAMP
1b0a885ac3 feat(vhd-cli): use any remote for copy and compare (#5927) 2021-11-03 15:45:52 +01:00
Nicolas Raynaud
f7195bad88 fix(xo-server): fix ova multipart upload (#5976)
Introduced by 0451aaeb5c
2021-11-02 17:43:45 +01:00
Julien Fontanet
15630aee5e chore: update dev deps 2021-11-02 13:43:49 +01:00
Florent BEAUCHAMP
a950a1fe24 refactor(vhd-lib): centralize test methods (#5968) 2021-11-02 09:53:30 +01:00
Julien Fontanet
71b8e625fe chore: update issue templates (#5974) 2021-10-30 15:06:51 +02:00
Julien Fontanet
e7391675fb feat(@xen-orchestra/proxy): 0.15.2 2021-10-29 17:41:02 +02:00
Julien Fontanet
84fdd3fe4b fix(proxy/api/ndJsonStream): send header for empty iterables
Introduced by ed987e161
2021-10-29 17:05:05 +02:00
Julien Fontanet
4dc4b635f2 feat(@xen-orchestra/proxy): 0.15.1 2021-10-29 15:50:42 +02:00
Julien Fontanet
ee0c6d7f8b feat(xen-api): 0.35.1 2021-10-29 15:50:05 +02:00
Julien Fontanet
a637af395d fix(xen-api): add missing dep proxy-agent
Introduced by 2412f8b1e
2021-10-29 15:40:25 +02:00
Julien Fontanet
59fb612315 feat(@xen-orchestra/proxy): 0.15.0 2021-10-29 15:20:09 +02:00
Mathieu
59b21c7a3e feat: release 5.64 (#5971) 2021-10-29 11:40:16 +02:00
Mathieu
40f881c2ac feat: technical release (#5970) 2021-10-28 16:30:00 +02:00
Rajaa.BARHTAOUI
1d069683ca feat(xo-web/host): manage evacuation failure during host shutdown (#5966) 2021-10-28 14:23:43 +02:00
Julien Fontanet
de1d942b90 fix(xo-server/listPoolsMatchingCriteria): check{Sr,Pool}Name is not a function
Fixes xoa-support#4193

Introduced by cd8c618f0
2021-10-28 13:29:32 +02:00
Rajaa.BARHTAOUI
fc73971d63 feat(xo-server,xo-web/menu): proxy upgrade notification (#5930)
See xoa-support#4105
2021-10-28 10:52:23 +02:00
Rajaa.BARHTAOUI
eb238bf107 feat(xo-web/pool/advanced, xen-api/{get,put}Resource): introduce backup network (#5957) 2021-10-28 10:21:48 +02:00
Florent BEAUCHAMP
2412f8b1e2 feat(xen-api): add HTTP proxy support (#5958)
See #5436

Using an IP address as the HTTPS proxy shows this warning: `DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066`

The corresponding issue is TooTallNate/node-https-proxy-agent#127
2021-10-27 17:30:41 +02:00
Pierre Donias
0c87dee31c fix(xo-web/xoa): handle string expiration dates (#5967)
See xoa-support#4114
See xoa-support#4192

www-xo may return a string instead of a number in some rare cases
2021-10-27 16:59:59 +02:00
Mathieu
215146f663 feat(xo-web/vm/export): allow to copy the export URL (#5948) 2021-10-27 16:58:09 +02:00
Mathieu
9fe1069df0 feat(xo-web/host): format logs (#5943)
See xoa-support#4100
2021-10-27 15:41:29 +02:00
Julien Fontanet
d2c5b52bf1 feat(backups): enable merge worker by default
Related to 47f9da216

It can still be disabled in case of problems:

```toml
[backups]
disableMergeWorker = true
```
2021-10-27 09:29:50 +02:00
Pierre Donias
12153a414d fix(xo-server/{clone,copy}Vm): force is_a_template to false on the new VM (#5955)
See xoa-support#4137
2021-10-26 16:53:09 +02:00
Pierre Donias
5ec1092a83 fix(xo-server-netbox/test): perform test with a 50-character name (#5963)
See https://xcp-ng.org/forum/topic/5111
See https://netbox.readthedocs.io/en/stable/release-notes/version-2.10/#other-changes > #5011

Versions of Netbox <2.10 only allow cluster type names of length <= 50.
2021-10-26 15:55:11 +02:00
Julien Fontanet
284169a2f2 chore(vhd-lib/VhdAbstract): format with Prettier
Introduced by 7ef89d504
2021-10-25 16:12:49 +02:00
Julien Fontanet
838bfbb75f fix(backups/cleanVm): wait for merge to finish
Introduced by 9c83e70a2
2021-10-25 09:14:38 +02:00
Julien Fontanet
a448da77c9 fix(backups/cleanVm): mergeLimiter support
Introduced by 9c83e70a2
2021-10-25 09:13:58 +02:00
Rajaa.BARHTAOUI
268fb22d5f feat(xo-web/host/advanced): add button to disable/enable host (#5952) 2021-10-20 16:39:54 +02:00
Julien Fontanet
07cc4c853d fix(vhd-lib): fix block table properties & accessors
Fixes #5956

Introduced by 7ef89d504
2021-10-18 23:13:55 +02:00
Florent BEAUCHAMP
c62d727cbe feat(vhd-cli compare): compare metadata and content of two VHDs (#5920) 2021-10-18 16:21:40 +02:00
Florent BEAUCHAMP
7ef89d5043 feat(vhd-{cli,lib}): implement chunking and copy command (#5919) 2021-10-18 14:56:58 +02:00
Mathieu
9ceba1d6e8 feat(xo-web/jobs): add button to copy jobs IDs (#5951)
Useful to create a `job.runSequence` job. Follow-up of #5944.
2021-10-15 14:25:02 +02:00
Pierre Donias
e2e453985f fix(xo-web/job): properly handle array arguments (#5944)
See https://xcp-ng.org/forum/topic/5010

When creating/editing a job, properties of type `array` must not go through the
cross product builder, they must be saved as arrays.
2021-10-15 10:42:33 +02:00
Florent BEAUCHAMP
84dccd800f feat(backups): clean up other schedules snapshots (#5949)
Fixes xoa-support#4129
2021-10-14 14:44:40 +02:00
Julien Fontanet
f9734d202b chore(backups/_VmBackup): remove unused import 2021-10-14 13:51:29 +02:00
Julien Fontanet
d3cb0f4672 feat(xo-server): 5.82.4 2021-10-14 09:47:39 +02:00
Julien Fontanet
c198bbb6fa feat(@xen-orchestra/backups): 0.14.0 2021-10-14 09:45:20 +02:00
Julien Fontanet
c965a89509 feat(xo-server-netbox): 0.3.2 2021-10-14 09:43:38 +02:00
Julien Fontanet
47f9da2160 feat(backups/MixinBackupWriter): use merge worker if not disabled 2021-10-13 16:26:12 +02:00
Julien Fontanet
348a75adb4 feat(backups): merge worker implementation
This CLI must be run directly in the directory where the remote is mounted.

It's only compatible with local remotes at the moment.

To start the worker:

```js
const MergeWorker = require('@xen-orchestra/backups/merge-worker/index.js')

await MergeWorker.run(remotePath)
```

To register a VM backup dir to be cleaned (thus merging its unused VHDs), create a file in the queue directory containing the VM UUID:

```
> echo cc700fe2-724e-44a5-8663-5f8f88e05e34 > .queue/clean-vm/20211013T142401Z
```

The queue directory is available as `MergeWorker.CLEAN_VM_QUEUE`.
2021-10-13 16:25:21 +02:00
Julien Fontanet
332218a7f7 feat(backups): move merge responsability to cleanVm 2021-10-13 16:10:19 +02:00
Julien Fontanet
6d7a26d2b9 chore(backups/MixinBackupWriter): use private fields 2021-10-13 10:02:57 +02:00
Pierre Donias
d19a748f0c fix(xo-server-netbox): support older versions of Netbox (#5946)
Fixes #5898
See https://netbox.readthedocs.io/en/stable/release-notes/version-2.7/#api-choice-fields-now-use-string-values-3569
2021-10-13 09:28:46 +02:00
Julien Fontanet
9c83e70a28 feat(backups/RemoteAdapter#cleanVm): configurable merge limiter 2021-10-12 09:17:42 +02:00
Rajaa.BARHTAOUI
abcabb736b feat(xo-web/tasks): filter out short tasks with a default filter (#5941)
See xoa-support#4096
2021-10-08 16:42:16 +02:00
Julien Fontanet
0451aaeb5c fix(xo-server/vm.import): restore non-multipart upload (#5936)
See xoa-support#4085

Introduced by fdf52a3d5

Required by `xo-cli`.
2021-10-08 15:24:21 +02:00
Julien Fontanet
880c45830c fix(xo-cli): http-request-plus@0.12 no longer has a default export
Introduced by 62e5ab699
2021-10-07 17:11:54 +02:00
Julien Fontanet
5fa16d2344 chore: format with Prettier 2021-10-07 14:40:41 +02:00
Julien Fontanet
9e50b5dd83 feat(proxy): logging is now dynamically configurable
It was done for xo-server in f20d5cd8d
2021-10-06 16:54:57 +02:00
Julien Fontanet
29d8753574 chore(backups/VmBackup#_selectBaseVm): add debug logs 2021-10-06 16:48:42 +02:00
Pierre Donias
f93e1e1695 feat: release 5.63.0 (#5925) 2021-09-30 15:25:34 +02:00
Pierre Donias
0eaac8fd7a feat: technical release (#5924) 2021-09-30 11:17:45 +02:00
Julien Fontanet
06c71154b9 fix(xen-api/_setHostAddressInUrl): pass params in array
Introduced in fb21e4d58
2021-09-30 10:32:12 +02:00
Julien Fontanet
0e8f314dd6 fix(xo-web/new-vm): don't send default networkConfig (#5923)
Fixes #5918
2021-09-30 09:37:12 +02:00
Florent BEAUCHAMP
f53ec8968b feat(xo-web/SortedTable): move filter and pagination to top (#5914) 2021-09-29 17:35:46 +02:00
Mathieu
919d118f21 feat(xo-web/health): filter duplicated MAC addresses by running VMs (#5917)
See xoa-support#4054
2021-09-24 17:25:42 +02:00
Mathieu
216b759df1 feat(xo-web/health): hide CR VMs duplicated MAC addresses (#5916)
See xoa-support#4054
2021-09-24 15:52:34 +02:00
Julien Fontanet
01450db71e fix(proxy/backup.run): clear error on license issue
Fixes https://xcp-ng.org/forum/topic/4901/backups-silently-fail-with-invalid-xo-proxy-license
2021-09-24 13:15:32 +02:00
Julien Fontanet
ed987e1610 fix(proxy/api/ndJsonStream): send JSON-RPC error if whole iteration failed
See https://xcp-ng.org/forum/topic/4901/backups-silently-fail-with-invalid-xo-proxy-license
2021-09-24 13:15:24 +02:00
Florent BEAUCHAMP
2773591e1f feat(xo-web): add go back to ActionButton and use it when saving a backup (#5913)
See xoa-support#2149
2021-09-24 11:38:37 +02:00
Pierre Donias
a995276d1e fix(xo-server-netbox): better handle missing uuid custom field (#5909)
Fixes #5905
See #5806
See #5834
See xoa-support#3812

- Check if `uuid` custom field has correctly been configured before synchronizing
- Delete VMs that don't have a UUID before synchronizing VMs to avoid conflicts
2021-09-22 18:08:09 +02:00
Nicolas Raynaud
ffb6a8fa3f feat(VHD import): ensure uploaded file is a VHD (#5906) 2021-09-21 16:25:50 +02:00
Pierre Donias
0966efb7f2 fix(xo-server-netbox): handle nested prefixes (#5908)
See xoa-support#4018

When assigning prefixes to VMs, always pick the smallest prefix that the IP
matches
2021-09-21 09:55:47 +02:00
Julien Fontanet
4a0a708092 feat: release 5.62.1 2021-09-17 10:04:36 +02:00
Julien Fontanet
6bf3b6f3e0 feat(xo-server): 5.82.2 2021-09-17 09:24:32 +02:00
Julien Fontanet
8f197fe266 feat(@xen-orchestra/proxy): 0.14.6 2021-09-17 09:24:05 +02:00
Julien Fontanet
e1a3f680f2 feat(xen-api): 0.34.2 2021-09-17 09:23:28 +02:00
Julien Fontanet
e89cca7e90 feat: technical release 2021-09-17 09:19:26 +02:00
Nicolas Raynaud
5bb2767d62 fix(xo-server/{disk,vm}.import): fix import of very small VMDK files (#5903) 2021-09-17 09:17:34 +02:00
Julien Fontanet
95f029e0e7 fix(xen-api/putResource): fix non-stream use case
Introduced by ea10df8a92
2021-09-14 17:42:20 +02:00
Julien Fontanet
fb21e4d585 chore(xen-api/_setHostAddressInUrl): use _roCall to fetch network ref
Introduced by a84fac1b6
2021-09-14 17:42:20 +02:00
Julien Fontanet
633805cec9 fix(xen-api/_setHostAddressInUrl): correctly fetch network ref
Introduced by a84fac1b6
2021-09-14 17:42:20 +02:00
Marc Ungeschikts
b8801d7d2a "rentention" instead of "retention" (#5904) 2021-09-14 16:30:10 +02:00
Julien Fontanet
a84fac1b6a fix(xen-api/{get,put}Resource): use provided address when possible
Fixes #5896

Introduced by ea10df8a92

Don't use the address provided by XAPI when connecting to the pool master and without a default migration network as it will unnecessarily break NATted hosts.
2021-09-14 13:52:34 +02:00
Julien Fontanet
a9de4ceb30 chore(xo-server/config.toml): explicit auth delay is per user 2021-09-12 10:55:31 +02:00
Julien Fontanet
827b55d60c fix(xo-server/config.toml): typo 2021-09-12 10:54:49 +02:00
Julien Fontanet
0e1fe76b46 chore: update dev deps 2021-09-09 13:48:15 +02:00
Julien Fontanet
097c9e8e12 feat(@xen-orchestra/proxy): 0.14.5 2021-09-07 19:02:57 +02:00
Pierre Donias
266356cb20 fix(xo-server/xapi-objects-to-xo/VM/addresses): handle newline-delimited IPs (#5897)
See xoa-support#3812
See #5860

This is related to a505cd9 which handled space delimited IPs, but apparently,
IPs can also be newline delimited depending on which Xen tools version is used.
2021-09-03 12:30:47 +02:00
Julien Fontanet
6dba39a804 fix(xo-server/vm.set): fix converting to BIOS (#5895)
Fixes xoa-support#3991
2021-09-02 14:11:39 +02:00
Olivier Lambert
3ddafa7aca fix(docs/xoa): clarify first console connection (#5894) 2021-09-01 12:51:33 +02:00
Julien Fontanet
9d8e232684 chore(xen-api): dont import promise-toolbox/retry twice
Introduced by ea10df8a9
2021-08-31 12:28:23 +02:00
Anthony Stivers
bf83c269c4 fix(xo-web/user): SSH key formatting (#5892)
Fixes #5891

Allow SSH key to be broken anywhere to avoid breaking page formatting.
2021-08-31 11:42:25 +02:00
Pierre Donias
54e47c98cc feat: release 5.62.0 (#5893) 2021-08-31 10:59:07 +02:00
Pierre Donias
118f2594ea feat: technical release (#5889) 2021-08-30 15:40:26 +02:00
Julien Fontanet
ab4fcd6ac4 fix(xen-api/{get,put}Resource): correctly fetch host
Introduced by ea10df8a9
2021-08-30 15:23:42 +02:00
Pierre Donias
ca6f345429 feat: technical release (#5888) 2021-08-30 12:08:10 +02:00
Pierre Donias
79b8e1b4e4 fix(xo-server-auth-ldap): ensure-array dependency (#5887) 2021-08-30 12:01:06 +02:00
Pierre Donias
cafa1ffa14 feat: technical release (#5886) 2021-08-30 11:01:14 +02:00
Mathieu
ea10df8a92 feat(xen-api/{get,put}Resource): use default migration network if available (#5883) 2021-08-30 00:14:31 +02:00
Julien Fontanet
85abc42100 chore(xo-web): use sass instead of node-sass
Fixes build with Node 16
2021-08-27 14:22:00 +02:00
Mathieu
4747eb4386 feat(host): display warning for eol host version (#5847)
Fixes #5840
2021-08-24 14:43:01 +02:00
tisteagle
ad9cc900b8 feat(docs/updater): add nodejs.org to required domains (#5881) 2021-08-22 16:33:16 +02:00
Pierre Donias
6cd93a7bb0 feat(xo-server-netbox): add primary IPs to VMs (#5879)
See xoa-support#3812
See #5633
2021-08-20 12:47:29 +02:00
Julien Fontanet
3338a02afb feat(fs/getSyncedHandler): returns disposable to an already synced remote
Also, no need to forget it.
2021-08-20 10:14:39 +02:00
Julien Fontanet
31cfe82224 chore: update to index-modules@0.4.3
Fixes #5877

Introduced by 030477454

This new version fixes the `--auto` mode used by `xo-web`.
2021-08-18 10:08:10 +02:00
Pierre Donias
70a191336b fix(CHANGELOG): missing PR link (#5876) 2021-08-17 10:13:22 +02:00
Julien Fontanet
030477454c chore: update deps 2021-08-17 09:59:42 +02:00
Pierre Donias
2a078d1572 fix(xo-server/host): clearHost argument needs to have a $pool property (#5875)
See xoa-support#3118
Introduced by b2a56c047c
2021-08-17 09:51:36 +02:00
Julien Fontanet
3c1f96bc69 chore: update dev deps 2021-08-16 14:10:18 +02:00
Mathieu
7d30bdc148 fix(xo-web/TabButtonLink): should not be empty on small screens (#5874) 2021-08-16 09:45:44 +02:00
Mathieu
5d42961761 feat(xo-server/network.create): allow pool admins (#5873) 2021-08-13 14:22:58 +02:00
Julien Fontanet
f20d5cd8d3 feat(xo-server): logging is now dynamically configurable 2021-08-12 17:30:56 +02:00
Julien Fontanet
f5111c0f41 fix(mixins/Config#watch): use deep equality to check changes
Because objects (and arrays) will always be new ones and thus different.
2021-08-12 17:29:57 +02:00
Pierre Donias
f5473236d0 fix(xo-web): dont warn when restoring XO config (#5872) 2021-08-12 09:52:45 +02:00
Julien Fontanet
d3cb31f1a7 feat(log/configure): filter can be an array 2021-08-11 18:09:42 +02:00
Pierre Donias
d5f5cdd27a fix(xo-server-auth-ldap): create logger inside plugin (#5864)
The plugin was wrongly expecting a logger instance to be passed on instantiation
2021-08-11 11:21:22 +02:00
Pierre Donias
656dc8fefc fix(xo-server-ldap): handle groups with no members (#5862)
See xoa-support#3906
2021-08-10 14:12:39 +02:00
Pierre Donias
a505cd9567 fix(xo-server/xapi-objects-to-xo/VM/addresses): handle old tools alias properties (#5860)
See https://xcp-ng.org/forum/topic/4810
See #5805
2021-08-10 10:22:13 +02:00
Pierre Donias
f2a860b01a feat: release 5.61.0 (#5867) 2021-07-30 16:48:13 +02:00
Pierre Donias
1a5b93de9c feat: technical release (#5866) 2021-07-30 16:31:16 +02:00
Pierre Donias
0f165b33a6 feat: technical release (#5865) 2021-07-30 15:21:49 +02:00
Pierre Donias
4f53555f09 Revert "chore(backups/DeltaReplication): unify base VM detection" (#5861)
This reverts commit 9139c5e9d6.
See https://xcp-ng.org/forum/topic/4817
2021-07-30 14:55:00 +02:00
Pierre Donias
175be44823 feat(xo-web/VM/advanced): handle pv_in_pvh virtualization mode (#5857)
And handle unknown virtualization modes by showing the raw string
2021-07-28 18:41:22 +02:00
Julien Fontanet
20a6428290 fix(xo-server/xen-servers): fix lodash/pick import
Introduced by 4b4bea5f3

Fixes #5858
2021-07-28 08:48:17 +02:00
Julien Fontanet
4b4bea5f3b chore(xo-server): log ids on xapiObjectToXo errors 2021-07-27 15:05:00 +02:00
Pierre Donias
c82f860334 feat: technical release (#5856) 2021-07-27 11:08:53 +02:00
Pierre Donias
b2a56c047c feat(xo-server/clearHost): use pool's default migration network (#5851)
Fixes #5802
See xoa-support#3118
2021-07-27 10:44:30 +02:00
Julien Fontanet
bc6afc3933 fix(xo-server): don't fail on invalid pool pattern
Fixes #5849
2021-07-27 05:13:45 +02:00
Pierre Donias
280e4b65c3 feat(xo-web/VM/{shutdown,reboot}): ask user if they want to force when no tools (#5855)
Fixes #5838
2021-07-26 17:22:31 +02:00
Julien Fontanet
c6f22f4d75 fix(backups): block start_on operation on replicated VMs (#5852) 2021-07-26 15:01:11 +02:00
Pierre Donias
4bed8eb86f feat(xo-server-netbox): optionally allow self-signed certificates (#5850)
See https://xcp-ng.org/forum/topic/4786/netbox-plugin-does-not-allow-self-signed-certificate
2021-07-23 09:53:02 +02:00
Julien Fontanet
c482f18572 chore(xo-web/vm/tab-advanced): shutdown is a valid operation 2021-07-23 09:49:32 +02:00
Mathieu
d7668acd9b feat(xo-web/sr/tab-disks): display the active vdi of the basecopy (#5826)
See xoa-support#3446
2021-07-21 09:32:24 +02:00
Julien Fontanet
05b978c568 chore: update dev deps 2021-07-20 10:20:52 +02:00
Julien Fontanet
62e5ab6990 chore: update to http-request-plus@0.12.0 2021-07-20 10:03:16 +02:00
Mathieu
12216f1463 feat(xo-web/vm): rescan ISO SRs available in console view (#5841)
See xoa-support#3896
See xoa-support#3888
See xoa-support#3909
Continuity of d7940292d0
Introduced by f3501acb64
2021-07-16 17:02:10 +02:00
Pierre Donias
cbfa13a8b4 docs(netbox): make it clear that the uuid custom field needs to be lower case (#5843)
Fixes #5831
2021-07-15 09:45:05 +02:00
Pierre Donias
03ec0cab1e feat(xo-server-netbox): add data field to Netbox API errors (#5842)
Fixes #5834
2021-07-13 17:22:51 +02:00
mathieuRA
d7940292d0 feat(xo-web/vm): rescan ISO SRs available in console view 2021-07-12 11:55:02 +02:00
Julien Fontanet
9139c5e9d6 chore(backups/DeltaReplication): unify base VM detection
Might help avoid the *unable to find base VM* error.
2021-07-09 15:14:37 +02:00
Julien Fontanet
65e62018e6 chore(backups/importDeltaVm): dont explicitly wait for export tasks
Might be related to stuck import issues.
2021-07-08 09:56:06 +02:00
Julien Fontanet
138a3673ce fix(xo-server/importConfig): fix this._app.clean is not a function
Fixes #5836
2021-07-05 17:57:47 +02:00
Pierre Donias
096f443b56 feat: release 5.60.0 (#5833) 2021-06-30 15:49:52 +02:00
Pierre Donias
b37f30393d feat: technical release (#5832) 2021-06-30 11:07:14 +02:00
Ronan Abhamon
f095a05c42 feat(docs/load_balancing): add doc about VM anti-affinity mode (#5830)
* feat(docs/load_balancing): add doc about VM anti-affinity mode

Signed-off-by: Ronan Abhamon <ronan.abhamon@vates.fr>

* grammar edits for anti-affinity

Co-authored-by: Jon Sands <fohdeesha@gmail.com>
2021-06-30 10:37:25 +02:00
Pierre Donias
3d15a73f1b feat(xo-web/vm/new disk): generate random name (#5828) 2021-06-28 11:26:09 +02:00
Julien Fontanet
bbd571e311 chore(xo-web/vm/tab-disks.js): format with Prettier 2021-06-28 11:25:31 +02:00
Pierre Donias
a7c554f033 feat(xo-web/snapshots): identify VM's parent snapshot (#5824)
See xoa-support#3775
2021-06-25 12:07:50 +02:00
Pierre Donias
25b4532ce3 feat: technical release (#5825) 2021-06-25 11:13:23 +02:00
Pierre Donias
a304f50a6b fix(xo-server-netbox): compare compact notations of IPv6 (#5822)
XAPI doesn't use IPv6 compact notation while Netbox automatically compacts them
on creation. Comparing those 2 notations makes XO believe that the IPs in
Netbox should be deleted and new ones should be created, even though they're
actually the same IPs. This change compacts the IPs before comparing them.
2021-06-24 17:00:07 +02:00
Pierre Donias
e75f476965 fix(xo-server-netbox): filter out devices' interfaces (#5821)
See xoa-support#3812

In Netbox, a device interface and a VM interface can have the same ID `x`,
which means that listing IPs with `assigned_object_id=x` won't only get the
VM's interface's IPs but also the device's interface's IPs. This made XO
believe that those extra IPs shouldn't exist and delete them. This change
makes sure to only grab VM interface IPs.
2021-06-23 15:27:11 +02:00
Julien Fontanet
1c31460d27 fix(xo-server/disconnectXenServer): delete pool association
This should prevent the *server is already connected* issue after reinstalling host.
2021-06-23 10:11:12 +02:00
Julien Fontanet
19db468bf0 fix(CHANGELOG.unreleased): vhd-lib
Introduced by aa4f1b834
2021-06-23 09:26:23 +02:00
Julien Fontanet
5fe05578c4 fix(xo-server/backupNg.importVmBackup): returns id of imported VM
Fixes #5820

Introduced by d9ce1b3a9.
2021-06-22 18:26:01 +02:00
Julien Fontanet
956f5a56cf feat(backups/RemoteAdapter#cleanVm): fix backup size if necessary
Fixes #5810
Fixes #5815
2021-06-22 18:16:52 +02:00
Julien Fontanet
a3f589d740 feat(@xen-orchestra/proxy): 0.14.3 2021-06-21 14:36:55 +02:00
Julien Fontanet
beef09bb6d feat(@xen-orchestra/backups): 0.11.2 2021-06-21 14:30:32 +02:00
Julien Fontanet
ff0a246c28 feat(proxy/api/ndJsonStream): handle iterable error 2021-06-21 14:26:55 +02:00
Julien Fontanet
f1459a1a52 fix(backups/VmBackup#_callWriters): writers.delete
Introduced by 56e4847b6
2021-06-21 14:26:55 +02:00
Mathieu
f3501acb64 feat(xo-web/vm/tab-disks): rescan ISO SRs (#5814)
See https://xcp-ng.org/forum/topic/4588/add-rescan-iso-sr-from-vm-menu
2021-06-18 16:15:33 +02:00
Ronan Abhamon
2238c98e95 feat(load-balancer): log vm and host names when a VM is migrated + category (density, performance, ...) (#5808)
Co-authored-by: Julien Fontanet <julien.fontanet@isonoe.net>
2021-06-18 09:49:33 +02:00
Julien Fontanet
9658d43f1f feat(xo-server-load-balancer): use @xen-orchestra/log 2021-06-18 09:44:37 +02:00
Julien Fontanet
1748a0c3e5 chore(xen-api): remove unused inject-events 2021-06-17 16:41:04 +02:00
Julien Fontanet
4463d81758 feat(@xen-orchestra/proxy): 0.14.2 2021-06-17 15:58:00 +02:00
Julien Fontanet
74221a4ab5 feat(@xen-orchestra/backups): 0.11.1 2021-06-17 15:57:10 +02:00
Julien Fontanet
0d998ed342 feat(@xen-orchestra/xapi): 0.6.4 2021-06-17 15:56:21 +02:00
Julien Fontanet
7d5a01756e feat(xen-api): 0.33.1 2021-06-17 15:55:20 +02:00
Pierre Donias
d66313406b fix(xo-web/new-vm): show correct amount of memory in summary (#5817) 2021-06-17 14:36:44 +02:00
Pierre Donias
d96a267191 docs(web-hooks): add "wait for response" and backup related doc (#5819)
See #5420
See #5360
2021-06-17 14:34:03 +02:00
Julien Fontanet
5467583bb3 fix(backups/_VmBackup#_callWriters): dont run single writer twice
Introduced by 56e4847b6

See https://xcp-ng.org/forum/topic/4659/backup-failed
2021-06-17 14:14:48 +02:00
Rajaa.BARHTAOUI
9a8138d07b fix(xo-server-perf-alert): smart mode: select only running VMs and hosts (#5811) 2021-06-17 11:56:04 +02:00
Pierre Donias
36c290ffea feat(xo-web/jobs): add host.emergencyShutdownHost to the methods list (#5818) 2021-06-17 11:55:51 +02:00
Julien Fontanet
3413bf9f64 fix(xen-api/{get,put}Resource): distinguish cancelation and connection issue (2)
Follow up of 057a1cbab
2021-06-17 10:12:09 +02:00
Julien Fontanet
3c352a3545 fix(backups/_VmBackup#_callWriters): missing writer var
Fixes #5816
2021-06-17 08:53:38 +02:00
Julien Fontanet
56e4847b6b feat(backups/_VmBackup#_callWriters): dont use generic error when only one writer 2021-06-16 10:15:10 +02:00
Julien Fontanet
033b671d0b fix(xo-server): limit number of xapiObjectToXo logs
See xoa-support#3830
2021-06-16 09:59:07 +02:00
Julien Fontanet
51f013851d feat(xen-api): limit concurrent calls to 20
Fixes xoa-support#3767

Can be changed via `callConcurrency` option.
2021-06-14 18:37:58 +02:00
Yannick Achy
dafa4ced27 feat(docs/backups): new concurrency model (#5701) 2021-06-14 16:38:29 +02:00
Pierre Donias
05fe154749 fix(xo-server/xapi): don't silently swallow errors on _callInstallationPlugin (#5809)
See xoa-support#3738

Introduced by a73acedc4d

This was done to prevent triggering an error when the pack was already
installed but a XENAPI_PLUGIN_FAILURE error can happen for other reasons
2021-06-14 16:01:02 +02:00
Nick Zana
5ddceb4660 fix(docs/from sources): change GitHub URL to use TLS (#5813) 2021-06-14 00:34:42 +02:00
Julien Fontanet
341a1b195c fix(docs): filenames in how to update self-signed cert
See xoa-support#3821
2021-06-11 17:09:23 +02:00
Julien Fontanet
29c3d1f9a6 feat(xo-web/debug): add timing 2021-06-11 10:08:14 +02:00
Rajaa.BARHTAOUI
734d4fb92b fix(xo-server#listPoolsMatchingCriteria): fix "unknown error from the peer" error (#5807)
See xoa-support#3489

Introduced by cd8c618f08
2021-06-08 17:00:45 +02:00
Julien Fontanet
057a1cbab6 feat(xen-api/{get,put}Resource): distinguish cancelation and connection issue
See xoa-support#3643
2021-06-05 01:15:36 +02:00
Pierre Donias
d44509b2cd fix(xo-server/xapi-object-to-xo/vm): handle space-delimited IP addresses (#5805)
Fixes #5801
2021-06-04 10:01:08 +02:00
Julien Fontanet
58cf69795a fix(xo-server): remove broken API methods
Introduced by bdb0ca836

These methods were linked to the legacy backups which are no longer supported.
2021-06-03 14:49:18 +02:00
Julien Fontanet
6d39512576 chore: format with Prettier
Introduced by 059843f03
2021-06-03 14:49:14 +02:00
Julien Fontanet
ec4dde86f5 fix(CHANGELOG.unreleased): add missing entries
Introduced by 1c91fb9dd
2021-06-02 16:55:45 +02:00
Nicolas Raynaud
1c91fb9dd5 feat(xo-{server,web}): improve OVA import error reporting (#5797) 2021-06-02 16:23:08 +02:00
Yannick Achy
cbd650c5ef feat(docs/troubleshooting): set xoa SSH password (#5798) 2021-06-02 09:50:29 +02:00
Julien Fontanet
c5a769cb29 fix(xo-server/glob-matcher): fix micromatch import
Introduced by 254558e9d
2021-05-31 17:36:47 +02:00
Julien Fontanet
00a7277377 feat(xo-server-sdn-controller): 1.0.5 2021-05-31 14:33:21 +02:00
BenjiReis
b8c32d41f5 fix(sdn-controller): dont assume all tunnels in private networks use the same device/vlan (#5793)
Fixes xoa-support#3771
2021-05-31 14:30:58 +02:00
Rajaa.BARHTAOUI
49c9fc79c7 feat(@vates/decorate-with): 0.1.0 (#5795) 2021-05-31 14:29:23 +02:00
Rajaa.BARHTAOUI
1284a7708e feat: release 5.59 (#5796) 2021-05-31 12:07:46 +02:00
Julien Fontanet
0dd8d15a9a fix(xo-web): use terser instead of uglify-es
Fixes https://xcp-ng.org/forum/topic/4638/yarn-build-failure

Better maintenance and support of modern ES features.
2021-05-28 15:38:25 +02:00
Julien Fontanet
90f59e954a fix(docs/from sources): clarify that Node >=14.17 is required
Related to 00beb6170
2021-05-28 15:14:28 +02:00
Julien Fontanet
03d7ec55a7 feat(decorate-with): decorateMethodsWith() 2021-05-28 12:15:22 +02:00
Julien Fontanet
1929b69145 chore(decorate-with): improve doc 2021-05-28 12:06:45 +02:00
Julien Fontanet
fbf194e4be chore(decorate-with): named function 2021-05-28 12:06:00 +02:00
Julien Fontanet
a20927343a chore: remove now unnecessary core-js deps
BREAKING CHANGE: @xen-orchestra/audit-core now requires Node >=10
2021-05-28 09:44:44 +02:00
Julien Fontanet
3b465dc09e fix: dont use deprecated fs-extra
BREAKING CHANGE: vhd-lib and xo-vmdk-to-vhd now require Node >=10
2021-05-28 09:39:51 +02:00
Julien Fontanet
fb8ca00ad1 fix: dont use deprecated event-to-promise 2021-05-28 09:34:49 +02:00
Julien Fontanet
dd7dddaa2b chore(xo-import-servers-csv): remove unmaintained tslint conf 2021-05-28 09:28:40 +02:00
Julien Fontanet
f41903c2a1 fix({xo-cli,xo-upload-ova}): dont use deprecated nice-pipe
BREAKING CHANGE: they now require Node >=10
2021-05-28 09:25:19 +02:00
Julien Fontanet
9984b5882d feat(@xen-orchestra/proxy): 0.14.1 2021-05-27 15:15:34 +02:00
Julien Fontanet
9ff20bee5a fix(proxy/package.json): fix bin and start script
Introduced by df9689854
2021-05-27 15:15:13 +02:00
Julien Fontanet
53caa11bc4 chore(proxy/package.json): remove useless main entry
This package is not a library.
2021-05-27 15:13:26 +02:00
Julien Fontanet
f6ac08567c feat(@xen-orchestra/xapi): 0.6.3 2021-05-27 15:04:13 +02:00
Julien Fontanet
040c6375c0 chore(xo-server/config.toml): remove unnecessary quotes 2021-05-27 15:00:59 +02:00
Julien Fontanet
a03266aaad feat(@xen-orchestra/proxy): 0.14.0 2021-05-27 14:28:29 +02:00
Julien Fontanet
3479064348 feat(xo-server-netbox): 0.1.1 2021-05-27 10:37:35 +02:00
Julien Fontanet
b02d823b30 fix(xo-server-netbox): fix dependencies 2021-05-27 10:37:35 +02:00
Julien Fontanet
a204b6fb3f feat(xo-server): 5.79.5 2021-05-26 17:51:07 +02:00
Rajaa.BARHTAOUI
c2450843a5 feat: technical release (#5790) 2021-05-26 16:52:20 +02:00
Julien Fontanet
00beb6170e fix(xo-server): require Node >=14.17
Fixes #5789

Better import of CommonJS modules.
2021-05-26 16:07:34 +02:00
Julien Fontanet
9f1a300d2a fix(backups): properly destroy streams in case of failure
Fixes xoa-support#3753
2021-05-26 14:39:56 +02:00
342 changed files with 12398 additions and 6986 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,34 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'status: triaging :triangular_flag_on_post:, type: bug :bug:'
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- Node: [e.g. 16.12.1]
- xo-server: [e.g. 5.82.3]
- xo-web: [e.g. 5.87.0]
- hypervisor: [e.g. XCP-ng 8.2.0]
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -0,0 +1 @@
../../scripts/npmignore


@@ -0,0 +1,68 @@
<!-- DO NOT EDIT MANUALLY, THIS FILE HAS BEEN GENERATED -->
# @vates/async-each
[![Package Version](https://badgen.net/npm/v/@vates/async-each)](https://npmjs.org/package/@vates/async-each) ![License](https://badgen.net/npm/license/@vates/async-each) [![PackagePhobia](https://badgen.net/bundlephobia/minzip/@vates/async-each)](https://bundlephobia.com/result?p=@vates/async-each) [![Node compatibility](https://badgen.net/npm/node/@vates/async-each)](https://npmjs.org/package/@vates/async-each)
> Run async fn for each item in (async) iterable
## Install
Installation of the [npm package](https://npmjs.org/package/@vates/async-each):
```
> npm install --save @vates/async-each
```
## Usage
### `asyncEach(iterable, iteratee, [opts])`
Executes `iteratee` in order for each value yielded by `iterable`.
Returns a promise which rejects as soon as a call to `iteratee` throws or a promise returned by it rejects, and which resolves when all promises returned by `iteratee` have resolved.
`iterable` must be an iterable or async iterable.
`iteratee` is called with the same `this` value as `asyncEach`, and with the following arguments:
- `value`: the value yielded by `iterable`
- `index`: the 0-based index for this value
- `iterable`: the iterable itself
`opts` is an object that can contain the following options:
- `concurrency`: a number which indicates the maximum number of parallel calls to `iteratee`, defaults to `1`
- `signal`: an abort signal to stop the iteration
- `stopOnError`: whether to stop iteration on the first error, or wait for all calls to finish and throw an `AggregateError`; defaults to `true`
```js
import { asyncEach } from '@vates/async-each'
const contents = []
await asyncEach(
['foo.txt', 'bar.txt', 'baz.txt'],
async function (filename, i) {
contents[i] = await readFile(filename)
},
{
// reads two files at a time
concurrency: 2,
}
)
```
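The `stopOnError: false` contract described above can be illustrated without the library. The following `collectErrors` helper is a hypothetical, simplified sequential sketch of the same semantics (the real `asyncEach` interleaves calls up to `concurrency`):

```javascript
// Simplified, sequential sketch of the `stopOnError: false` contract:
// every iteratee call runs to completion, failures are collected, and
// the returned promise rejects with one error carrying an `errors` array.
// `collectErrors` is illustrative only, not part of @vates/async-each.
class AggregateError extends Error {
  constructor(errors, message) {
    super(message)
    this.errors = errors
  }
}

async function collectErrors(iterable, iteratee) {
  const errors = []
  let index = 0
  for (const value of iterable) {
    try {
      await iteratee(value, index++, iterable)
    } catch (error) {
      errors.push(error)
    }
  }
  if (errors.length > 0) {
    throw new AggregateError(errors)
  }
}

collectErrors(['a.txt', 'missing.txt', 'b.txt'], async name => {
  if (name.startsWith('missing')) {
    throw new Error('cannot read ' + name)
  }
}).catch(error => {
  console.log(error.errors.length) // 1
})
```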
## Contributions
Contributions are _very_ welcome, either on the documentation or on
the code.
You may:
- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
you've encountered;
- fork and create a pull request.
## License
[ISC](https://spdx.org/licenses/ISC) © [Vates SAS](https://vates.fr)


@@ -0,0 +1,35 @@
### `asyncEach(iterable, iteratee, [opts])`
Executes `iteratee` in order for each value yielded by `iterable`.
Returns a promise which rejects as soon as a call to `iteratee` throws or a promise returned by it rejects, and which resolves when all promises returned by `iteratee` have resolved.
`iterable` must be an iterable or async iterable.
`iteratee` is called with the same `this` value as `asyncEach`, and with the following arguments:
- `value`: the value yielded by `iterable`
- `index`: the 0-based index for this value
- `iterable`: the iterable itself
`opts` is an object that can contain the following options:
- `concurrency`: a number which indicates the maximum number of parallel calls to `iteratee`, defaults to `1`
- `signal`: an abort signal to stop the iteration
- `stopOnError`: whether to stop iteration on the first error, or wait for all calls to finish and throw an `AggregateError`; defaults to `true`
```js
import { asyncEach } from '@vates/async-each'
const contents = []
await asyncEach(
['foo.txt', 'bar.txt', 'baz.txt'],
async function (filename, i) {
contents[i] = await readFile(filename)
},
{
// reads two files at a time
concurrency: 2,
}
)
```


@@ -0,0 +1,99 @@
'use strict'

const noop = Function.prototype

class AggregateError extends Error {
  constructor(errors, message) {
    super(message)
    this.errors = errors
  }
}

exports.asyncEach = function asyncEach(iterable, iteratee, { concurrency = 1, signal, stopOnError = true } = {}) {
  return new Promise((resolve, reject) => {
    const it = (iterable[Symbol.iterator] || iterable[Symbol.asyncIterator]).call(iterable)
    const errors = []
    let running = 0
    let index = 0

    let onAbort
    if (signal !== undefined) {
      onAbort = () => {
        onRejectedWrapper(new Error('asyncEach aborted'))
      }
      signal.addEventListener('abort', onAbort)
    }

    const clean = () => {
      onFulfilled = onRejected = noop
      if (onAbort !== undefined) {
        signal.removeEventListener('abort', onAbort)
      }
    }

    resolve = (resolve =>
      function resolveAndClean(value) {
        resolve(value)
        clean()
      })(resolve)
    reject = (reject =>
      function rejectAndClean(reason) {
        reject(reason)
        clean()
      })(reject)

    let onFulfilled = value => {
      --running
      next()
    }
    const onFulfilledWrapper = value => onFulfilled(value)

    let onRejected = stopOnError
      ? reject
      : error => {
          --running
          errors.push(error)
          next()
        }
    const onRejectedWrapper = reason => onRejected(reason)

    let nextIsRunning = false
    let next = async () => {
      if (nextIsRunning) {
        return
      }
      nextIsRunning = true
      if (running < concurrency) {
        const cursor = await it.next()
        if (cursor.done) {
          next = () => {
            if (running === 0) {
              if (errors.length === 0) {
                resolve()
              } else {
                reject(new AggregateError(errors))
              }
            }
          }
        } else {
          ++running
          try {
            const result = iteratee.call(this, cursor.value, index++, iterable)
            let then
            if (result != null && typeof result === 'object' && typeof (then = result.then) === 'function') {
              then.call(result, onFulfilledWrapper, onRejectedWrapper)
            } else {
              onFulfilled(result)
            }
          } catch (error) {
            onRejected(error)
          }
        }
        nextIsRunning = false
        return next()
      }
      nextIsRunning = false
    }
    next()
  })
}


@@ -0,0 +1,99 @@
'use strict'

/* eslint-env jest */

const { asyncEach } = require('./')

const randomDelay = (max = 10) =>
  new Promise(resolve => {
    setTimeout(resolve, Math.floor(Math.random() * max + 1))
  })

const rejectionOf = p =>
  new Promise((resolve, reject) => {
    p.then(reject, resolve)
  })

describe('asyncEach', () => {
  const thisArg = 'qux'
  const values = ['foo', 'bar', 'baz']

  Object.entries({
    'sync iterable': () => values,
    'async iterable': async function* () {
      for (const value of values) {
        await randomDelay()
        yield value
      }
    },
  }).forEach(([what, getIterable]) =>
    describe('with ' + what, () => {
      let iterable
      beforeEach(() => {
        iterable = getIterable()
      })

      it('works', async () => {
        const iteratee = jest.fn(async () => {})
        await asyncEach.call(thisArg, iterable, iteratee)
        expect(iteratee.mock.instances).toEqual(Array.from(values, () => thisArg))
        expect(iteratee.mock.calls).toEqual(Array.from(values, (value, index) => [value, index, iterable]))
      })

      ;[1, 2, 4].forEach(concurrency => {
        it('respects a concurrency of ' + concurrency, async () => {
          let running = 0
          await asyncEach(
            values,
            async () => {
              ++running
              expect(running).toBeLessThanOrEqual(concurrency)
              await randomDelay()
              --running
            },
            { concurrency }
          )
        })
      })

      it('stops on first error when stopOnError is true', async () => {
        const error = new Error()
        const iteratee = jest.fn((_, i) => {
          if (i === 1) {
            throw error
          }
        })
        expect(await rejectionOf(asyncEach(iterable, iteratee, { stopOnError: true }))).toBe(error)
        expect(iteratee).toHaveBeenCalledTimes(2)
      })

      it('rejects AggregateError when stopOnError is false', async () => {
        const errors = []
        const iteratee = jest.fn(() => {
          const error = new Error()
          errors.push(error)
          throw error
        })
        const error = await rejectionOf(asyncEach(iterable, iteratee, { stopOnError: false }))
        expect(error.errors).toEqual(errors)
        expect(iteratee.mock.calls).toEqual(Array.from(values, (value, index) => [value, index, iterable]))
      })

      it('can be interrupted with an AbortSignal', async () => {
        const ac = new AbortController()
        const iteratee = jest.fn((_, i) => {
          if (i === 1) {
            ac.abort()
          }
        })
        await expect(asyncEach(iterable, iteratee, { signal: ac.signal })).rejects.toThrow('asyncEach aborted')
        expect(iteratee).toHaveBeenCalledTimes(2)
      })
    })
  )
})


@@ -0,0 +1,34 @@
{
"private": false,
"name": "@vates/async-each",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@vates/async-each",
"description": "Run async fn for each item in (async) iterable",
"keywords": [
"array",
"async",
"collection",
"each",
"for",
"foreach",
"iterable",
"iterator"
],
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
"directory": "@vates/async-each",
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"author": {
"name": "Vates SAS",
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.0",
"engines": {
"node": ">=8.10"
},
"scripts": {
"postversion": "npm publish --access public"
}
}

View File

@@ -65,6 +65,23 @@ const f = compose(
)
```
Functions can receive extra parameters:
```js
const isIn = (value, min, max) => min <= value && value <= max
// Only compatible when `fns` is passed as an array!
const f = compose([
[add, 2],
[isIn, 3, 10],
])
console.log(f(1))
// → true
```
> Note: if the first function is defined with extra parameters, it will receive only the first value passed to the composed function, instead of all the parameters.
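To illustrate the note above, here is a minimal self-contained sketch; the inline `compose` is a hypothetical stand-in mirroring how this package handles `[fn, ...extraArgs]` entries, not the real implementation:

```javascript
// Stand-in for compose: for an `[fn, ...extraArgs]` entry, the composed
// value is passed as the entry's first argument.
const compose = fns => value => {
  for (const entry of fns) {
    value = Array.isArray(entry) ? entry[0](value, ...entry.slice(1)) : entry(value)
  }
  return value
}

const add = (a, b) => a + b

const f = compose([[add, 10]])

// `f` is called with two values, but `add` receives only the first one
// (plus its bound extra parameter 10): the `2` is silently dropped
console.log(f(1, 2)) // → 11
```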
## Contributions
Contributions are _very_ welcome, either on the documentation or on

View File

@@ -46,3 +46,20 @@ const f = compose(
[add2, mul3]
)
```
Functions can receive extra parameters:
```js
const isIn = (value, min, max) => min <= value && value <= max
// Only compatible when `fns` is passed as an array!
const f = compose([
[add, 2],
[isIn, 3, 10],
])
console.log(f(1))
// → true
```
> Note: if the first function is defined with extra parameters, it will receive only the first value passed to the composed function, instead of all the parameters.

View File

@@ -4,11 +4,13 @@ const defaultOpts = { async: false, right: false }
exports.compose = function compose(opts, fns) {
if (Array.isArray(opts)) {
fns = opts
fns = opts.slice() // don't mutate passed array
opts = defaultOpts
} else if (typeof opts === 'object') {
opts = Object.assign({}, defaultOpts, opts)
if (!Array.isArray(fns)) {
if (Array.isArray(fns)) {
fns = fns.slice() // don't mutate passed array
} else {
fns = Array.prototype.slice.call(arguments, 1)
}
} else {
@@ -20,6 +22,24 @@ exports.compose = function compose(opts, fns) {
if (n === 0) {
throw new TypeError('at least one function must be passed')
}
for (let i = 0; i < n; ++i) {
const entry = fns[i]
if (Array.isArray(entry)) {
const fn = entry[0]
const args = entry.slice()
args[0] = undefined
fns[i] = function composeWithArgs(value) {
args[0] = value
try {
return fn.apply(this, args)
} finally {
args[0] = undefined
}
}
}
}
if (n === 1) {
return fns[0]
}

View File

@@ -14,7 +14,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "2.0.0",
"version": "2.1.0",
"engines": {
"node": ">=7.6"
},

View File

@@ -16,7 +16,9 @@ Installation of the [npm package](https://npmjs.org/package/@vates/decorate-with
## Usage
For instance, allows using Lodash's functions as decorators:
### `decorateWith(fn, ...args)`
Creates a new ([legacy](https://babeljs.io/docs/en/babel-plugin-syntax-decorators#legacy)) method decorator from a function decorator, for instance, allows using Lodash's functions as decorators:
```js
import { decorateWith } from '@vates/decorate-with'
@@ -29,6 +31,64 @@ class Foo {
}
```
### `decorateMethodsWith(class, map)`
Decorates a number of methods directly, without using the decorator syntax:
```js
import { decorateMethodsWith } from '@vates/decorate-with'
class Foo {
bar() {
// body
}
baz() {
// body
}
}
decorateMethodsWith(Foo, {
// without arguments
bar: lodash.curry,
// with arguments
baz: [lodash.debounce, 150],
})
```
The decorated class is returned, so you can export it directly.
To apply multiple transforms to a method, you can either call `decorateMethodsWith` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
```js
decorateMethodsWith(Foo, {
bar: compose([
[lodash.debounce, 150],
lodash.curry,
])
})
```
### `perInstance(fn, ...args)`
Helper to decorate a method per instance instead of once for the whole class.
This is often necessary for caching or deduplicating calls.
```js
import { perInstance } from '@vates/decorate-with'
class Foo {
@decorateWith(perInstance, lodash.memoize)
bar() {
// body
}
}
```
Because it's a normal function, it can also be used with `decorateMethodsWith`, with `compose` or even by itself.
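As a self-contained sketch of that direct usage (the `perInstance` body is copied from this package, while the single-argument `memoize` decorator and the `Counter` class are hypothetical stand-ins for illustration):

```javascript
// Copied from @vates/decorate-with: caches one decorated function per `this`
function perInstance(fn, decorator, ...args) {
  const map = new WeakMap()
  return function () {
    let decorated = map.get(this)
    if (decorated === undefined) {
      decorated = decorator(fn, ...args)
      map.set(this, decorated)
    }
    return decorated.apply(this, arguments)
  }
}

// trivial single-argument memoize decorator, for illustration only
const memoize = fn => {
  const cache = new Map()
  return function (arg) {
    if (!cache.has(arg)) {
      cache.set(arg, fn.call(this, arg))
    }
    return cache.get(arg)
  }
}

class Counter {
  constructor() {
    this.calls = 0
  }
  bump(x) {
    this.calls += 1
    return x
  }
}
Counter.prototype.bump = perInstance(Counter.prototype.bump, memoize)

const a = new Counter()
const b = new Counter()
a.bump(1)
a.bump(1) // cached: the underlying method is not called again
b.bump(1) // a separate instance gets its own cache
console.log(a.calls, b.calls) // → 1 1
```

Without `perInstance`, a plain `memoize` applied to the prototype method would share a single cache across every instance.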
## Contributions
Contributions are _very_ welcome, either on the documentation or on

View File

@@ -1,4 +1,6 @@
For instance, allows using Lodash's functions as decorators:
### `decorateWith(fn, ...args)`
Creates a new ([legacy](https://babeljs.io/docs/en/babel-plugin-syntax-decorators#legacy)) method decorator from a function decorator, for instance, allows using Lodash's functions as decorators:
```js
import { decorateWith } from '@vates/decorate-with'
@@ -10,3 +12,61 @@ class Foo {
}
}
```
### `decorateMethodsWith(class, map)`
Decorates a number of methods directly, without using the decorator syntax:
```js
import { decorateMethodsWith } from '@vates/decorate-with'
class Foo {
bar() {
// body
}
baz() {
// body
}
}
decorateMethodsWith(Foo, {
// without arguments
bar: lodash.curry,
// with arguments
baz: [lodash.debounce, 150],
})
```
The decorated class is returned, so you can export it directly.
To apply multiple transforms to a method, you can either call `decorateMethodsWith` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
```js
decorateMethodsWith(Foo, {
bar: compose([
[lodash.debounce, 150],
lodash.curry,
])
})
```
### `perInstance(fn, ...args)`
Helper to decorate a method per instance instead of once for the whole class.
This is often necessary for caching or deduplicating calls.
```js
import { perInstance } from '@vates/decorate-with'
class Foo {
@decorateWith(perInstance, lodash.memoize)
bar() {
// body
}
}
```
Because it's a normal function, it can also be used with `decorateMethodsWith`, with `compose` or even by itself.

View File

@@ -1,4 +1,33 @@
exports.decorateWith = (fn, ...args) => (target, name, descriptor) => ({
...descriptor,
value: fn(descriptor.value, ...args),
})
exports.decorateWith = function decorateWith(fn, ...args) {
return (target, name, descriptor) => ({
...descriptor,
value: fn(descriptor.value, ...args),
})
}
const { getOwnPropertyDescriptor, defineProperty } = Object
exports.decorateMethodsWith = function decorateMethodsWith(klass, map) {
const { prototype } = klass
for (const name of Object.keys(map)) {
const descriptor = getOwnPropertyDescriptor(prototype, name)
const { value } = descriptor
const decorator = map[name]
descriptor.value = typeof decorator === 'function' ? decorator(value) : decorator[0](value, ...decorator.slice(1))
defineProperty(prototype, name, descriptor)
}
return klass
}
exports.perInstance = function perInstance(fn, decorator, ...args) {
const map = new WeakMap()
return function () {
let decorated = map.get(this)
if (decorated === undefined) {
decorated = decorator(fn, ...args)
map.set(this, decorated)
}
return decorated.apply(this, arguments)
}
}

View File

@@ -20,7 +20,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.0.1",
"version": "1.0.0",
"engines": {
"node": ">=8.10"
},

View File

@@ -24,7 +24,7 @@
"dependencies": {
"@vates/multi-key-map": "^0.1.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.2.0",
"@xen-orchestra/log": "^0.3.0",
"ensure-array": "^1.0.0"
}
}

View File

@@ -18,17 +18,6 @@ const wrapCall = (fn, arg, thisArg) => {
* @returns {Promise<Item[]>}
*/
exports.asyncMap = function asyncMap(iterable, mapFn, thisArg = iterable) {
let onError
if (onError !== undefined) {
const original = mapFn
mapFn = async function () {
try {
return await original.apply(this, arguments)
} catch (error) {
return onError.call(this, error, ...arguments)
}
}
}
return Promise.all(Array.from(iterable, mapFn, thisArg))
}

View File

@@ -9,7 +9,7 @@
},
"version": "0.2.0",
"engines": {
"node": ">=8.10"
"node": ">=10"
},
"main": "dist/",
"scripts": {
@@ -30,9 +30,8 @@
"rimraf": "^3.0.0"
},
"dependencies": {
"@vates/decorate-with": "^0.0.1",
"@xen-orchestra/log": "^0.2.0",
"core-js": "^3.6.4",
"@vates/decorate-with": "^1.0.0",
"@xen-orchestra/log": "^0.3.0",
"golike-defer": "^0.5.1",
"object-hash": "^2.0.1"
},

View File

@@ -1,6 +1,3 @@
// see https://github.com/babel/babel/issues/8450
import 'core-js/features/symbol/async-iterator'
import assert from 'assert'
import hash from 'object-hash'
import { createLogger } from '@xen-orchestra/log'

View File

@@ -17,10 +17,10 @@ interface Record {
}
export class AuditCore {
constructor(storage: Storage) { }
public add(subject: any, event: string, data: any): Promise<Record> { }
public checkIntegrity(oldest: string, newest: string): Promise<number> { }
public getFrom(newest?: string): AsyncIterator { }
public deleteFrom(newest: string): Promise<void> { }
public deleteRangeAndRewrite(newest: string, oldest: string): Promise<void> { }
constructor(storage: Storage) {}
public add(subject: any, event: string, data: any): Promise<Record> {}
public checkIntegrity(oldest: string, newest: string): Promise<number> {}
public getFrom(newest?: string): AsyncIterator {}
public deleteFrom(newest: string): Promise<void> {}
public deleteRangeAndRewrite(newest: string, oldest: string): Promise<void> {}
}

View File

@@ -46,7 +46,7 @@ module.exports = function (pkg, configs = {}) {
return {
comments: !__PROD__,
ignore: __PROD__ ? [/\.spec\.js$/] : undefined,
ignore: __PROD__ ? [/\btests?\//, /\.spec\.js$/] : undefined,
plugins: Object.keys(plugins)
.map(plugin => [plugin, plugins[plugin]])
.sort(([a], [b]) => {

View File

@@ -10,12 +10,13 @@ const { resolve } = require('path')
const adapter = new RemoteAdapter(require('@xen-orchestra/fs').getHandler({ url: 'file://' }))
module.exports = async function main(args) {
const { _, remove, merge } = getopts(args, {
const { _, fix, remove, merge } = getopts(args, {
alias: {
fix: 'f',
remove: 'r',
merge: 'm',
},
boolean: ['merge', 'remove'],
boolean: ['fix', 'merge', 'remove'],
default: {
merge: false,
remove: false,
@@ -25,7 +26,7 @@ module.exports = async function main(args) {
await asyncMap(_, async vmDir => {
vmDir = resolve(vmDir)
try {
await adapter.cleanVm(vmDir, { remove, merge, onLog: log => console.warn(log) })
await adapter.cleanVm(vmDir, { fixMetadata: fix, remove, merge, onLog: (...args) => console.warn(...args) })
} catch (error) {
console.error('adapter.cleanVm', vmDir, error)
}

View File

@@ -5,11 +5,12 @@ require('./_composeCommands')({
get main() {
return require('./commands/clean-vms')
},
usage: `[--merge] [--remove] xo-vm-backups/*
usage: `[--fix] [--merge] [--remove] xo-vm-backups/*
Detects and repairs issues with VM backups.
Options:
-f, --fix Fix metadata issues (like size)
-m, --merge Merge (or continue merging) VHD files that are unused
-r, --remove Remove unused, incomplete, orphan, or corrupted files
`,

View File

@@ -7,12 +7,12 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.11.0",
"@xen-orchestra/fs": "^0.17.0",
"@xen-orchestra/backups": "^0.18.3",
"@xen-orchestra/fs": "^0.19.3",
"filenamify": "^4.1.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
"promise-toolbox": "^0.19.2"
"promise-toolbox": "^0.20.0"
},
"engines": {
"node": ">=7.10.1"
@@ -27,7 +27,7 @@
"scripts": {
"postversion": "npm publish --access public"
},
"version": "0.6.0",
"version": "0.6.1",
"license": "AGPL-3.0-or-later",
"author": {
"name": "Vates SAS",

View File

@@ -3,19 +3,20 @@ const Disposable = require('promise-toolbox/Disposable.js')
const fromCallback = require('promise-toolbox/fromCallback.js')
const fromEvent = require('promise-toolbox/fromEvent.js')
const pDefer = require('promise-toolbox/defer.js')
const pump = require('pump')
const { basename, dirname, join, normalize, resolve } = require('path')
const groupBy = require('lodash/groupBy.js')
const { dirname, join, normalize, resolve } = require('path')
const { createLogger } = require('@xen-orchestra/log')
const { createSyntheticStream, mergeVhd, default: Vhd } = require('vhd-lib')
const { Constants, createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } = require('vhd-lib')
const { deduped } = require('@vates/disposable/deduped.js')
const { execFile } = require('child_process')
const { readdir, stat } = require('fs-extra')
const { v4: uuidv4 } = require('uuid')
const { ZipFile } = require('yazl')
const { BACKUP_DIR } = require('./_getVmBackupDir.js')
const { cleanVm } = require('./_cleanVm.js')
const { getTmpDir } = require('./_getTmpDir.js')
const { isMetadataFile, isVhdFile } = require('./_backupType.js')
const { isMetadataFile } = require('./_backupType.js')
const { isValidXva } = require('./_isValidXva.js')
const { listPartitions, LVM_PARTITION_TYPE } = require('./_listPartitions.js')
const { lvs, pvs } = require('./_lvm.js')
@@ -67,58 +68,17 @@ const debounceResourceFactory = factory =>
}
class RemoteAdapter {
constructor(handler, { debounceResource = res => res, dirMode } = {}) {
constructor(handler, { debounceResource = res => res, dirMode, vhdDirectoryCompression } = {}) {
this._debounceResource = debounceResource
this._dirMode = dirMode
this._handler = handler
this._vhdDirectoryCompression = vhdDirectoryCompression
}
get handler() {
return this._handler
}
async _deleteVhd(path) {
const handler = this._handler
const vhds = await asyncMapSettled(
await handler.list(dirname(path), {
filter: isVhdFile,
prependDir: true,
}),
async path => {
try {
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
return {
footer: vhd.footer,
header: vhd.header,
path,
}
} catch (error) {
// Do not fail on corrupted VHDs (usually uncleaned temporary files),
// they are probably inconsequent to the backup process and should not
// fail it.
warn(`BackupNg#_deleteVhd ${path}`, { error })
}
}
)
const base = basename(path)
const child = vhds.find(_ => _ !== undefined && _.header.parentUnicodeName === base)
if (child === undefined) {
await handler.unlink(path)
return 0
}
try {
const childPath = child.path
const mergedDataSize = await mergeVhd(handler, path, handler, childPath)
await handler.rename(path, childPath)
return mergedDataSize
} catch (error) {
handler.unlink(path).catch(warn)
throw error
}
}
async _findPartition(devicePath, partitionId) {
const partitions = await listPartitions(devicePath)
const partition = partitions.find(_ => _.id === partitionId)
@@ -232,6 +192,22 @@ class RemoteAdapter {
return files
}
// check if we will be allowed to merge a VHD created in this adapter
// with the vhd at path `path`
async isMergeableParent(packedParentUid, path) {
return await Disposable.use(openVhd(this.handler, path), vhd => {
// this baseUuid is not linked with this vhd
if (!vhd.footer.uuid.equals(packedParentUid)) {
return false
}
const isVhdDirectory = vhd instanceof VhdDirectory
return isVhdDirectory
? this.#useVhdDirectory() && this.#getCompressionType() === vhd.compressionType
: !this.#useVhdDirectory()
})
}
fetchPartitionFiles(diskId, partitionId, paths) {
const { promise, reject, resolve } = pDefer()
Disposable.use(
@@ -253,16 +229,9 @@ class RemoteAdapter {
async deleteDeltaVmBackups(backups) {
const handler = this._handler
let mergedDataSize = 0
await asyncMapSettled(backups, ({ _filename, vhds }) =>
Promise.all([
handler.unlink(_filename),
asyncMap(Object.values(vhds), async _ => {
mergedDataSize += await this._deleteVhd(resolveRelativeFromFile(_filename, _))
}),
])
)
return mergedDataSize
// unused VHDs will be detected by `cleanVm`
await asyncMapSettled(backups, ({ _filename }) => VhdAbstract.unlink(handler, _filename))
}
async deleteMetadataBackup(backupId) {
@@ -292,17 +261,34 @@ class RemoteAdapter {
)
}
async deleteVmBackup(filename) {
const metadata = JSON.parse(String(await this._handler.readFile(filename)))
metadata._filename = filename
deleteVmBackup(file) {
return this.deleteVmBackups([file])
}
if (metadata.mode === 'delta') {
await this.deleteDeltaVmBackups([metadata])
} else if (metadata.mode === 'full') {
await this.deleteFullVmBackups([metadata])
} else {
throw new Error(`no deleter for backup mode ${metadata.mode}`)
async deleteVmBackups(files) {
const { delta, full, ...others } = groupBy(await asyncMap(files, file => this.readVmBackupMetadata(file)), 'mode')
const unsupportedModes = Object.keys(others)
if (unsupportedModes.length !== 0) {
throw new Error('no deleter for backup modes: ' + unsupportedModes.join(', '))
}
await Promise.all([
delta !== undefined && this.deleteDeltaVmBackups(delta),
full !== undefined && this.deleteFullVmBackups(full),
])
}
#getCompressionType() {
return this._vhdDirectoryCompression
}
#useVhdDirectory() {
return this.handler.type === 's3'
}
#useAlias() {
return this.#useVhdDirectory()
}
getDisk = Disposable.factory(this.getDisk)
@@ -361,6 +347,14 @@ class RemoteAdapter {
return yield this._getPartition(devicePath, await this._findPartition(devicePath, partitionId))
}
// if we use alias on this remote, we have to name the file alias.vhd
getVhdFileName(baseName) {
if (this.#useAlias()) {
return `${baseName}.alias.vhd`
}
return `${baseName}.vhd`
}
async listAllVmBackups() {
const handler = this._handler
@@ -505,6 +499,25 @@ class RemoteAdapter {
return backups.sort(compareTimestamp)
}
async writeVhd(path, input, { checksum = true, validator = noop } = {}) {
const handler = this._handler
if (this.#useVhdDirectory()) {
const dataPath = `${dirname(path)}/data/${uuidv4()}.vhd`
await createVhdDirectoryFromStream(handler, dataPath, input, {
concurrency: 16,
compression: this.#getCompressionType(),
async validator() {
await input.task
return validator.apply(this, arguments)
},
})
await VhdAbstract.createAlias(handler, path, dataPath)
} else {
await this.outputStream(path, input, { checksum, validator })
}
}
async outputStream(path, input, { checksum = true, validator = noop } = {}) {
await this._handler.outputStream(path, input, {
checksum,
@@ -516,6 +529,52 @@ class RemoteAdapter {
})
}
async _createSyntheticStream(handler, paths) {
let disposableVhds = []
// if it's a path: open the whole hierarchy of parents
if (typeof paths === 'string') {
let vhd,
vhdPath = paths
do {
const disposable = await openVhd(handler, vhdPath)
vhd = disposable.value
disposableVhds.push(disposable)
vhdPath = resolveRelativeFromFile(vhdPath, vhd.header.parentUnicodeName)
} while (vhd.footer.diskType !== Constants.DISK_TYPES.DYNAMIC)
} else {
// only open the list of path given
disposableVhds = paths.map(path => openVhd(handler, path))
}
// the VHDs must not be disposed on return,
// but only when the stream is done (or has failed)
const disposables = await Disposable.all(disposableVhds)
const vhds = disposables.value
let disposed = false
const disposeOnce = async () => {
if (!disposed) {
disposed = true
try {
await disposables.dispose()
} catch (error) {
warn('_createSyntheticStream: failed to dispose VHDs', { error })
}
}
}
const synthetic = new VhdSynthetic(vhds)
await synthetic.readHeaderAndFooter()
await synthetic.readBlockAllocationTable()
const stream = await synthetic.stream()
stream.on('end', disposeOnce)
stream.on('close', disposeOnce)
stream.on('error', disposeOnce)
return stream
}
async readDeltaVmBackup(metadata) {
const handler = this._handler
const { vbds, vdis, vhds, vifs, vm } = metadata
@@ -523,7 +582,7 @@ class RemoteAdapter {
const streams = {}
await asyncMapSettled(Object.keys(vdis), async id => {
streams[`${id}.vhd`] = await createSyntheticStream(handler, join(dir, vhds[id]))
streams[`${id}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[id]))
})
return {
@@ -543,40 +602,6 @@ class RemoteAdapter {
async readVmBackupMetadata(path) {
return Object.defineProperty(JSON.parse(await this._handler.readFile(path)), '_filename', { value: path })
}
async writeFullVmBackup({ jobId, mode, scheduleId, timestamp, vm, vmSnapshot, xva }, sizeContainer, stream) {
const basename = formatFilenameDate(timestamp)
const dataBasename = basename + '.xva'
const dataFilename = backupDir + '/' + dataBasename
const metadataFilename = `${backupDir}/${basename}.json`
const metadata = {
jobId: job.id,
mode: job.mode,
scheduleId,
timestamp,
version: '2.0.0',
vm,
vmSnapshot: this._backup.exportedVm,
xva: './' + dataBasename,
}
const { deleteFirst } = settings
if (deleteFirst) {
await deleteOldBackups()
}
await adapter.outputStream(stream, dataFilename, {
validator: tmpPath => {
if (handler._getFilePath !== undefined) {
return isValidXva(handler._getFilePath('/' + tmpPath))
}
},
})
metadata.size = sizeContainer.size
await handler.outputFile(metadataFilename, JSON.stringify(metadata))
}
}
Object.assign(RemoteAdapter.prototype, {

View File

@@ -1,11 +1,8 @@
const CancelToken = require('promise-toolbox/CancelToken.js')
const Zone = require('node-zone')
const logAfterEnd = function (log) {
const error = new Error('task has already ended:' + this.id)
error.result = log.result
error.log = log
throw error
const logAfterEnd = () => {
throw new Error('task has already ended')
}
const noop = Function.prototype
@@ -47,19 +44,11 @@ class Task {
}
}
get id() {
return this.#id
}
#cancelToken
#id = Math.random().toString(36).slice(2)
#onLog
#zone
get id() {
return this.#id
}
constructor({ name, data, onLog }) {
let parentCancelToken, parentId
if (onLog === undefined) {
@@ -111,8 +100,6 @@ class Task {
run(fn, last = false) {
return this.#zone.run(() => {
try {
this.#cancelToken.throwIfRequested()
const result = fn()
let then
if (result != null && typeof (then = result.then) === 'function') {

View File

@@ -1,10 +1,10 @@
const assert = require('assert')
// const asyncFn = require('promise-toolbox/asyncFn')
const findLast = require('lodash/findLast.js')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors.js')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
@@ -36,6 +36,11 @@ const forkDeltaExport = deltaExport =>
exports.VmBackup = class VmBackup {
constructor({ config, getSnapshotNameLabel, job, remoteAdapters, remotes, schedule, settings, srs, vm }) {
if (vm.other_config['xo:backup:job'] === job.id) {
// otherwise replicated VMs would be matched and replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
@@ -104,9 +109,21 @@ exports.VmBackup = class VmBackup {
// calls fn for each function, warns of any errors, and throws only if there are no writers left
async _callWriters(fn, warnMessage, parallel = true) {
const writers = this._writers
if (writers.size === 0) {
const n = writers.size
if (n === 0) {
return
}
if (n === 1) {
const [writer] = writers
try {
await fn(writer)
} catch (error) {
writers.delete(writer)
throw error
}
return
}
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await fn(writer)
@@ -144,7 +161,6 @@ exports.VmBackup = class VmBackup {
const doSnapshot =
this._isDelta || (!settings.offlineBackup && vm.power_state === 'Running') || settings.snapshotRetention !== 0
console.log({ doSnapshot })
if (doSnapshot) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
@@ -183,7 +199,6 @@ exports.VmBackup = class VmBackup {
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportDeltaVm(exportedVm, baseVm, {
cancelToken: Task.cancelToken,
fullVdisRequired,
})
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
@@ -229,7 +244,6 @@ exports.VmBackup = class VmBackup {
async _copyFull() {
const { compression } = this.job
const stream = await this._xapi.VM_export(this.exportedVm.$ref, {
cancelToken: Task.cancelToken,
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
@@ -276,17 +290,28 @@ exports.VmBackup = class VmBackup {
}
async _removeUnusedSnapshots() {
// TODO: handle all schedules (no longer existing schedules default to 0 retention)
const { scheduleId } = this
const scheduleSnapshots = this._jobSnapshots.filter(_ => _.other_config['xo:backup:schedule'] === scheduleId)
const jobSettings = this.job.settings
const baseVmRef = this._baseVm?.$ref
const { config } = this
const baseSettings = {
...config.defaultSettings,
...config.metadata.defaultSettings,
...jobSettings[''],
}
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(getOldEntries(this._settings.snapshotRetention, scheduleSnapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...jobSettings[scheduleId],
...jobSettings[this.vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
@@ -295,12 +320,14 @@ exports.VmBackup = class VmBackup {
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval reached')
return
}
@@ -311,10 +338,17 @@ exports.VmBackup = class VmBackup {
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const snapshotOf = await xapi.getField('VDI', baseRef, 'snapshot_of')
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(await xapi.getField('VDI', baseRef, 'uuid'), srcVdi)
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
@@ -325,31 +359,33 @@ exports.VmBackup = class VmBackup {
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (!presentBaseVdis.has(baseUuid)) {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
Task.info('base data', {
vm: baseVm.uuid,
fullVdisRequired: Array.from(fullVdisRequired),
})
}
run = defer(this.run)
async run($defer) {
this.exportedVm = this.vm
this.timestamp = Date.now()
const doSnapshot = this._isDelta || vm.power_state === 'Running' || settings.snapshotRetention !== 0
if (!this._isDelta) {
}
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
@@ -396,6 +432,3 @@ exports.VmBackup = class VmBackup {
}
}
}
// const { prototype } = exports.VmBackup
// prototype.run = asyncFn.cancelable(prototype.run)

View File

@@ -70,6 +70,7 @@ class BackupWorker {
yield new RemoteAdapter(handler, {
debounceResource: this.debounceResource,
dirMode: this.#config.dirMode,
vhdDirectoryCompression: this.#config.vhdDirectoryCompression,
})
} finally {
await handler.forget()

View File

@@ -0,0 +1,407 @@
/* eslint-env jest */
const rimraf = require('rimraf')
const tmp = require('tmp')
const fs = require('fs-extra')
const { getHandler } = require('@xen-orchestra/fs')
const { pFromCallback } = require('promise-toolbox')
const crypto = require('crypto')
const { RemoteAdapter } = require('./RemoteAdapter')
const { VHDFOOTER, VHDHEADER } = require('./tests.fixtures.js')
const { VhdFile, Constants, VhdDirectory, VhdAbstract } = require('vhd-lib')
let tempDir, adapter, handler, jobId, vdiId, basePath
jest.setTimeout(60000)
beforeEach(async () => {
tempDir = await pFromCallback(cb => tmp.dir(cb))
handler = getHandler({ url: `file://${tempDir}` })
await handler.sync()
adapter = new RemoteAdapter(handler)
jobId = uniqueId()
vdiId = uniqueId()
basePath = `vdis/${jobId}/${vdiId}`
await fs.mkdirp(`${tempDir}/${basePath}`)
})
afterEach(async () => {
await pFromCallback(cb => rimraf(tempDir, cb))
await handler.forget()
})
const uniqueId = () => crypto.randomBytes(16).toString('hex')
async function generateVhd(path, opts = {}) {
let vhd
const dataPath = opts.useAlias ? path + '.data' : path
if (opts.mode === 'directory') {
await handler.mkdir(dataPath)
vhd = new VhdDirectory(handler, dataPath)
} else {
const fd = await handler.openFile(dataPath, 'wx')
vhd = new VhdFile(handler, fd)
}
vhd.header = { ...VHDHEADER, ...opts.header }
vhd.footer = { ...VHDFOOTER, ...opts.footer }
vhd.footer.uuid = Buffer.from(crypto.randomBytes(16))
if (vhd.header.parentUnicodeName) {
vhd.footer.diskType = Constants.DISK_TYPES.DIFFERENCING
} else {
vhd.footer.diskType = Constants.DISK_TYPES.DYNAMIC
}
if (opts.useAlias === true) {
await VhdAbstract.createAlias(handler, path + '.alias.vhd', dataPath)
}
await vhd.writeBlockAllocationTable()
await vhd.writeHeader()
await vhd.writeFooter()
return vhd
}
test('it removes a broken VHD', async () => {
// TODO: also test a directory and an alias
await handler.writeFile(`${basePath}/notReallyAVhd.vhd`, 'I AM NOT A VHD')
expect((await handler.list(basePath)).length).toEqual(1)
let loggued = ''
const onLog = message => {
loggued += message
}
await adapter.cleanVm('/', { remove: false, onLog })
expect(loggued).toEqual(`error while checking the VHD with path /${basePath}/notReallyAVhd.vhd`)
// not removed
expect((await handler.list(basePath)).length).toEqual(1)
// really remove it
await adapter.cleanVm('/', { remove: true, onLog })
expect((await handler.list(basePath)).length).toEqual(0)
})
test('it removes VHDs with missing or multiple ancestors', async () => {
// one with a broken parent
await generateVhd(`${basePath}/abandonned.vhd`, {
header: {
parentUnicodeName: 'gone.vhd',
parentUid: Buffer.from(crypto.randomBytes(16)),
},
})
// one orphan, which is a full vhd, no parent
const orphan = await generateVhd(`${basePath}/orphan.vhd`)
// a child to the orphan
await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
},
})
// clean
let loggued = ''
const onLog = message => {
loggued += message + '\n'
}
await adapter.cleanVm('/', { remove: true, onLog })
const deletedOrphanVhd = loggued.match(/deleting orphan VHD/g) || []
expect(deletedOrphanVhd.length).toEqual(1) // only one vhd should have been deleted
const deletedAbandonnedVhd = loggued.match(/abandonned.vhd is missing/g) || []
expect(deletedAbandonnedVhd.length).toEqual(1) // and it must be abandonned.vhd
// we don't test the files on disk, since they will all be marked as unused and deleted without a metadata.json file
})
test('it removes backup metadata referencing a missing VHD in a delta backup', async () => {
// create a metadata file marking child and orphan as ok
await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
vhds: [
`${basePath}/orphan.vhd`,
`${basePath}/child.vhd`,
// abandonned.vhd is not here
],
})
)
await generateVhd(`${basePath}/abandonned.vhd`)
// one orphan, which is a full vhd, no parent
const orphan = await generateVhd(`${basePath}/orphan.vhd`)
// a child to the orphan
await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
},
})
let loggued = ''
const onLog = message => {
loggued += message + '\n'
}
await adapter.cleanVm('/', { remove: true, onLog })
let matched = loggued.match(/deleting unused VHD /g) || []
expect(matched.length).toEqual(1) // only one vhd should have been deleted
matched = loggued.match(/abandonned.vhd is unused/g) || []
expect(matched.length).toEqual(1) // and it must be abandonned.vhd
// a missing vhd causes clean to remove all vhds
await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
vhds: [
`${basePath}/deleted.vhd`, // in metadata but not in vhds
`${basePath}/orphan.vhd`,
`${basePath}/child.vhd`,
// abandonned.vhd is not here
],
}),
{ flags: 'w' }
)
loggued = ''
await adapter.cleanVm('/', { remove: true, onLog })
matched = loggued.match(/deleting unused VHD /g) || []
expect(matched.length).toEqual(2) // all vhds (orphan and child) should have been deleted
})
test('it merges the delta of a non-destroyed chain', async () => {
await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
size: 12000, // a size too small
vhds: [
`${basePath}/grandchild.vhd`, // grand child should not be merged
`${basePath}/child.vhd`,
// orphan is not here, it should be merged into child
],
})
)
// one orphan, which is a full vhd, no parent
const orphan = await generateVhd(`${basePath}/orphan.vhd`)
// a child to the orphan
const child = await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
},
})
// a grand child
await generateVhd(`${basePath}/grandchild.vhd`, {
header: {
parentUnicodeName: 'child.vhd',
parentUid: child.footer.uuid,
},
})
let loggued = []
const onLog = message => {
loggued.push(message)
}
await adapter.cleanVm('/', { remove: true, onLog })
expect(loggued[0]).toEqual(`the parent /${basePath}/orphan.vhd of the child /${basePath}/child.vhd is unused`)
expect(loggued[1]).toEqual(`incorrect size in metadata: 12000 instead of 209920`)
loggued = []
await adapter.cleanVm('/', { remove: true, merge: true, onLog })
const [unused, merging] = loggued
expect(unused).toEqual(`the parent /${basePath}/orphan.vhd of the child /${basePath}/child.vhd is unused`)
expect(merging).toEqual(`merging /${basePath}/child.vhd into /${basePath}/orphan.vhd`)
const metadata = JSON.parse(await handler.readFile(`metadata.json`))
// size should be the size of children + grand children after the merge
expect(metadata.size).toEqual(209920)
// merging is already tested in vhd-lib, don't retest it here (and these VHDs are as empty as my stomach at 12h12)
// only check deletion
const remainingVhds = await handler.list(basePath)
expect(remainingVhds.length).toEqual(2)
expect(remainingVhds.includes('child.vhd')).toEqual(true)
expect(remainingVhds.includes('grandchild.vhd')).toEqual(true)
})
test('it finishes an unterminated merge', async () => {
await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
size: undefined,
vhds: [
`${basePath}/orphan.vhd`,
`${basePath}/child.vhd`,
],
})
)
// one orphan, which is a full vhd, no parent
const orphan = await generateVhd(`${basePath}/orphan.vhd`)
// a child to the orphan
const child = await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
},
})
// a merge in progress file
await handler.writeFile(
`${basePath}/.orphan.vhd.merge.json`,
JSON.stringify({
parent: {
header: orphan.header.checksum,
},
child: {
header: child.header.checksum,
},
})
)
// an unfinished merge
await adapter.cleanVm('/', { remove: true, merge: true })
// merging is already tested in vhd-lib, don't retest it here (and these VHDs are as empty as my stomach at 12h12)
// only check deletion
const remainingVhds = await handler.list(basePath)
expect(remainingVhds.length).toEqual(1)
expect(remainingVhds.includes('child.vhd')).toEqual(true)
})
// each of the vhds can be a file, a directory, an alias to a file or an alias to a directory
// the messages and resulting files should be identical to the output with VHD files, which is tested independently
describe('tests multiple combinations', () => {
for (const useAlias of [true, false]) {
for (const vhdMode of ['file', 'directory']) {
test(`alias : ${useAlias}, mode: ${vhdMode}`, async () => {
// a broken VHD
const brokenVhdDataPath = `${basePath}/${useAlias ? 'broken.data' : 'broken.vhd'}`
if (vhdMode === 'directory') {
await handler.mkdir(brokenVhdDataPath)
} else {
await handler.writeFile(brokenVhdDataPath, 'notreallyavhd')
}
if (useAlias) {
await VhdAbstract.createAlias(handler, 'broken.alias.vhd', brokenVhdDataPath)
}
// a vhd not referenced in the metadata
await generateVhd(`${basePath}/nonreference.vhd`, { useAlias, mode: vhdMode })
// an abandonned delta vhd without its parent
await generateVhd(`${basePath}/abandonned.vhd`, {
useAlias,
mode: vhdMode,
header: {
parentUnicodeName: 'gone.vhd',
parentUid: crypto.randomBytes(16),
},
})
// an ancestor of a vhd present in metadata
const ancestor = await generateVhd(`${basePath}/ancestor.vhd`, {
useAlias,
mode: vhdMode,
})
const child = await generateVhd(`${basePath}/child.vhd`, {
useAlias,
mode: vhdMode,
header: {
parentUnicodeName: 'ancestor.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: ancestor.footer.uuid,
},
})
// a grand child vhd in metadata
await generateVhd(`${basePath}/grandchild.vhd`, {
useAlias,
mode: vhdMode,
header: {
parentUnicodeName: 'child.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: child.footer.uuid,
},
})
// an older parent that was being merged into clean
const cleanAncestor = await generateVhd(`${basePath}/cleanAncestor.vhd`, {
useAlias,
mode: vhdMode,
})
// a clean vhd in metadata
const clean = await generateVhd(`${basePath}/clean.vhd`, {
useAlias,
mode: vhdMode,
header: {
parentUnicodeName: 'cleanAncestor.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: cleanAncestor.footer.uuid,
},
})
await handler.writeFile(
`${basePath}/.cleanAncestor.vhd${useAlias ? '.alias.vhd' : ''}.merge.json`,
JSON.stringify({
parent: {
header: cleanAncestor.header.checksum,
},
child: {
header: clean.header.checksum,
},
})
)
// the metadata file
await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
vhds: [
`${basePath}/grandchild.vhd` + (useAlias ? '.alias.vhd' : ''), // grand child should not be merged
`${basePath}/child.vhd` + (useAlias ? '.alias.vhd' : ''),
`${basePath}/clean.vhd` + (useAlias ? '.alias.vhd' : ''),
],
})
)
await adapter.cleanVm('/', { remove: true, merge: true })
const metadata = JSON.parse(await handler.readFile(`metadata.json`))
// size should be the size of children + grand children + clean after the merge
expect(metadata.size).toEqual(vhdMode === 'file' ? 314880 : undefined)
// the broken, non-referenced and abandonned vhds should be deleted (alias and data)
// ancestor and child should be merged
// grand child and clean vhd should not have changed
const survivors = await handler.list(basePath)
if (useAlias) {
// the goal of aliases: avoid moving a full folder
expect(survivors).toContain('ancestor.vhd.data')
expect(survivors).toContain('grandchild.vhd.data')
expect(survivors).toContain('cleanAncestor.vhd.data')
expect(survivors).toContain('clean.vhd.alias.vhd')
expect(survivors).toContain('child.vhd.alias.vhd')
expect(survivors).toContain('grandchild.vhd.alias.vhd')
expect(survivors.length).toEqual(6)
} else {
expect(survivors).toContain('clean.vhd')
expect(survivors).toContain('child.vhd')
expect(survivors).toContain('grandchild.vhd')
expect(survivors.length).toEqual(3)
}
})
}
}
})
test('it cleans orphan merge states', async () => {
await handler.writeFile(`${basePath}/.orphan.vhd.merge.json`, '')
await adapter.cleanVm('/', { remove: true })
expect(await handler.list(basePath)).toEqual([])
})
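The merge-state naming convention exercised by these tests can be sketched in isolation; `mergeStatePath` below is a hypothetical helper, not part of the code base:

```javascript
// Hypothetical helper illustrating the merge-state naming convention used in
// the tests above: the in-progress merge state for `<dir>/foo.vhd` is stored
// next to it as `<dir>/.foo.vhd.merge.json`.
function mergeStatePath(vhdPath) {
  const i = vhdPath.lastIndexOf('/')
  return `${vhdPath.slice(0, i + 1)}.${vhdPath.slice(i + 1)}.merge.json`
}

console.log(mergeStatePath('vm/vdis/disk.vhd'))
// → vm/vdis/.disk.vhd.merge.json
```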


@@ -1,16 +1,38 @@
const assert = require('assert')
const sum = require('lodash/sum')
const { asyncMap } = require('@xen-orchestra/async-map')
const { default: Vhd, mergeVhd } = require('vhd-lib')
const { Constants, mergeVhd, openVhd, VhdAbstract, VhdFile } = require('vhd-lib')
const { dirname, resolve } = require('path')
const { DISK_TYPE_DIFFERENCING } = require('vhd-lib/dist/_constants.js')
const { DISK_TYPES } = Constants
const { isMetadataFile, isVhdFile, isXvaFile, isXvaSumFile } = require('./_backupType.js')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { Task } = require('./Task.js')
const { Disposable } = require('promise-toolbox')
// checking the size of a VHD directory is costly
// (1 HTTP query per 1000 blocks)
// so we only compute the size if all the VHDs are VhdFiles
function shouldComputeVhdsSize(vhds) {
return vhds.every(vhd => vhd instanceof VhdFile)
}
const computeVhdsSize = (handler, vhdPaths) =>
Disposable.use(
vhdPaths.map(vhdPath => openVhd(handler, vhdPath)),
async vhds => {
if (shouldComputeVhdsSize(vhds)) {
const sizes = await asyncMap(vhds, vhd => vhd.getSize())
return sum(sizes)
}
}
)
// chain is an array of VHDs from child to parent
//
// the whole chain will be merged into parent, parent will be renamed to child
// and all the others will be deleted
const mergeVhdChain = limitConcurrency(1)(async function mergeVhdChain(chain, { handler, onLog, remove, merge }) {
async function mergeVhdChain(chain, { handler, onLog, remove, merge }) {
assert(chain.length >= 2)
let child = chain[0]
@@ -43,7 +65,7 @@ const mergeVhdChain = limitConcurrency(1)(async function mergeVhdChain(chain, {
}
}, 10e3)
await mergeVhd(
const mergedSize = await mergeVhd(
handler,
parent,
handler,
@@ -62,24 +84,26 @@ const mergeVhdChain = limitConcurrency(1)(async function mergeVhdChain(chain, {
clearInterval(handle)
await Promise.all([
handler.rename(parent, child),
VhdAbstract.rename(handler, parent, child),
asyncMap(children.slice(0, -1), child => {
onLog(`the VHD ${child} is unused`)
if (remove) {
onLog(`deleting unused VHD ${child}`)
return handler.unlink(child)
return VhdAbstract.unlink(handler, child)
}
}),
])
return mergedSize
}
})
}
const noop = Function.prototype
const INTERRUPTED_VHDS_REG = /^(?:(.+)\/)?\.(.+)\.merge.json$/
const INTERRUPTED_VHDS_REG = /^\.(.+)\.merge.json$/
const listVhds = async (handler, vmDir) => {
const vhds = []
const interruptedVhds = new Set()
const vhds = new Set()
const interruptedVhds = new Map()
await asyncMap(
await handler.list(`${vmDir}/vdis`, {
@@ -94,16 +118,14 @@ const listVhds = async (handler, vmDir) => {
async vdiDir => {
const list = await handler.list(vdiDir, {
filter: file => isVhdFile(file) || INTERRUPTED_VHDS_REG.test(file),
prependDir: true,
})
list.forEach(file => {
const res = INTERRUPTED_VHDS_REG.exec(file)
if (res === null) {
vhds.push(file)
vhds.add(`${vdiDir}/${file}`)
} else {
const [, dir, file] = res
interruptedVhds.add(`${dir}/${file}`)
interruptedVhds.set(`${vdiDir}/${res[1]}`, `${vdiDir}/${file}`)
}
})
}
@@ -113,65 +135,91 @@ const listVhds = async (handler, vmDir) => {
return { vhds, interruptedVhds }
}
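The updated `INTERRUPTED_VHDS_REG` can be exercised on its own; a small sketch of how it is expected to decompose a merge-state file name:

```javascript
// The new merge-state regex no longer captures an optional directory:
// listVhds now prepends the vdiDir itself, so only bare file names match.
const INTERRUPTED_VHDS_REG = /^\.(.+)\.merge.json$/

const res = INTERRUPTED_VHDS_REG.exec('.disk.vhd.merge.json')
console.log(res[1]) // → disk.vhd

// a path containing a directory no longer matches
console.log(INTERRUPTED_VHDS_REG.test('dir/.disk.vhd.merge.json')) // → false
```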
exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop }) {
const defaultMergeLimiter = limitConcurrency(1)
exports.cleanVm = async function cleanVm(
vmDir,
{ fixMetadata, remove, merge, mergeLimiter = defaultMergeLimiter, onLog = noop }
) {
const limitedMergeVhdChain = mergeLimiter(mergeVhdChain)
const handler = this._handler
const vhds = new Set()
const vhdsToJSons = new Set()
const vhdParents = { __proto__: null }
const vhdChildren = { __proto__: null }
const vhdsList = await listVhds(handler, vmDir)
const { vhds, interruptedVhds } = await listVhds(handler, vmDir)
// remove broken VHDs
await asyncMap(vhdsList.vhds, async path => {
await asyncMap(vhds, async path => {
try {
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter(!vhdsList.interruptedVhds.has(path))
vhds.add(path)
if (vhd.footer.diskType === DISK_TYPE_DIFFERENCING) {
const parent = resolve('/', dirname(path), vhd.header.parentUnicodeName)
vhdParents[path] = parent
if (parent in vhdChildren) {
const error = new Error('this script does not support multiple VHD children')
error.parent = parent
error.child1 = vhdChildren[parent]
error.child2 = path
throw error // should we throw?
await Disposable.use(openVhd(handler, path, { checkSecondFooter: !interruptedVhds.has(path) }), vhd => {
if (vhd.footer.diskType === DISK_TYPES.DIFFERENCING) {
const parent = resolve('/', dirname(path), vhd.header.parentUnicodeName)
vhdParents[path] = parent
if (parent in vhdChildren) {
const error = new Error('this script does not support multiple VHD children')
error.parent = parent
error.child1 = vhdChildren[parent]
error.child2 = path
throw error // should we throw?
}
vhdChildren[parent] = path
}
vhdChildren[parent] = path
}
})
} catch (error) {
vhds.delete(path)
onLog(`error while checking the VHD with path ${path}`, { error })
if (error?.code === 'ERR_ASSERTION' && remove) {
onLog(`deleting broken ${path}`)
await handler.unlink(path)
return VhdAbstract.unlink(handler, path)
}
}
})
// remove interrupted merge states for missing VHDs
for (const interruptedVhd of interruptedVhds.keys()) {
if (!vhds.has(interruptedVhd)) {
const statePath = interruptedVhds.get(interruptedVhd)
interruptedVhds.delete(interruptedVhd)
onLog('orphan merge state', {
mergeStatePath: statePath,
missingVhdPath: interruptedVhd,
})
if (remove) {
onLog(`deleting orphan merge state ${statePath}`)
await handler.unlink(statePath)
}
}
}
// @todo: add a check for alias data folders not referenced by a valid alias
// remove VHDs with missing ancestors
{
const deletions = []
// return true if the VHD has been deleted or is missing
const deleteIfOrphan = vhd => {
const parent = vhdParents[vhd]
const deleteIfOrphan = vhdPath => {
const parent = vhdParents[vhdPath]
if (parent === undefined) {
return
}
// no longer needs to be checked
delete vhdParents[vhd]
delete vhdParents[vhdPath]
deleteIfOrphan(parent)
if (!vhds.has(parent)) {
vhds.delete(vhd)
vhds.delete(vhdPath)
onLog(`the parent ${parent} of the VHD ${vhd} is missing`)
onLog(`the parent ${parent} of the VHD ${vhdPath} is missing`)
if (remove) {
onLog(`deleting orphan VHD ${vhd}`)
deletions.push(handler.unlink(vhd))
onLog(`deleting orphan VHD ${vhdPath}`)
deletions.push(VhdAbstract.unlink(handler, vhdPath))
}
}
}
@@ -187,7 +235,7 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
await Promise.all(deletions)
}
const jsons = []
const jsons = new Set()
const xvas = new Set()
const xvaSums = []
const entries = await handler.list(vmDir, {
@@ -195,7 +243,7 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
})
entries.forEach(path => {
if (isMetadataFile(path)) {
jsons.push(path)
jsons.add(path)
} else if (isXvaFile(path)) {
xvas.add(path)
} else if (isXvaSumFile(path)) {
@@ -217,17 +265,25 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
// compile the list of unused XVAs and VHDs, and remove backup metadata which
// reference a missing XVA/VHD
await asyncMap(jsons, async json => {
const metadata = JSON.parse(await handler.readFile(json))
let metadata
try {
metadata = JSON.parse(await handler.readFile(json))
} catch (error) {
onLog(`failed to read metadata file ${json}`, { error })
jsons.delete(json)
return
}
const { mode } = metadata
if (mode === 'full') {
const linkedXva = resolve('/', vmDir, metadata.xva)
if (xvas.has(linkedXva)) {
unusedXvas.delete(linkedXva)
} else {
onLog(`the XVA linked to the metadata ${json} is missing`)
if (remove) {
onLog(`deleting incomplete backup ${json}`)
jsons.delete(json)
await handler.unlink(json)
}
}
@@ -237,14 +293,20 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
return Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
})()
const missingVhds = linkedVhds.filter(_ => !vhds.has(_))
// FIXME: find better approach by keeping as much of the backup as
// possible (existing disks) even if one disk is missing
if (linkedVhds.every(_ => vhds.has(_))) {
if (missingVhds.length === 0) {
linkedVhds.forEach(_ => unusedVhds.delete(_))
linkedVhds.forEach(path => {
vhdsToJSons[path] = json
})
} else {
onLog(`Some VHDs linked to the metadata ${json} are missing`)
onLog(`Some VHDs linked to the metadata ${json} are missing`, { missingVhds })
if (remove) {
onLog(`deleting incomplete backup ${json}`)
jsons.delete(json)
await handler.unlink(json)
}
}
@@ -253,6 +315,7 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
// TODO: parallelize by vm/job/vdi
const unusedVhdsDeletion = []
const toMerge = []
{
// VHD chains (as list from child to ancestor) to merge indexed by last
// ancestor
@@ -286,7 +349,7 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
onLog(`the VHD ${vhd} is unused`)
if (remove) {
onLog(`deleting unused VHD ${vhd}`)
unusedVhdsDeletion.push(handler.unlink(vhd))
unusedVhdsDeletion.push(VhdAbstract.unlink(handler, vhd))
}
}
@@ -295,22 +358,31 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
})
// merge interrupted VHDs
if (merge) {
vhdsList.interruptedVhds.forEach(parent => {
vhdChainsToMerge[parent] = [vhdChildren[parent], parent]
})
for (const parent of interruptedVhds.keys()) {
vhdChainsToMerge[parent] = [vhdChildren[parent], parent]
}
Object.keys(vhdChainsToMerge).forEach(key => {
const chain = vhdChainsToMerge[key]
Object.values(vhdChainsToMerge).forEach(chain => {
if (chain !== undefined) {
unusedVhdsDeletion.push(mergeVhdChain(chain, { handler, onLog, remove, merge }))
toMerge.push(chain)
}
})
}
const metadataWithMergedVhd = {}
const doMerge = async () => {
await asyncMap(toMerge, async chain => {
const merged = await limitedMergeVhdChain(chain, { handler, onLog, remove, merge })
if (merged !== undefined) {
const metadataPath = vhdsToJSons[chain[0]] // the whole chain should have the same metadata file
metadataWithMergedVhd[metadataPath] = true
}
})
}
await Promise.all([
...unusedVhdsDeletion,
toMerge.length !== 0 && (merge ? Task.run({ name: 'merge' }, doMerge) : doMerge()),
asyncMap(unusedXvas, path => {
onLog(`the XVA ${path} is unused`)
if (remove) {
@@ -329,4 +401,55 @@ exports.cleanVm = async function cleanVm(vmDir, { remove, merge, onLog = noop })
}
}),
])
// update size for delta metadata with merged VHD
// check for the other that the size is the same as the real file size
await asyncMap(jsons, async metadataPath => {
const metadata = JSON.parse(await handler.readFile(metadataPath))
let fileSystemSize
const merged = metadataWithMergedVhd[metadataPath] !== undefined
const { mode, size, vhds, xva } = metadata
try {
if (mode === 'full') {
// a full backup : check size
const linkedXva = resolve('/', vmDir, xva)
fileSystemSize = await handler.getSize(linkedXva)
} else if (mode === 'delta') {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)
// the size is not computed in some cases (e.g. VhdDirectory)
if (fileSystemSize === undefined) {
return
}
// don't warn if the size has changed after a merge
if (!merged && fileSystemSize !== size) {
onLog(`incorrect size in metadata: ${size ?? 'none'} instead of ${fileSystemSize}`)
}
}
} catch (error) {
onLog(`failed to get size of ${metadataPath}`, { error })
return
}
// systematically update size after a merge
if ((merged || fixMetadata) && size !== fileSystemSize) {
metadata.size = fileSystemSize
try {
await handler.writeFile(metadataPath, JSON.stringify(metadata), { flags: 'w' })
} catch (error) {
onLog(`failed to update size in backup metadata ${metadataPath} after merge`, { error })
}
}
})
return {
// boolean whether some VHDs were merged (or should be merged)
merge: toMerge.length !== 0,
}
}
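The size-reconciliation logic at the end of `cleanVm` boils down to a small decision table; a hedged sketch with illustrative names (not from the code base):

```javascript
// Illustrative decision helper mirroring the logic above: warn only when the
// recorded size is stale and no merge explains the change; rewrite the
// metadata only after a merge or when fixMetadata is requested.
function reconcileSize({ recorded, actual, merged, fixMetadata }) {
  return {
    warn: !merged && recorded !== actual,
    update: (merged || fixMetadata) && recorded !== actual,
  }
}

console.log(reconcileSize({ recorded: 12000, actual: 209920, merged: false, fixMetadata: false }))
// → { warn: true, update: false }
```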


@@ -202,6 +202,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
blocked_operations: {
...vmRecord.blocked_operations,
start: 'Importing…',
start_on: 'Importing…',
},
ha_always_run: false,
is_a_template: false,
@@ -305,9 +306,6 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}
}),
// Wait for VDI export tasks (if any) termination.
Promise.all(Object.values(streams).map(stream => stream.task)),
// Create VIFs.
asyncMap(Object.values(deltaVm.vifs), vif => {
let network = vif.$network$uuid && xapi.getObjectByUuid(vif.$network$uuid, undefined)


@@ -1,11 +1,24 @@
const assert = require('assert')
const isGzipFile = async (handler, fd) => {
const COMPRESSED_MAGIC_NUMBERS = [
// https://tools.ietf.org/html/rfc1952.html#page-5
const magicNumber = Buffer.allocUnsafe(2)
Buffer.from('1F8B', 'hex'),
assert.strictEqual((await handler.read(fd, magicNumber, 0)).bytesRead, magicNumber.length)
return magicNumber[0] === 31 && magicNumber[1] === 139
// https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#zstandard-frames
Buffer.from('28B52FFD', 'hex'),
]
const MAGIC_NUMBER_MAX_LENGTH = Math.max(...COMPRESSED_MAGIC_NUMBERS.map(_ => _.length))
const isCompressedFile = async (handler, fd) => {
const header = Buffer.allocUnsafe(MAGIC_NUMBER_MAX_LENGTH)
assert.strictEqual((await handler.read(fd, header, 0)).bytesRead, header.length)
for (const magicNumber of COMPRESSED_MAGIC_NUMBERS) {
if (magicNumber.compare(header, 0, magicNumber.length) === 0) {
return true
}
}
return false
}
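The magic-number check above can be exercised on plain buffers; a minimal sketch assuming the same two signatures (`looksCompressed` is a hypothetical name, the real code reads the header from a file descriptor):

```javascript
// gzip (RFC 1952) and zstandard frame magic numbers, as in the patch above
const COMPRESSED_MAGIC_NUMBERS = [
  Buffer.from('1F8B', 'hex'),
  Buffer.from('28B52FFD', 'hex'),
]

// hypothetical helper: does this header buffer start with a known signature?
function looksCompressed(header) {
  return COMPRESSED_MAGIC_NUMBERS.some(
    magicNumber => magicNumber.compare(header, 0, magicNumber.length) === 0
  )
}

console.log(looksCompressed(Buffer.from('1F8B0800', 'hex'))) // → true (gzip)
console.log(looksCompressed(Buffer.from('28B52FFD', 'hex'))) // → true (zstd)
console.log(looksCompressed(Buffer.from('64697363', 'hex'))) // → false
```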
// TODO: better check?
@@ -43,8 +56,8 @@ async function isValidXva(path) {
return false
}
return (await isGzipFile(handler, fd))
? true // gzip files cannot be validated at this time
return (await isCompressedFile(handler, fd))
? true // compressed files cannot be validated at this time
: await isValidTar(handler, size, fd)
} finally {
handler.closeFile(fd).catch(noop)


@@ -7,23 +7,25 @@ const { execFile } = require('child_process')
const parse = createParser({
keyTransform: key => key.slice(5).toLowerCase(),
})
const makeFunction = command => async (fields, ...args) => {
const info = await fromCallback(execFile, command, [
'--noheading',
'--nosuffix',
'--nameprefixes',
'--unbuffered',
'--units',
'b',
'-o',
String(fields),
...args,
])
return info
.trim()
.split(/\r?\n/)
.map(Array.isArray(fields) ? parse : line => parse(line)[fields])
}
const makeFunction =
command =>
async (fields, ...args) => {
const info = await fromCallback(execFile, command, [
'--noheading',
'--nosuffix',
'--nameprefixes',
'--unbuffered',
'--units',
'b',
'-o',
String(fields),
...args,
])
return info
.trim()
.split(/\r?\n/)
.map(Array.isArray(fields) ? parse : line => parse(line)[fields])
}
exports.lvs = makeFunction('lvs')
exports.pvs = makeFunction('pvs')
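The `--nameprefixes` flag makes `lvs`/`pvs` emit `LVM2_KEY='value'` pairs; the real code parses them with the parse-pairs package, but what its `keyTransform` amounts to can be sketched without it (illustrative only, not the actual parser):

```javascript
// Illustrative stand-in for the parse-pairs based parser: extract
// LVM2_KEY='value' pairs and apply the same keyTransform as above
// (drop the 5-character 'LVM2_' prefix, lowercase the rest).
function parseLine(line) {
  const fields = {}
  for (const [, key, value] of line.matchAll(/(LVM2_\w+)='([^']*)'/g)) {
    fields[key.slice(5).toLowerCase()] = value
  }
  return fields
}

console.log(parseLine("LVM2_LV_NAME='root' LVM2_LV_SIZE='107374182400'"))
// → { lv_name: 'root', lv_size: '107374182400' }
```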


@@ -0,0 +1,69 @@
#!/usr/bin/env node
const { catchGlobalErrors } = require('@xen-orchestra/log/configure.js')
const { createLogger } = require('@xen-orchestra/log')
const { getSyncedHandler } = require('@xen-orchestra/fs')
const { join } = require('path')
const Disposable = require('promise-toolbox/Disposable')
const min = require('lodash/min')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { RemoteAdapter } = require('../RemoteAdapter.js')
const { CLEAN_VM_QUEUE } = require('./index.js')
// -------------------------------------------------------------------
catchGlobalErrors(createLogger('xo:backups:mergeWorker'))
const { fatal, info, warn } = createLogger('xo:backups:mergeWorker')
// -------------------------------------------------------------------
const main = Disposable.wrap(async function* main(args) {
const handler = yield getSyncedHandler({ url: 'file://' + process.cwd() })
yield handler.lock(CLEAN_VM_QUEUE)
const adapter = new RemoteAdapter(handler)
const listRetry = async () => {
const timeoutResolver = resolve => setTimeout(resolve, 10e3)
for (let i = 0; i < 10; ++i) {
const entries = await handler.list(CLEAN_VM_QUEUE)
if (entries.length !== 0) {
return entries
}
await new Promise(timeoutResolver)
}
}
let taskFiles
while ((taskFiles = await listRetry()) !== undefined) {
const taskFileBasename = min(taskFiles)
const taskFile = join(CLEAN_VM_QUEUE, '_' + taskFileBasename)
// move this task to the end
await handler.rename(join(CLEAN_VM_QUEUE, taskFileBasename), taskFile)
try {
const vmDir = getVmBackupDir(String(await handler.readFile(taskFile)))
await adapter.cleanVm(vmDir, { merge: true, onLog: info, remove: true })
handler.unlink(taskFile).catch(error => warn('deleting task failure', { error }))
} catch (error) {
warn('failure handling task', { error })
}
}
})
info('starting')
main(process.argv.slice(2)).then(
() => {
info('bye :-)')
},
error => {
fatal(error)
process.exit(1)
}
)
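The `listRetry` loop in the merge worker is a bounded-retry poll; a generic sketch under the assumption of the same 10-attempt, fixed-delay shape (the helper name is made up):

```javascript
// Hypothetical generalization of listRetry above: poll an async producer up
// to `attempts` times, waiting `delayMs` between tries, and resolve to
// undefined if it never returns a non-empty list.
async function pollUntilNonEmpty(list, attempts = 10, delayMs = 10e3) {
  for (let i = 0; i < attempts; ++i) {
    const entries = await list()
    if (entries.length !== 0) {
      return entries
    }
    await new Promise(resolve => setTimeout(resolve, delayMs))
  }
}

// resolves to ['task'] on the third attempt
let calls = 0
pollUntilNonEmpty(async () => (++calls < 3 ? [] : ['task']), 5, 1).then(console.log)
```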


@@ -0,0 +1,25 @@
const { join, resolve } = require('path')
const { spawn } = require('child_process')
const { check } = require('proper-lockfile')
const CLEAN_VM_QUEUE = (exports.CLEAN_VM_QUEUE = '/xo-vm-backups/.queue/clean-vm/')
const CLI_PATH = resolve(__dirname, 'cli.js')
exports.run = async function runMergeWorker(remotePath) {
try {
// TODO: find a way to acquire the lock and then pass it down to the worker
if (await check(join(remotePath, CLEAN_VM_QUEUE))) {
// already locked, don't start another worker
return
}
spawn(CLI_PATH, {
cwd: remotePath,
detached: true,
stdio: 'inherit',
}).unref()
} catch (error) {
// we usually don't want to throw if the merge worker failed to start
return error
}
}


@@ -8,7 +8,7 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.11.0",
"version": "0.18.3",
"engines": {
"node": ">=14.6"
},
@@ -16,29 +16,31 @@
"postversion": "npm publish --access public"
},
"dependencies": {
"@vates/compose": "^2.0.0",
"@vates/compose": "^2.1.0",
"@vates/disposable": "^0.1.1",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^0.17.0",
"@xen-orchestra/log": "^0.2.0",
"@xen-orchestra/fs": "^0.19.3",
"@xen-orchestra/log": "^0.3.0",
"@xen-orchestra/template": "^0.1.0",
"compare-versions": "^3.6.0",
"compare-versions": "^4.0.1",
"d3-time-format": "^3.0.0",
"end-of-stream": "^1.4.4",
"fs-extra": "^9.0.0",
"fs-extra": "^10.0.0",
"golike-defer": "^0.5.1",
"limit-concurrency-decorator": "^0.5.0",
"lodash": "^4.17.20",
"node-zone": "^0.4.0",
"parse-pairs": "^1.1.0",
"promise-toolbox": "^0.20.0",
"proper-lockfile": "^4.1.2",
"pump": "^3.0.0",
"promise-toolbox": "^0.19.2",
"vhd-lib": "^1.0.0",
"uuid": "^8.3.2",
"vhd-lib": "^3.0.0",
"yazl": "^2.5.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^0.6.2"
"@xen-orchestra/xapi": "^0.8.5"
},
"license": "AGPL-3.0-or-later",
"author": {


@@ -0,0 +1,92 @@
// a valid footer of a 2
exports.VHDFOOTER = {
cookie: 'conectix',
features: 2,
fileFormatVersion: 65536,
dataOffset: 512,
timestamp: 0,
creatorApplication: 'caml',
creatorVersion: 1,
creatorHostOs: 0,
originalSize: 53687091200,
currentSize: 53687091200,
diskGeometry: { cylinders: 25700, heads: 16, sectorsPerTrackCylinder: 255 },
diskType: 3,
checksum: 4294962945,
uuid: Buffer.from('d8dbcad85265421e8b298d99c2eec551', 'utf-8'),
saved: '',
hidden: '',
reserved: '',
}
exports.VHDHEADER = {
cookie: 'cxsparse',
dataOffset: undefined,
tableOffset: 2048,
headerVersion: 65536,
maxTableEntries: 25600,
blockSize: 2097152,
checksum: 4294964241,
parentUuid: null,
parentTimestamp: 0,
reserved1: 0,
parentUnicodeName: '',
parentLocatorEntry: [
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
{
platformCode: 0,
platformDataSpace: 0,
platformDataLength: 0,
reserved: 0,
platformDataOffset: 0,
},
],
reserved2: '',
}


@@ -3,7 +3,7 @@ const map = require('lodash/map.js')
const mapValues = require('lodash/mapValues.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { chainVhd, checkVhdChain, default: Vhd } = require('vhd-lib')
const { chainVhd, checkVhdChain, openVhd, VhdAbstract } = require('vhd-lib')
const { createLogger } = require('@xen-orchestra/log')
const { dirname } = require('path')
@@ -16,6 +16,7 @@ const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { checkVhd } = require('./_checkVhd.js')
const { packUuid } = require('./_packUuid.js')
const { Disposable } = require('promise-toolbox')
const { warn } = createLogger('xo:backups:DeltaBackupWriter')
@@ -23,6 +24,7 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
async checkBaseVdis(baseUuidToSrcVdi) {
const { handler } = this._adapter
const backup = this._backup
const adapter = this._adapter
const backupDir = getVmBackupDir(backup.vm.uuid)
const vdisDir = `${backupDir}/vdis/${backup.job.id}`
@@ -34,16 +36,21 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
filter: _ => _[0] !== '.' && _.endsWith('.vhd'),
prependDir: true,
})
const packedBaseUuid = packUuid(baseUuid)
await asyncMap(vhds, async path => {
try {
await checkVhdChain(handler, path)
// Warning: this should not be written as found = found || await adapter.isMergeableParent(packedBaseUuid, path)
//
// since all the paths are checked in parallel, found would contain only the
// last answer of isMergeableParent, which is probably not the right one
// this led to the support tickets https://help.vates.fr/#ticket/zoom/4751, 4729, 4665 and 4300
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
found = found || vhd.footer.uuid.equals(packUuid(baseUuid))
const isMergeable = await adapter.isMergeableParent(packedBaseUuid, path)
found = found || isMergeable
} catch (error) {
warn('checkBaseVdis', { error })
await ignoreErrors.call(handler.unlink(path))
await ignoreErrors.call(VhdAbstract.unlink(handler, path))
}
})
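The race this comment warns about is easy to reproduce: in `found = found || await check(path)`, each parallel iteration reads `found` before awaiting, so a slow falsy check can clobber an earlier truthy result. A minimal demonstration (assumption: plain `Promise.all` stands in for `asyncMap`):

```javascript
// racy: every callback reads `found` (still false) before its await, so the
// slowest assignment wins regardless of earlier true results
async function racy(checks) {
  let found = false
  await Promise.all(
    checks.map(async check => {
      found = found || (await check())
    })
  )
  return found
}

// safe: await first, then combine
async function safe(checks) {
  let found = false
  await Promise.all(
    checks.map(async check => {
      const result = await check()
      found = found || result
    })
  )
  return found
}

const delayed = (value, ms) => () => new Promise(resolve => setTimeout(() => resolve(value), ms))

racy([delayed(true, 1), delayed(false, 20)]).then(console.log) // → false (the bug)
safe([delayed(true, 1), delayed(false, 20)]).then(console.log) // → true
```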
} catch (error) {
@@ -113,22 +120,16 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
}
async _deleteOldEntries() {
return Task.run({ name: 'merge' }, async () => {
const adapter = this._adapter
const oldEntries = this._oldEntries
const adapter = this._adapter
const oldEntries = this._oldEntries
let size = 0
// delete sequentially from newest to oldest to avoid unnecessary merges
for (let i = oldEntries.length; i-- > 0; ) {
size += await adapter.deleteDeltaVmBackups([oldEntries[i]])
}
return {
size,
}
})
// delete sequentially from newest to oldest to avoid unnecessary merges
for (let i = oldEntries.length; i-- > 0; ) {
await adapter.deleteDeltaVmBackups([oldEntries[i]])
}
}
async transfer({ timestamp, deltaExport, sizeContainers }) {
async _transfer({ timestamp, deltaExport, sizeContainers }) {
const adapter = this._adapter
const backup = this._backup
@@ -150,7 +151,7 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
// don't do delta for it
vdi.uuid
: vdi.$snapshot_of$uuid
}/${basename}.vhd`
}/${adapter.getVhdFileName(basename)}`
)
const metadataFilename = `${backupDir}/${basename}.json`
@@ -194,7 +195,7 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
await checkVhd(handler, parentPath)
}
await adapter.outputStream(path, deltaExport.streams[`${id}.vhd`], {
await adapter.writeVhd(path, deltaExport.streams[`${id}.vhd`], {
// no checksum for VHDs, because they will be invalidated by
// merges and chainings
checksum: false,
@@ -206,11 +207,11 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
}
// set the correct UUID in the VHD
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
vhd.footer.uuid = packUuid(vdi.uuid)
await vhd.readBlockAllocationTable() // required by writeFooter()
await vhd.writeFooter()
await Disposable.use(openVhd(handler, path), async vhd => {
vhd.footer.uuid = packUuid(vdi.uuid)
await vhd.readBlockAllocationTable() // required by writeFooter()
await vhd.writeFooter()
})
})
)
return {


@@ -77,7 +77,7 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
return asyncMapSettled(this._oldEntries, vm => vm.$destroy())
}
async transfer({ timestamp, deltaExport, sizeContainers }) {
async _transfer({ timestamp, deltaExport, sizeContainers }) {
const sr = this._sr
const { job, scheduleId, vm } = this._backup
@@ -106,9 +106,11 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
targetVm.ha_restart_priority !== '' &&
Promise.all([targetVm.set_ha_restart_priority(''), targetVm.add_tags('HA disabled')]),
targetVm.set_name_label(`${vm.name_label} - ${job.name} - (${formatFilenameDate(timestamp)})`),
targetVm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
asyncMap(['start', 'start_on'], op =>
targetVm.update_blocked_operations(
op,
'Start operation for this vm is blocked, clone it if you want to use it.'
)
),
targetVm.update_other_config({
'xo:backup:sr': srUuid,

View File

@@ -25,7 +25,7 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
)
}
async run({ timestamp, sizeContainer, stream }) {
async _run({ timestamp, sizeContainer, stream }) {
const backup = this._backup
const settings = this._settings

View File

@@ -1,5 +1,5 @@
const ignoreErrors = require('promise-toolbox/ignoreErrors.js')
const { asyncMapSettled } = require('@xen-orchestra/async-map')
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { formatFilenameDate } = require('../_filenameDate.js')
@@ -29,7 +29,7 @@ exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplica
)
}
async run({ timestamp, sizeContainer, stream }) {
async _run({ timestamp, sizeContainer, stream }) {
const sr = this._sr
const settings = this._settings
const { job, scheduleId, vm } = this._backup
@@ -64,9 +64,11 @@ exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplica
const targetVm = await xapi.getRecord('VM', targetVmRef)
await Promise.all([
targetVm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
asyncMap(['start', 'start_on'], op =>
targetVm.update_blocked_operations(
op,
'Start operation for this vm is blocked, clone it if you want to use it.'
)
),
targetVm.update_other_config({
'xo:backup:sr': srUuid,

View File

@@ -13,7 +13,14 @@ exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
throw new Error('Not implemented')
}
transfer({ timestamp, deltaExport, sizeContainers }) {
throw new Error('Not implemented')
async transfer({ timestamp, deltaExport, sizeContainers }) {
try {
return await this._transfer({ timestamp, deltaExport, sizeContainers })
} finally {
// ensure all streams are properly closed
for (const stream of Object.values(deltaExport.streams)) {
stream.destroy()
}
}
}
}
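The new `transfer` wrapper guarantees the export streams are destroyed whether `_transfer` resolves or throws. A minimal standalone sketch of the same try/finally pattern (names are illustrative, not the actual writer API):

```javascript
// Run `fn`, then destroy every stream in the map, even on error — the
// same guarantee the new AbstractDeltaWriter.transfer() provides.
async function withStreamsDestroyed(streams, fn) {
  try {
    return await fn()
  } finally {
    for (const stream of Object.values(streams)) {
      stream.destroy()
    }
  }
}
```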

View File

@@ -1,7 +1,12 @@
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractFullWriter = class AbstractFullWriter extends AbstractWriter {
run({ timestamp, sizeContainer, stream }) {
throw new Error('Not implemented')
async run({ timestamp, sizeContainer, stream }) {
try {
return await this._run({ timestamp, sizeContainer, stream })
} finally {
// ensure stream is properly closed
stream.destroy()
}
}
}

View File

@@ -1,34 +1,65 @@
const { createLogger } = require('@xen-orchestra/log')
const { join } = require('path')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { BACKUP_DIR, getVmBackupDir } = require('../_getVmBackupDir.js')
const MergeWorker = require('../merge-worker/index.js')
const { formatFilenameDate } = require('../_filenameDate.js')
const { warn } = createLogger('xo:backups:MixinBackupWriter')
exports.MixinBackupWriter = (BaseClass = Object) =>
class MixinBackupWriter extends BaseClass {
#lock
#vmBackupDir
constructor({ remoteId, ...rest }) {
super(rest)
this._adapter = rest.backup.remoteAdapters[remoteId]
this._remoteId = remoteId
this._lock = undefined
this.#vmBackupDir = getVmBackupDir(this._backup.vm.uuid)
}
_cleanVm(options) {
return this._adapter
.cleanVm(getVmBackupDir(this._backup.vm.uuid), { ...options, onLog: warn, lock: false })
.catch(warn)
async _cleanVm(options) {
try {
return await this._adapter.cleanVm(this.#vmBackupDir, {
...options,
fixMetadata: true,
onLog: warn,
lock: false,
})
} catch (error) {
warn(error)
return {}
}
}
async beforeBackup() {
const { handler } = this._adapter
const vmBackupDir = getVmBackupDir(this._backup.vm.uuid)
const vmBackupDir = this.#vmBackupDir
await handler.mktree(vmBackupDir)
this._lock = await handler.lock(vmBackupDir)
this.#lock = await handler.lock(vmBackupDir)
}
async afterBackup() {
await this._cleanVm({ remove: true, merge: true })
await this._lock.dispose()
const { disableMergeWorker } = this._backup.config
const { merge } = await this._cleanVm({ remove: true, merge: disableMergeWorker })
await this.#lock.dispose()
// merge worker only compatible with local remotes
const { handler } = this._adapter
if (merge && !disableMergeWorker && typeof handler._getRealPath === 'function') {
const taskFile =
join(MergeWorker.CLEAN_VM_QUEUE, formatFilenameDate(new Date())) +
'-' +
// add a random suffix to avoid collision in case multiple tasks are created at the same second
Math.random().toString(36).slice(2)
await handler.outputFile(taskFile, this._backup.vm.uuid)
const remotePath = handler._getRealPath()
await MergeWorker.run(remotePath)
}
}
}
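The merge-worker task file name combines a timestamp with a random base-36 suffix so that two tasks created within the same second do not collide. A hypothetical standalone version — the real code uses `formatFilenameDate()`, which is approximated here with an ISO timestamp:

```javascript
// Approximation of the merge-worker task file naming: timestamp plus a
// random base-36 suffix to avoid same-second collisions. toISOString()
// is an assumption; the real code uses formatFilenameDate().
const taskFileName = (date = new Date()) =>
  date.toISOString().replace(/[-:.]/g, '') + '-' + Math.random().toString(36).slice(2)
```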

View File

@@ -1,5 +1,6 @@
const Vhd = require('vhd-lib').default
const openVhd = require('vhd-lib').openVhd
const Disposable = require('promise-toolbox/Disposable')
exports.checkVhd = async function checkVhd(handler, path) {
await new Vhd(handler, path).readHeaderAndFooter()
await Disposable.use(openVhd(handler, path), () => {})
}
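`checkVhd` now relies on `Disposable.use`: the VHD is opened, the callback runs, and the resource is released even if the callback throws. A minimal re-implementation of the contract (a sketch, not the actual promise-toolbox code):

```javascript
// Minimal sketch of the Disposable.use contract: `disposable` resolves to
// a { value, dispose } pair; dispose() is always called, even on error.
async function use(disposable, fn) {
  const { value, dispose } = await disposable
  try {
    return await fn(value)
  } finally {
    await dispose()
  }
}
```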

View File

@@ -77,7 +77,11 @@ ${cliName} v${pkg.version}
'xo:backup:sr': tgtSr.uuid,
'xo:copy_of': srcSnapshotUuid,
}),
tgtVm.update_blocked_operations('start', 'Start operation for this vm is blocked, clone it if you want to use it.'),
Promise.all(
['start', 'start_on'].map(op =>
tgtVm.update_blocked_operations(op, 'Start operation for this vm is blocked, clone it if you want to use it.')
)
),
Promise.all(
userDevices.map(userDevice => {
const srcDisk = srcDisks[userDevice]

View File

@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^0.32.0"
"xen-api": "^0.35.1"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -2,6 +2,8 @@
import { createSchedule } from './'
jest.useFakeTimers()
const wrap = value => () => value
describe('issues', () => {

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/defined",
"version": "0.0.0",
"version": "0.0.1",
"license": "ISC",
"description": "Utilities to help handling (possibly) undefined values",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/defined",

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/emit-async",
"version": "0.0.0",
"version": "0.1.0",
"license": "ISC",
"description": "Emit an event for async listeners to settle",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/emit-async",

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "0.17.0",
"version": "0.19.3",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",
@@ -17,23 +17,23 @@
"node": ">=14"
},
"dependencies": {
"@marsaud/smb2": "^0.17.2",
"@marsaud/smb2": "^0.18.0",
"@sindresorhus/df": "^3.1.1",
"@sullux/aws-sdk": "^1.0.5",
"@vates/coalesce-calls": "^0.1.0",
"@xen-orchestra/async-map": "^0.1.2",
"aws-sdk": "^2.686.0",
"decorator-synchronized": "^0.5.0",
"decorator-synchronized": "^0.6.0",
"execa": "^5.0.0",
"fs-extra": "^9.0.0",
"fs-extra": "^10.0.0",
"get-stream": "^6.0.0",
"limit-concurrency-decorator": "^0.5.0",
"lodash": "^4.17.4",
"promise-toolbox": "^0.19.2",
"promise-toolbox": "^0.20.0",
"proper-lockfile": "^4.1.2",
"readable-stream": "^3.0.6",
"through2": "^4.0.2",
"xo-remote-parser": "^0.7.0"
"xo-remote-parser": "^0.8.0"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
@@ -45,7 +45,7 @@
"async-iterator-to-stream": "^1.1.0",
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^7.0.2",
"dotenv": "^8.0.0",
"dotenv": "^10.0.0",
"rimraf": "^3.0.0"
},
"scripts": {

View File

@@ -1,13 +1,13 @@
import asyncMapSettled from '@xen-orchestra/async-map/legacy'
import getStream from 'get-stream'
import path, { basename } from 'path'
import synchronized from 'decorator-synchronized'
import { coalesceCalls } from '@vates/coalesce-calls'
import { fromCallback, fromEvent, ignoreErrors, timeout } from 'promise-toolbox'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { parse } from 'xo-remote-parser'
import { pipeline } from 'stream'
import { randomBytes } from 'crypto'
import { synchronized } from 'decorator-synchronized'
import normalizePath from './_normalizePath'
import { createChecksumStream, validChecksumOfReadStream } from './checksum'
@@ -76,6 +76,7 @@ export default class RemoteHandlerAbstract {
const sharedLimit = limitConcurrency(options.maxParallelOperations ?? DEFAULT_MAX_PARALLEL_OPERATIONS)
this.closeFile = sharedLimit(this.closeFile)
this.copy = sharedLimit(this.copy)
this.getInfo = sharedLimit(this.getInfo)
this.getSize = sharedLimit(this.getSize)
this.list = sharedLimit(this.list)
@@ -307,6 +308,17 @@ export default class RemoteHandlerAbstract {
return p
}
async copy(oldPath, newPath, { checksum = false } = {}) {
oldPath = normalizePath(oldPath)
newPath = normalizePath(newPath)
let p = timeout.call(this._copy(oldPath, newPath), this._timeout)
if (checksum) {
p = Promise.all([p, this._copy(checksumFile(oldPath), checksumFile(newPath))])
}
return p
}
async rmdir(dir) {
await timeout.call(this._rmdir(normalizePath(dir)).catch(ignoreEnoent), this._timeout)
}
@@ -519,6 +531,9 @@ export default class RemoteHandlerAbstract {
async _rename(oldPath, newPath) {
throw new Error('Not implemented')
}
async _copy(oldPath, newPath) {
throw new Error('Not implemented')
}
async _rmdir(dir) {
throw new Error('Not implemented')

View File

@@ -27,3 +27,12 @@ export const getHandler = (remote, ...rest) => {
}
return new Handler(remote, ...rest)
}
export const getSyncedHandler = async (...opts) => {
const handler = getHandler(...opts)
await handler.sync()
return {
dispose: () => handler.forget(),
value: handler,
}
}

View File

@@ -33,6 +33,10 @@ export default class LocalHandler extends RemoteHandlerAbstract {
return fs.close(fd)
}
async _copy(oldPath, newPath) {
return fs.copy(this._getFilePath(oldPath), this._getFilePath(newPath))
}
async _createReadStream(file, options) {
if (typeof file === 'string') {
const stream = fs.createReadStream(this._getFilePath(file), options)

View File

@@ -1,6 +1,7 @@
import aws from '@sullux/aws-sdk'
import assert from 'assert'
import http from 'http'
import https from 'https'
import { parse } from 'xo-remote-parser'
import RemoteHandlerAbstract from './abstract'
@@ -16,7 +17,7 @@ const IDEAL_FRAGMENT_SIZE = Math.ceil(MAX_OBJECT_SIZE / MAX_PARTS_COUNT) // the
export default class S3Handler extends RemoteHandlerAbstract {
constructor(remote, _opts) {
super(remote)
const { host, path, username, password, protocol, region } = parse(remote.url)
const { allowUnauthorized, host, path, username, password, protocol, region } = parse(remote.url)
const params = {
accessKeyId: username,
apiVersion: '2006-03-01',
@@ -29,8 +30,13 @@ export default class S3Handler extends RemoteHandlerAbstract {
},
}
if (protocol === 'http') {
params.httpOptions.agent = new http.Agent()
params.httpOptions.agent = new http.Agent({ keepAlive: true })
params.sslEnabled = false
} else if (protocol === 'https') {
params.httpOptions.agent = new https.Agent({
rejectUnauthorized: !allowUnauthorized,
keepAlive: true,
})
}
if (region !== undefined) {
params.region = region
@@ -51,6 +57,27 @@ export default class S3Handler extends RemoteHandlerAbstract {
return { Bucket: this._bucket, Key: this._dir + file }
}
async _copy(oldPath, newPath) {
const size = await this._getSize(oldPath)
const multipartParams = await this._s3.createMultipartUpload({ ...this._createParams(newPath) })
const param2 = { ...multipartParams, CopySource: `/${this._bucket}/${this._dir}${oldPath}` }
try {
const parts = []
let start = 0
while (start < size) {
const range = `bytes=${start}-${Math.min(start + MAX_PART_SIZE, size) - 1}`
const partParams = { ...param2, PartNumber: parts.length + 1, CopySourceRange: range }
const upload = await this._s3.uploadPartCopy(partParams)
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partParams.PartNumber })
start += MAX_PART_SIZE
}
await this._s3.completeMultipartUpload({ ...multipartParams, MultipartUpload: { Parts: parts } })
} catch (e) {
await this._s3.abortMultipartUpload(multipartParams)
throw e
}
}
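The multipart copy above splits the source into `MAX_PART_SIZE` chunks; the HTTP ranges are inclusive, hence the `- 1` on the upper bound, and the last part may be shorter. The range computation in isolation (`maxPartSize` is made a parameter purely for illustration):

```javascript
// Compute the inclusive byte ranges used by S3 uploadPartCopy, mirroring
// the loop in _copy(). The real MAX_PART_SIZE constant is defined
// elsewhere; it is a parameter here for illustration.
function copyRanges(size, maxPartSize) {
  const ranges = []
  for (let start = 0; start < size; start += maxPartSize) {
    ranges.push(`bytes=${start}-${Math.min(start + maxPartSize, size) - 1}`)
  }
  return ranges
}
```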
async _isNotEmptyDir(path) {
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
@@ -125,16 +152,30 @@ export default class S3Handler extends RemoteHandlerAbstract {
const splitPrefix = splitPath(prefix)
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
Prefix: splitPrefix.join('/'),
Prefix: splitPrefix.join('/') + '/', // need slash at the end with the use of delimiters
Delimiter: '/', // will only return path until delimiters
})
const uniq = new Set()
if (result.isTruncated) {
const error = new Error('more than 1000 objects, unsupported in this implementation')
error.dir = dir
throw error
}
const uniq = []
// sub directories
for (const entry of result.CommonPrefixes) {
const line = splitPath(entry.Prefix)
uniq.push(line[line.length - 1])
}
// files
for (const entry of result.Contents) {
const line = splitPath(entry.Key)
if (line.length > splitPrefix.length) {
uniq.add(line[splitPrefix.length])
}
uniq.push(line[line.length - 1])
}
return [...uniq]
return uniq
}
async _mkdir(path) {
@@ -147,25 +188,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
// nothing to do, directories do not exist, they are part of the files' path
}
// s3 doesn't have a rename operation, so copy + delete source
async _rename(oldPath, newPath) {
const size = await this._getSize(oldPath)
const multipartParams = await this._s3.createMultipartUpload({ ...this._createParams(newPath) })
const param2 = { ...multipartParams, CopySource: `/${this._bucket}/${this._dir}${oldPath}` }
try {
const parts = []
let start = 0
while (start < size) {
const range = `bytes=${start}-${Math.min(start + MAX_PART_SIZE, size) - 1}`
const partParams = { ...param2, PartNumber: parts.length + 1, CopySourceRange: range }
const upload = await this._s3.uploadPartCopy(partParams)
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partParams.PartNumber })
start += MAX_PART_SIZE
}
await this._s3.completeMultipartUpload({ ...multipartParams, MultipartUpload: { Parts: parts } })
} catch (e) {
await this._s3.abortMultipartUpload(multipartParams)
throw e
}
await this.copy(oldPath, newPath)
await this._s3.deleteObject(this._createParams(oldPath))
}
@@ -183,9 +208,21 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
const params = this._createParams(file)
params.Range = `bytes=${position}-${position + buffer.length - 1}`
const result = await this._s3.getObject(params)
result.Body.copy(buffer)
return { bytesRead: result.Body.length, buffer }
try {
const result = await this._s3.getObject(params)
result.Body.copy(buffer)
return { bytesRead: result.Body.length, buffer }
} catch (e) {
if (e.code === 'NoSuchKey') {
if (await this._isNotEmptyDir(file)) {
const error = new Error(`${file} is a directory`)
error.code = 'EISDIR'
error.path = file
throw error
}
}
throw e
}
}
async _rmdir(path) {
@@ -199,6 +236,28 @@ export default class S3Handler extends RemoteHandlerAbstract {
// nothing to do, directories do not exist, they are part of the files' path
}
// reimplement _rmtree to efficiently handle paths with more than 1000 entries in trees
// @todo : use parallel processing for unlink
async _rmtree(path) {
let NextContinuationToken
do {
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
Prefix: this._dir + path + '/',
ContinuationToken: NextContinuationToken,
})
NextContinuationToken = result.isTruncated ? result.NextContinuationToken : undefined
for (const { Key } of result.Contents) {
// _unlink will add the prefix, but Key contains everything
// also we don't need to check whether we're deleting a directory, since the listing only returns files
await this._s3.deleteObject({
Bucket: this._bucket,
Key,
})
}
} while (NextContinuationToken !== undefined)
}
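`_rmtree` loops on the continuation token because `listObjectsV2` returns at most 1000 keys per call. The pagination skeleton, with a hypothetical `listPage` callback standing in for the S3 API:

```javascript
// Generic continuation-token pagination, as used by the new S3 _rmtree:
// keep fetching pages until the listing is no longer truncated.
// `listPage` is a hypothetical stand-in for s3.listObjectsV2().
async function forEachPaged(listPage, fn) {
  let token
  do {
    const { items, nextToken } = await listPage(token)
    for (const item of items) {
      await fn(item)
    }
    token = nextToken
  } while (token !== undefined)
}
```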
async _write(file, buffer, position) {
if (typeof file !== 'string') {
file = file.fd

View File

@@ -66,6 +66,10 @@ configure([
// if filter is a string, then it is pattern
// (https://github.com/visionmedia/debug#wildcards) which is
// matched against the namespace of the logs
//
// If it's an array, it will be handled as an array of filters
// and the transport will be used if any one of them match the
// current log
filter: process.env.DEBUG,
transport: transportConsole(),

View File

@@ -48,6 +48,10 @@ configure([
// if filter is a string, then it is pattern
// (https://github.com/visionmedia/debug#wildcards) which is
// matched against the namespace of the logs
//
// If it's an array, it will be handled as an array of filters
// and the transport will be used if any one of them match the
// current log
filter: process.env.DEBUG,
transport: transportConsole(),

View File

@@ -4,6 +4,42 @@ const { compileGlobPattern } = require('./utils')
// ===================================================================
const compileFilter = filter => {
if (filter === undefined) {
return
}
const type = typeof filter
if (type === 'function') {
return filter
}
if (type === 'string') {
const re = compileGlobPattern(filter)
return log => re.test(log.namespace)
}
if (Array.isArray(filter)) {
const filters = filter.map(compileFilter).filter(_ => _ !== undefined)
const { length } = filters
if (length === 0) {
return
}
if (length === 1) {
return filters[0]
}
return log => {
for (let i = 0; i < length; ++i) {
if (filters[i](log)) {
return true
}
}
return false
}
}
throw new TypeError('unsupported `filter`')
}
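`compileFilter` normalizes a function, a glob string, or an array of either into a single predicate; an array is OR-ed. The combination semantics can be sketched independently of the glob compilation (the predicate helpers below are illustrative, not part of the library):

```javascript
// OR-combination of log filters, mirroring the array branch of
// compileFilter(). The glob-pattern branch is omitted; byNamespace and
// byLevel are illustrative helpers.
const anyOf = filters => log => filters.some(f => f(log))

const byNamespace = prefix => log => log.namespace.startsWith(prefix)
const byLevel = min => log => log.level >= min

// a log passes if its namespace matches OR its level is high enough —
// this is how the user filter and the level check are now combined
const filter = anyOf([byNamespace('xo:backups'), byLevel(30)])
```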
const createTransport = config => {
if (typeof config === 'function') {
return config
@@ -19,26 +55,15 @@ const createTransport = config => {
}
}
let { filter } = config
let transport = createTransport(config.transport)
const level = resolve(config.level)
const filter = compileFilter([config.filter, level === undefined ? undefined : log => log.level >= level])
let transport = createTransport(config.transport)
if (filter !== undefined) {
if (typeof filter === 'string') {
const re = compileGlobPattern(filter)
filter = log => re.test(log.namespace)
}
const orig = transport
transport = function (log) {
if ((level !== undefined && log.level >= level) || filter(log)) {
return orig.apply(this, arguments)
}
}
} else if (level !== undefined) {
const orig = transport
transport = function (log) {
if (log.level >= level) {
if (filter(log)) {
return orig.apply(this, arguments)
}
}

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/log",
"version": "0.2.0",
"version": "0.3.0",
"license": "ISC",
"description": "Logging system with decoupled producers/consumer",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/log",
@@ -24,7 +24,7 @@
},
"dependencies": {
"lodash": "^4.17.4",
"promise-toolbox": "^0.19.2"
"promise-toolbox": "^0.20.0"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -20,36 +20,8 @@ if (process.stdout !== undefined && process.stdout.isTTY && process.stderr !== u
}
const NAMESPACE_COLORS = [
196,
202,
208,
214,
220,
226,
190,
154,
118,
82,
46,
47,
48,
49,
50,
51,
45,
39,
33,
27,
21,
57,
93,
129,
165,
201,
200,
199,
198,
197,
196, 202, 208, 214, 220, 226, 190, 154, 118, 82, 46, 47, 48, 49, 50, 51, 45, 39, 33, 27, 21, 57, 93, 129, 165, 201,
200, 199, 198, 197,
]
formatNamespace = namespace => {
// https://werxltd.com/wp/2010/05/13/javascript-implementation-of-javas-string-hashcode-method/

View File

@@ -19,7 +19,7 @@
"node": ">=6"
},
"dependencies": {
"bind-property-descriptor": "^1.0.0",
"bind-property-descriptor": "^2.0.0",
"lodash": "^4.17.21"
},
"scripts": {

View File

@@ -1,5 +1,6 @@
const get = require('lodash/get')
const identity = require('lodash/identity')
const isEqual = require('lodash/isEqual')
const { createLogger } = require('@xen-orchestra/log')
const { parseDuration } = require('@vates/parse-duration')
const { watch } = require('app-conf')
@@ -12,7 +13,7 @@ module.exports = class Config {
const watchers = (this._watchers = new Set())
app.hooks.on('start', async () => {
app.hooks.on(
app.hooks.once(
'stop',
await watch({ appDir, appName, ignoreUnknownFormats: true }, (error, config) => {
if (error != null) {
@@ -31,7 +32,7 @@ module.exports = class Config {
get(path) {
const value = get(this._config, path)
if (value === undefined) {
throw new TypeError('missing config entry: ' + value)
throw new TypeError('missing config entry: ' + path)
}
return value
}
@@ -48,7 +49,7 @@ module.exports = class Config {
const watcher = config => {
try {
const value = processor(get(config, path))
if (value !== prev) {
if (!isEqual(value, prev)) {
prev = value
cb(value)
}

View File

@@ -14,15 +14,15 @@
"url": "https://vates.fr"
},
"license": "AGPL-3.0-or-later",
"version": "0.1.0",
"version": "0.1.2",
"engines": {
"node": ">=12"
},
"dependencies": {
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/emit-async": "^0.0.0",
"@xen-orchestra/log": "^0.2.0",
"app-conf": "^0.9.0",
"@xen-orchestra/emit-async": "^0.1.0",
"@xen-orchestra/log": "^0.3.0",
"app-conf": "^1.0.0",
"lodash": "^4.17.21"
},
"scripts": {

View File

@@ -28,9 +28,10 @@ export default {
buffer.toString('hex', offset + 5, offset + 6),
stringToEth: (string, buffer, offset) => {
const eth = /^([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2})$/.exec(
string
)
const eth =
/^([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2}):([0-9A-Fa-f]{2})$/.exec(
string
)
assert(eth !== null)
buffer.writeUInt8(parseInt(eth[1], 16), offset)
buffer.writeUInt8(parseInt(eth[2], 16), offset + 1)
@@ -50,9 +51,10 @@ export default {
),
stringToip4: (string, buffer, offset) => {
const ip = /^([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])$/.exec(
string
)
const ip =
/^([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])\.([1-9]?\d|1\d\d|2[0-4]\d|25[0-5])$/.exec(
string
)
assert(ip !== null)
buffer.writeUInt8(parseInt(ip[1], 10), offset)
buffer.writeUInt8(parseInt(ip[2], 10), offset + 1)

View File

@@ -23,22 +23,22 @@
"xo-proxy-cli": "dist/index.js"
},
"engines": {
"node": ">=8.10"
"node": ">=12"
},
"dependencies": {
"@iarna/toml": "^2.2.0",
"@vates/read-chunk": "^0.1.2",
"ansi-colors": "^4.1.1",
"app-conf": "^0.9.0",
"app-conf": "^1.0.0",
"content-type": "^1.0.4",
"cson-parser": "^4.0.7",
"getopts": "^2.2.3",
"http-request-plus": "^0.10.0",
"http-request-plus": "^0.13.0",
"json-rpc-protocol": "^0.13.1",
"promise-toolbox": "^0.19.2",
"promise-toolbox": "^0.20.0",
"pump": "^3.0.0",
"pumpify": "^2.0.1",
"split2": "^3.1.1"
"split2": "^4.1.0"
},
"devDependencies": {
"@babel/cli": "^7.0.0",

View File

@@ -36,7 +36,14 @@ async function main(argv) {
const { hostname = 'localhost', port } = config?.http?.listen?.https ?? {}
const { _: args, file, help, host, raw, token } = getopts(argv, {
const {
_: args,
file,
help,
host,
raw,
token,
} = getopts(argv, {
alias: { file: 'f', help: 'h' },
boolean: ['help', 'raw'],
default: {
@@ -140,16 +147,6 @@ ${pkg.name} v${pkg.version}`
}
}
const $import = ({ $import: path }) => {
const data = fs.readFileSync(path, 'utf8')
const ext = extname(path).slice(1).toLowerCase()
const parse = FORMATS[ext]
if (parse === undefined) {
throw new Error(`unsupported file: ${path}`)
}
return visit(parse(data))
}
const seq = async seq => {
const j = callPath.length
for (let i = 0, n = seq.length; i < n; ++i) {
@@ -163,13 +160,17 @@ ${pkg.name} v${pkg.version}`
if (Array.isArray(node)) {
return seq(node)
}
const keys = Object.keys(node)
return keys.length === 1 && keys[0] === '$import' ? $import(node) : call(node)
return call(node)
}
let node
if (file !== '') {
node = { $import: file }
const data = fs.readFileSync(file, 'utf8')
const ext = extname(file).slice(1).toLowerCase()
const parse = FORMATS[ext]
if (parse === undefined) {
throw new Error(`unsupported file: ${file}`)
}
await visit(parse(data))
} else {
const method = args[0]
const params = {}
@@ -182,9 +183,8 @@ ${pkg.name} v${pkg.version}`
params[param.slice(0, j)] = parseValue(param.slice(j + 1))
}
node = { method, params }
await call({ method, params })
}
await visit(node)
}
main(process.argv.slice(2)).then(
() => {

View File

@@ -18,7 +18,9 @@ keepAliveInterval = 10e3
#
# https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation
dirMode = 0o700
disableMergeWorker = false
snapshotNameLabelTpl = '[XO Backup {job.name}] {vm.name_label}'
vhdDirectoryCompression = 'brotli'
[backups.defaultSettings]
reportWhen = 'failure'
@@ -59,6 +61,13 @@ cert = '/var/lib/xo-proxy/certificate.pem'
key = '/var/lib/xo-proxy/key.pem'
port = 443
[logs]
# Display all logs matching this filter, regardless of their level
#filter = 'xo:backups:*'
# Display all logs with level >=, regardless of their namespace
level = 'info'
[remoteOptions]
mountsDir = '/run/xo-proxy/mounts'
@@ -79,3 +88,20 @@ ignoreNobakVdis = false
maxUncoalescedVdis = 1
watchEvents = ['network', 'PIF', 'pool', 'SR', 'task', 'VBD', 'VDI', 'VIF', 'VM']
#compact mode
[reverseProxies]
# '/http/' = 'http://localhost:8081/'
# The target can have a path (like `http://target/sub/directory/`),
# parameters (`?param=one`) and a hash (`#jwt:32154`) that are automatically added to all queries transferred by the proxy.
# If a parameter is present in both the configuration and the query, only the config parameter is transferred.
# '/another' = http://hiddenServer:8765/path/
# And use the extended mode when required
# The additional options of a proxy's configuration section are used to instantiate the `https` Agent (respectively the `http` one).
# A notable option is `rejectUnauthorized`, which allows connecting to an HTTPS backend with an invalid/self-signed certificate.
#[reverseProxies.'/https/']
# target = 'https://localhost:8080/'
# rejectUnauthorized = false

View File

@@ -93,10 +93,7 @@ declare namespace event {
declare namespace backup {
type SimpleIdPattern = { id: string | { __or: string[] } }
declare namespace backup {
type SimpleIdPattern = { id: string | { __or: string[] } }
interface BackupJob {
interface BackupJob {
id: string
type: 'backup'
compression?: 'native' | 'zstd' | ''
@@ -146,13 +143,13 @@ declare namespace backup {
}
function listXoMetadataBackups(_: { remotes: { [id: string]: Remote } }): { [remoteId: string]: object[] }
function run(_: {
job: BackupJob | MetadataBackupJob
function run(_: {
job: BackupJob | MetadataBackupJob
remotes: { [id: string]: Remote }
schedule: Schedule
xapis?: { [id: string]: Xapi }
recordToXapi?: { [recordUuid: string]: string }
schedule: Schedule
xapis?: { [id: string]: Xapi }
recordToXapi?: { [recordUuid: string]: string }
streamLogs: boolean = false
}): string

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "@xen-orchestra/proxy",
"version": "0.13.1",
"version": "0.17.3",
"license": "AGPL-3.0-or-later",
"description": "XO Proxy used to remotely execute backup jobs",
"keywords": [
@@ -18,48 +18,48 @@
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"preferGlobal": true,
"main": "dist/",
"bin": {
"xo-proxy": "dist/index.js"
"xo-proxy": "dist/index.mjs"
},
"engines": {
"node": ">=14.13"
"node": ">=14.18"
},
"dependencies": {
"@iarna/toml": "^2.2.0",
"@koa/router": "^10.0.0",
"@vates/compose": "^2.0.0",
"@vates/decorate-with": "^0.0.1",
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^1.0.0",
"@vates/disposable": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.11.0",
"@xen-orchestra/fs": "^0.17.0",
"@xen-orchestra/log": "^0.2.0",
"@xen-orchestra/backups": "^0.18.3",
"@xen-orchestra/fs": "^0.19.3",
"@xen-orchestra/log": "^0.3.0",
"@xen-orchestra/mixin": "^0.1.0",
"@xen-orchestra/mixins": "^0.1.0",
"@xen-orchestra/mixins": "^0.1.2",
"@xen-orchestra/self-signed": "^0.1.0",
"@xen-orchestra/xapi": "^0.6.2",
"@xen-orchestra/xapi": "^0.8.5",
"ajv": "^8.0.3",
"app-conf": "^0.9.0",
"app-conf": "^1.0.0",
"async-iterator-to-stream": "^1.1.0",
"fs-extra": "^9.1.0",
"fs-extra": "^10.0.0",
"get-stream": "^6.0.0",
"getopts": "^2.2.3",
"golike-defer": "^0.5.1",
"http-server-plus": "^0.11.0",
"http2-proxy": "^5.0.53",
"json-rpc-protocol": "^0.13.1",
"jsonrpc-websocket-client": "^0.6.0",
"jsonrpc-websocket-client": "^0.7.2",
"koa": "^2.5.1",
"koa-compress": "^5.0.1",
"koa-helmet": "^5.1.0",
"lodash": "^4.17.10",
"node-zone": "^0.4.0",
"parse-pairs": "^1.0.0",
"promise-toolbox": "^0.19.2",
"promise-toolbox": "^0.20.0",
"source-map-support": "^0.5.16",
"stoppable": "^1.0.6",
"xdg-basedir": "^4.0.0",
"xen-api": "^0.32.0",
"xdg-basedir": "^5.1.0",
"xen-api": "^0.35.1",
"xo-common": "^0.7.0"
},
"devDependencies": {
@@ -73,7 +73,7 @@
"@vates/toggle-scripts": "^1.0.0",
"babel-plugin-transform-dev": "^2.0.1",
"cross-env": "^7.0.2",
"index-modules": "^0.4.0"
"index-modules": "^0.4.3"
},
"scripts": {
"_build": "index-modules --index-file index.mjs src/app/mixins && babel --delete-dir-on-start --keep-file-extension --source-maps --out-dir=dist/ src/",
@@ -84,7 +84,7 @@
"prepack": "toggle-scripts +postinstall +preuninstall",
"prepublishOnly": "yarn run build",
"_preuninstall": "./scripts/systemd-service-installer",
"start": "./dist/index.js"
"start": "./dist/index.mjs"
},
"author": {
"name": "Vates SAS",

View File

@@ -15,13 +15,29 @@ import { createLogger } from '@xen-orchestra/log'
const { debug, warn } = createLogger('xo:proxy:api')
const ndJsonStream = asyncIteratorToStream(async function* (responseId, iterable) {
yield format.response(responseId, { $responseType: 'ndjson' }) + '\n'
for await (const data of iterable) {
try {
let cursor, iterator
try {
yield JSON.stringify(data) + '\n'
const getIterator = iterable[Symbol.iterator] ?? iterable[Symbol.asyncIterator]
iterator = getIterator.call(iterable)
cursor = await iterator.next()
yield format.response(responseId, { $responseType: 'ndjson' }) + '\n'
} catch (error) {
warn('ndJsonStream', { error })
yield format.error(responseId, error)
throw error
}
while (!cursor.done) {
try {
yield JSON.stringify(cursor.value) + '\n'
} catch (error) {
warn('ndJsonStream, item error', { error })
}
cursor = await iterator.next()
}
} catch (error) {
warn('ndJsonStream, fatal error', { error })
}
})
@@ -29,8 +45,8 @@ export default class Api {
constructor(app, { appVersion, httpServer }) {
this._ajv = new Ajv({ allErrors: true })
this._methods = { __proto__: null }
const router = new Router({ prefix: '/api/v1' }).post('/', async ctx => {
const PREFIX = '/api/v1'
const router = new Router({ prefix: PREFIX }).post('/', async ctx => {
// Before Node 13.0 there was an inactivity timeout of 2 mins, which may
// not be enough for the API.
ctx.req.setTimeout(0)
@@ -86,6 +102,7 @@ export default class Api {
// breaks, send some data every 10s to keep it opened.
const stopTimer = clearInterval.bind(
undefined,
// @todo check: can this add spaces inside binary data?
setInterval(() => stream.push(' '), keepAliveInterval)
)
stream.on('end', stopTimer).on('error', stopTimer)
@@ -102,7 +119,14 @@ export default class Api {
.use(router.routes())
.use(router.allowedMethods())
httpServer.on('request', koa.callback())
const callback = koa.callback()
httpServer.on('request', (req, res) => {
// only answer requests under this mixin's root URL
// do it before handing the request to Koa, to ensure it's not modified
if (req.url.startsWith(PREFIX)) {
callback(req, res)
}
})
this.addMethods({
system: {

View File

@@ -1,5 +1,6 @@
import assert from 'assert'
import fse from 'fs-extra'
import xdg from 'xdg-basedir'
import { xdgConfig } from 'xdg-basedir'
import { createLogger } from '@xen-orchestra/log'
import { execFileSync } from 'child_process'
@@ -10,33 +11,48 @@ const { warn } = createLogger('xo:proxy:authentication')
const isValidToken = t => typeof t === 'string' && t.length !== 0
export default class Authentication {
constructor(_, { appName, config: { authenticationToken: token } }) {
if (!isValidToken(token)) {
token = JSON.parse(execFileSync('xenstore-read', ['vm-data/xo-proxy-authenticationToken']))
#token
if (!isValidToken(token)) {
throw new Error('missing authenticationToken in configuration')
}
constructor(app, { appName, config: { authenticationToken: token } }) {
const setToken = ({ token }) => {
assert(isValidToken(token), 'invalid authentication token: ' + token)
// save this token in the automatically handled conf file
fse.outputFileSync(
// this file must take precedence over normal user config
`${xdgConfig}/${appName}/config.z-auto.json`,
JSON.stringify({ authenticationToken: token }),
{ mode: 0o600 }
)
this.#token = token
}
if (isValidToken(token)) {
this.#token = token
} else {
setToken({ token: JSON.parse(execFileSync('xenstore-read', ['vm-data/xo-proxy-authenticationToken'])) })
try {
// save this token in the automatically handled conf file
fse.outputFileSync(
// this file must take precedence over normal user config
`${xdg.config}/${appName}/config.z-auto.json`,
JSON.stringify({ authenticationToken: token }),
{ mode: 0o600 }
)
execFileSync('xenstore-rm', ['vm-data/xo-proxy-authenticationToken'])
} catch (error) {
warn('failed to remove token from XenStore', { error })
}
}
this._token = token
app.api.addMethod('authentication.setToken', setToken, {
description: 'change the authentication token used by this XO Proxy',
params: {
token: {
type: 'string',
minLength: 1,
},
},
})
}
async findProfile(credentials) {
if (credentials?.authenticationToken === this._token) {
if (credentials?.authenticationToken === this.#token) {
return new Profile()
}
}

View File

@@ -1,5 +1,3 @@
import Cancel from 'promise-toolbox/Cancel'
import CancelToken from 'promise-toolbox/CancelToken'
import Disposable from 'promise-toolbox/Disposable.js'
import fromCallback from 'promise-toolbox/fromCallback.js'
import { asyncMap } from '@xen-orchestra/async-map'
@@ -13,6 +11,7 @@ import { DurablePartition } from '@xen-orchestra/backups/DurablePartition.js'
import { execFile } from 'child_process'
import { formatVmBackups } from '@xen-orchestra/backups/formatVmBackups.js'
import { ImportVmBackup } from '@xen-orchestra/backups/ImportVmBackup.js'
import { JsonRpcError } from 'json-rpc-protocol'
import { Readable } from 'stream'
import { RemoteAdapter } from '@xen-orchestra/backups/RemoteAdapter.js'
import { RestoreMetadataBackup } from '@xen-orchestra/backups/RestoreMetadataBackup.js'
@@ -97,8 +96,7 @@ export default class Backups {
error.jobId = jobId
throw error
}
const source = CancelToken.source()
runningJobs[jobId] = source.cancel
runningJobs[jobId] = true
try {
return await run.apply(this, arguments)
} finally {
@@ -111,7 +109,7 @@ export default class Backups {
if (!__DEV__) {
const license = await app.appliance.getSelfLicense()
if (license === undefined) {
throw new Error('no valid proxy license')
throw new JsonRpcError('no valid proxy license')
}
}
return run.apply(this, arguments)
@@ -166,6 +164,17 @@ export default class Backups {
},
},
],
deleteVmBackups: [
({ filenames, remote }) =>
Disposable.use(this.getAdapter(remote), adapter => adapter.deleteVmBackups(filenames)),
{
description: 'delete VM backups',
params: {
filenames: { type: 'array', items: { type: 'string' } },
remote: { type: 'object' },
},
},
],
fetchPartitionFiles: [
({ disk: diskId, remote, partition: partitionId, paths }) =>
Disposable.use(this.getAdapter(remote), adapter => adapter.fetchPartitionFiles(diskId, partitionId, paths)),
@@ -405,6 +414,7 @@ export default class Backups {
return new RemoteAdapter(yield app.remotes.getHandler(remote), {
debounceResource: app.debounceResource.bind(app),
dirMode: app.config.get('backups.dirMode'),
vhdDirectoryCompression: app.config.get('backups.vhdDirectoryCompression'),
})
}

View File

@@ -0,0 +1,17 @@
import transportConsole from '@xen-orchestra/log/transports/console.js'
import { configure } from '@xen-orchestra/log/configure.js'
export default class Logs {
constructor(app) {
const transport = transportConsole()
app.config.watch('logs', ({ filter, level }) => {
configure([
{
filter: [process.env.DEBUG, filter],
level,
transport,
},
])
})
}
}

View File

@@ -0,0 +1,120 @@
import { urlToHttpOptions } from 'url'
import proxy from 'http2-proxy'
function removeSlash(str) {
return str.replace(/^\/|\/$/g, '')
}
function mergeUrl(relative, base) {
const res = new URL(base)
const relativeUrl = new URL(relative, base)
res.pathname = relativeUrl.pathname
relativeUrl.searchParams.forEach((value, name) => {
// do not allow overriding params already specified by the config
if (!res.searchParams.has(name)) {
res.searchParams.append(name, value)
}
})
res.hash = relativeUrl.hash.length > 0 ? relativeUrl.hash : res.hash
return res
}
export function backendToLocalPath(basePath, target, backendUrl) {
// keep redirect url relative to local server
const localPath = `${basePath}/${backendUrl.pathname.substring(target.pathname.length)}${backendUrl.search}${
backendUrl.hash
}`
return localPath
}
export function localToBackendUrl(basePath, target, localPath) {
let localPathWithoutBase = removeSlash(localPath).substring(basePath.length)
localPathWithoutBase = './' + removeSlash(localPathWithoutBase)
const url = mergeUrl(localPathWithoutBase, target)
return url
}
export default class ReverseProxy {
constructor(app, { httpServer }) {
app.config.watch('reverseProxies', proxies => {
this._proxies = Object.keys(proxies)
.sort((a, b) => b.length - a.length)
.map(path => {
let config = proxies[path]
if (typeof config === 'string') {
config = { target: config }
}
config.path = '/proxy/v1/' + removeSlash(path) + '/'
return config
})
})
httpServer.on('request', (req, res) => this._proxy(req, res))
httpServer.on('upgrade', (req, socket, head) => this._upgrade(req, socket, head))
}
_getConfigFromRequest(req) {
return this._proxies.find(({ path }) => req.url.startsWith(path))
}
_proxy(req, res) {
const config = this._getConfigFromRequest(req)
if (config === undefined) {
res.writeHead(404)
res.end('404')
return
}
const url = new URL(config.target)
const targetUrl = localToBackendUrl(config.path, url, req.originalUrl || req.url)
proxy.web(req, res, {
...urlToHttpOptions(targetUrl),
...config.options,
onReq: (req, { headers }) => {
headers['x-forwarded-for'] = req.socket.remoteAddress
headers['x-forwarded-proto'] = req.socket.encrypted ? 'https' : 'http'
if (req.headers.host !== undefined) {
headers['x-forwarded-host'] = req.headers.host
}
},
onRes: (req, res, proxyRes) => {
// rewrite redirect to pass through this proxy
if (proxyRes.statusCode === 301 || proxyRes.statusCode === 302) {
// handle relative/absolute Location header
const redirectTargetLocation = new URL(proxyRes.headers.location, url)
// this proxy should only allow communication with known hosts; don't open it any further
if (redirectTargetLocation.hostname !== url.hostname || redirectTargetLocation.protocol !== url.protocol) {
throw new Error(`Can't redirect from ${url.hostname} to ${redirectTargetLocation.hostname} `)
}
res.writeHead(proxyRes.statusCode, {
...proxyRes.headers,
location: backendToLocalPath(config.path, url, redirectTargetLocation),
})
res.end()
return
}
// pass through the answer of the remote server
res.writeHead(proxyRes.statusCode, proxyRes.headers)
// pass through content
proxyRes.pipe(res)
},
})
}
_upgrade(req, socket, head) {
const config = this._getConfigFromRequest(req)
if (config === undefined) {
return
}
const { path, target, options } = config
const targetUrl = localToBackendUrl(path, target, req.originalUrl || req.url)
proxy.ws(req, socket, head, {
...urlToHttpOptions(targetUrl),
...options,
})
}
}
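A quick standalone illustration of the query-parameter precedence implemented by `mergeUrl` above, using only the WHATWG `URL` API (the URLs and parameter names below are made up for the example):

```javascript
// Illustration only: params hard-coded in the configured target URL win over
// params supplied by the client request (values below are made up).
const target = new URL('https://localhost:8080/remotePath/?baseParm=1')
const request = new URL('./sub?baseParm=evil&extra=2', target)

const merged = new URL(target)
merged.pathname = request.pathname
for (const [name, value] of request.searchParams) {
  // a param already fixed by the config is never overridden
  if (!merged.searchParams.has(name)) {
    merged.searchParams.append(name, value)
  }
}
console.log(merged.href) // https://localhost:8080/remotePath/sub?baseParm=1&extra=2
```

This is the property the reverse proxy relies on so that a client cannot tamper with parameters pinned in the configuration.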

View File

@@ -1,38 +0,0 @@
import { asyncMapSettled } from '@xen-orchestra/async-map'
export default class Task {
#tasks = new Map()
constructor(app) {
const tasks = new Map()
this.#tasks = tasks
app.api.addMethods({
task: {
*list() {
for (const id of tasks.keys()) {
yield { id }
}
},
cancel: [
({ taskId }) => this.cancel(taskId),
{
params: {
taskId: { type: 'string' },
},
},
],
},
})
app.hooks.on('stop', () => asyncMapSettled(tasks.values(), task => task.cancel()))
}
async cancel(taskId) {
await this.tasks.get(taskId).cancel()
}
register(task) {
this.#tasks.set(task.id, task)
}
}

View File

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDETCCAfkCFHXO1U7YJHI61bPNhYDvyBNJYH4LMA0GCSqGSIb3DQEBCwUAMEUx
CzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRl
cm5ldCBXaWRnaXRzIFB0eSBMdGQwHhcNMjIwMTEwMTI0MTU4WhcNNDkwNTI3MTI0
MTU4WjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UE
CgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA1jMLdHuZu2R1fETyB2iRect1alwv76clp/7A8tx4zNaVA9qB
BcHbI83mkozuyrXpsEUblTvvcWkheBPAvWD4gj0eWSDSiuf0edcIS6aky+Lr/n0T
W/vL5kVNrgPTlsO8OyQcXjDeuUOR1xDWIa8G71Ynd6wtATB7oXe7kaV/Z6b2fENr
4wlW0YEDnMHik59c9jXDshhQYDlErwZsSyHuLwkC7xuYO26SUW9fPcHJA3uOfxeG
BrCxMuSMOJtdmslRWhLCjbk0PT12OYCCRlvuTvPHa8N57GEQbi4xAu+XATgO1DUm
Dq/oCSj0TcWUXXOykN/PAC2cjIyqkU2e7orGaQIDAQABMA0GCSqGSIb3DQEBCwUA
A4IBAQCTshhF3V5WVhnpFGHd+tPfeHmUVrUnbC+xW7fSeWpamNmTjHb7XB6uDR0O
DGswhEitbbSOsCiwz4/zpfE3/3+X07O8NPbdHTVHCei6D0uyegEeWQ2HoocfZs3X
8CORe8TItuvQAevV17D0WkGRoJGVAOiKo+izpjI55QXQ+FjkJ0bfl1iksnUJk0+I
ZNmRRNjNyOxo7NAzomSBHfJ5rDE+E440F2uvXIE9OIwHRiq6FGvQmvGijPeeP5J0
LzcSK98jfINFSsA/Wn5vWE+gfH9ySD2G3r2cDTS904T77PNiYH+cNSP6ujtmNzvK
Bgoa3jXZPRBi82TUOb2jj5DB33bg
-----END CERTIFICATE-----

View File

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA1jMLdHuZu2R1fETyB2iRect1alwv76clp/7A8tx4zNaVA9qB
BcHbI83mkozuyrXpsEUblTvvcWkheBPAvWD4gj0eWSDSiuf0edcIS6aky+Lr/n0T
W/vL5kVNrgPTlsO8OyQcXjDeuUOR1xDWIa8G71Ynd6wtATB7oXe7kaV/Z6b2fENr
4wlW0YEDnMHik59c9jXDshhQYDlErwZsSyHuLwkC7xuYO26SUW9fPcHJA3uOfxeG
BrCxMuSMOJtdmslRWhLCjbk0PT12OYCCRlvuTvPHa8N57GEQbi4xAu+XATgO1DUm
Dq/oCSj0TcWUXXOykN/PAC2cjIyqkU2e7orGaQIDAQABAoIBAQC65uVq6WLWGa1O
FtbdUggGL1svyGrngYChGvB/uZMKoX57U1DbljDCCCrV23WNmbfkYBjWWervmZ1j
qlC2roOJGQ1/Fd3A6O7w1YnegPUxFrt3XunijE55iiVi3uHknryDGlpKcfgVzfjW
oVFHKPMzKYjcqnbGn+hwlwoq5y7JYFTOa57/dZbyommbodRyy9Dpn0OES0grQqwR
VD1amrQ7XJhukcxQgYPuDc/jM3CuowoBsv9f+Q2zsPgr6CpHxxLLIs+kt8NQJT9v
neg/pm8ojcwOa9qoILdtu6ue7ee3VE9cFnB1vutxS1+MPeI5wgTJjaYrgPCMxXBM
2LdJJEmBAoGBAPA6LpuU1vv5R3x66hzenSk4LS1fj24K0WuBdTwFvzQmCr70oKdo
Yywxt+ZkBw5aEtzQlB8GewolHobDJrzxMorU+qEXX3jP2BIPDVQl2orfjr03Yyus
s5mYS/Qa6Zf1yObrjulTNm8oTn1WaG3TIvi8c5DyG2OK28N/9oMI1XGRAoGBAORD
YKyII/S66gZsJSf45qmrhq1hHuVt1xae5LUPP6lVD+MCCAmuoJnReV8fc9h7Dvgd
YPMINkWUTePFr3o4p1mh2ZC7ldczgDn6X4TldY2J3Zg47xJa5hL0L6JL4NiCGRIE
FV5rLJxkGh/DDBfmC9hQQ6Yg6cHvyewso5xVnBtZAoGAI+OdWPMIl0ZrrqYyWbPM
aP8SiMfRBtCo7tW9bQUyxpi0XEjxw3Dt+AlJfysMftFoJgMnTedK9H4NLHb1T579
PQ6KjwyN39+1WSVUiXDKUJsLmSswLrMzdcvx9PscUO6QYCdrB2K+LCcqasFBAr9b
ZyvIXCw/eUSihneUnYjxUnECgYAoPgCzKiU8ph9QFozOaUExNH4/3tl1lVHQOR8V
FKUik06DtP35xwGlXJrLPF5OEhPnhjZrYk0/IxBAUb/ICmjmknQq4gdes0Ot9QgW
A+Yfl+irR45ObBwXx1kGgd4YDYeh93pU9QweXj+Ezfw50mLQNgZXKYJMoJu2uX/2
tdkZsQKBgQCTfDcW8qBntI6V+3Gh+sIThz+fjdv5+qT54heO4EHadc98ykEZX0M1
sCWJiAQWM/zWXcsTndQDgDsvo23jpoulVPDitSEISp5gSe9FEN2njsVVID9h1OIM
f30s5kwcJoiV9kUCya/BFtuS7kbuQfAyPU0v3I+lUey6VCW6A83OTg==
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1,60 @@
import { createServer as createHttpsServer } from 'https'
import { createServer as createHttpServer } from 'http'
import { WebSocketServer } from 'ws'
import fs from 'fs'
const httpsServer = createHttpsServer({
key: fs.readFileSync('key.pem'),
cert: fs.readFileSync('cert.pem'),
})
const httpServer = createHttpServer()
const wss = new WebSocketServer({ noServer: true, perMessageDeflate: false })
function upgrade(request, socket, head) {
// request.url is relative: the WHATWG URL parser needs a base
const { pathname } = new URL(request.url, 'http://localhost')
// WebSocket server only on the /foo URL
if (pathname === '/foo') {
wss.handleUpgrade(request, socket, head, function done(ws) {
wss.emit('connection', ws, request)
ws.on('message', function message(data) {
ws.send(data)
})
})
} else {
socket.destroy()
}
}
function httpHandler(req, res) {
switch (req.url) {
case '/index.html':
res.end('hi')
return
case '/redirect':
res.writeHead(301, {
Location: 'index.html',
})
res.end()
return
case '/chainRedirect':
res.writeHead(301, {
Location: '/redirect',
})
res.end()
return
default:
res.writeHead(404)
res.end()
}
}
httpsServer.on('upgrade', upgrade)
httpServer.on('upgrade', upgrade)
httpsServer.on('request', httpHandler)
httpServer.on('request', httpHandler)
httpsServer.listen(8080)
httpServer.listen(8081)

View File

@@ -0,0 +1,123 @@
import ReverseProxy, { backendToLocalPath, localToBackendUrl } from '../dist/app/mixins/reverseProxy.mjs'
import { deepEqual, strictEqual } from 'assert'
function makeApp(reverseProxies) {
return {
config: {
get: () => reverseProxies,
},
}
}
const app = makeApp({
https: {
target: 'https://localhost:8080/remotePath/?baseParm=1#one=2&another=3',
oneOption: true,
},
http: 'http://localhost:8080/remotePath/?baseParm=1#one=2&another=3',
})
// test localToBackendUrl
const expectedLocalToRemote = {
https: [
{
local: '/proxy/v1/https/',
remote: 'https://localhost:8080/remotePath/?baseParm=1#one=2&another=3',
},
{
local: '/proxy/v1/https/sub',
remote: 'https://localhost:8080/remotePath/sub?baseParm=1#one=2&another=3',
},
{
local: '/proxy/v1/https/sub/index.html',
remote: 'https://localhost:8080/remotePath/sub/index.html?baseParm=1#one=2&another=3',
},
{
local: '/proxy/v1/https/sub?param=1',
remote: 'https://localhost:8080/remotePath/sub?baseParm=1&param=1#one=2&another=3',
},
{
local: '/proxy/v1/https/sub?baseParm=willbeoverwritten&param=willstay',
remote: 'https://localhost:8080/remotePath/sub?baseParm=1&param=willstay#one=2&another=3',
},
{
local: '/proxy/v1/https/sub?param=1#another=willoverwrite',
remote: 'https://localhost:8080/remotePath/sub?baseParm=1&param=1#another=willoverwrite',
},
],
}
const proxy = new ReverseProxy(app, { httpServer: { on: () => {} } })
for (const proxyId in expectedLocalToRemote) {
for (const { local, remote } of expectedLocalToRemote[proxyId]) {
const config = proxy._getConfigFromRequest({ url: local })
const url = new URL(config.target)
strictEqual(localToBackendUrl(config.path, url, local).href, remote, 'error converting to backend')
}
}
// test backendToLocalPath
const expectedRemoteToLocal = {
https: [
{
local: '/proxy/v1/https/',
remote: 'https://localhost:8080/remotePath/',
},
{
local: '/proxy/v1/https/sub/index.html',
remote: '/remotePath/sub/index.html',
},
{
local: '/proxy/v1/https/?baseParm=1#one=2&another=3',
remote: '?baseParm=1#one=2&another=3',
},
{
local: '/proxy/v1/https/sub?baseParm=1#one=2&another=3',
remote: 'https://localhost:8080/remotePath/sub?baseParm=1#one=2&another=3',
},
],
}
for (const proxyId in expectedRemoteToLocal) {
for (const { local, remote } of expectedRemoteToLocal[proxyId]) {
const config = proxy._getConfigFromRequest({ url: local })
const targetUrl = new URL('https://localhost:8080/remotePath/?baseParm=1#one=2&another=3')
const remoteUrl = new URL(remote, targetUrl)
strictEqual(backendToLocalPath(config.path, targetUrl, remoteUrl), local, 'error converting to local')
}
}
// test _getConfigFromRequest
const expectedConfig = [
{
local: '/proxy/v1/http/other',
config: {
target: 'http://localhost:8080/remotePath/?baseParm=1#one=2&another=3',
options: {},
path: '/proxy/v1/http',
},
},
{
local: '/proxy/v1/http',
config: undefined,
},
{
local: '/proxy/v1/other',
config: undefined,
},
{
local: '/proxy/v1/https/',
config: {
target: 'https://localhost:8080/remotePath/?baseParm=1#one=2&another=3',
options: {
oneOption: true,
},
path: '/proxy/v1/https',
},
},
]
for (const { local, config } of expectedConfig) {
deepEqual(proxy._getConfigFromRequest({ url: local }), config)
}

View File

@@ -27,27 +27,25 @@
"xo-upload-ova": "dist/index.js"
},
"engines": {
"node": ">=8.10"
"node": ">=10"
},
"dependencies": {
"chalk": "^4.1.0",
"exec-promise": "^0.7.0",
"form-data": "^4.0.0",
"fs-extra": "^9.0.0",
"fs-promise": "^2.0.3",
"fs-extra": "^10.0.0",
"get-stream": "^6.0.0",
"http-request-plus": "^0.10.0",
"http-request-plus": "^0.13.0",
"human-format": "^0.11.0",
"l33teral": "^3.0.3",
"lodash": "^4.17.4",
"nice-pipe": "0.0.0",
"pretty-ms": "^7.0.0",
"progress-stream": "^2.0.0",
"pw": "^0.0.4",
"strip-indent": "^3.0.0",
"xdg-basedir": "^4.0.0",
"xo-lib": "^0.10.1",
"xo-vmdk-to-vhd": "^2.0.0"
"xo-lib": "^0.11.1",
"xo-vmdk-to-vhd": "^2.0.3"
},
"devDependencies": {
"@babel/cli": "^7.0.0",

View File

@@ -6,7 +6,7 @@ import chalk from 'chalk'
import execPromise from 'exec-promise'
import FormData from 'form-data'
import { createReadStream } from 'fs'
import { stat } from 'fs-promise'
import { stat } from 'fs-extra'
import getStream from 'get-stream'
import hrp from 'http-request-plus'
import humanFormat from 'human-format'
@@ -14,7 +14,6 @@ import l33t from 'l33teral'
import isObject from 'lodash/isObject'
import getKeys from 'lodash/keys'
import startsWith from 'lodash/startsWith'
import nicePipe from 'nice-pipe'
import prettyMs from 'pretty-ms'
import progressStream from 'progress-stream'
import pw from 'pw'
@@ -22,10 +21,13 @@ import stripIndent from 'strip-indent'
import { URL } from 'url'
import Xo from 'xo-lib'
import { parseOVAFile } from 'xo-vmdk-to-vhd'
import { pipeline } from 'stream'
import pkg from '../package'
import { load as loadConfig, set as setConfig, unset as unsetConfig } from './config'
const noop = Function.prototype
function help() {
return stripIndent(
`
@@ -206,7 +208,7 @@ export async function upload(args) {
url = new URL(result[key], baseUrl)
const { size: length } = await stat(file)
const input = nicePipe([
const input = pipeline(
createReadStream(file),
progressStream(
{
@@ -215,7 +217,8 @@ export async function upload(args) {
},
printProgress
),
])
noop
)
formData.append('file', input, { filename: 'file', knownLength: length })
try {
return await hrp.post(url.toString(), { body: formData, headers: formData.getHeaders() }).readAll('utf-8')

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/xapi",
"version": "0.6.2",
"version": "0.8.5",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/xapi",
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
@@ -25,7 +25,7 @@
"xo-common": "^0.7.0"
},
"peerDependencies": {
"xen-api": "^0.32.0"
"xen-api": "^0.35.1"
},
"scripts": {
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
@@ -38,13 +38,13 @@
"prepublishOnly": "yarn run build"
},
"dependencies": {
"@vates/decorate-with": "^0.0.1",
"@vates/decorate-with": "^1.0.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.2.0",
"@xen-orchestra/log": "^0.3.0",
"d3-time-format": "^3.0.0",
"golike-defer": "^0.5.1",
"lodash": "^4.17.15",
"promise-toolbox": "^0.19.2"
"promise-toolbox": "^0.20.0"
},
"private": false,
"license": "AGPL-3.0-or-later",

View File

@@ -15,16 +15,18 @@ exports.VDI_FORMAT_VHD = 'vhd'
// xapi.call('host.get_servertime', host.$ref) for example
exports.formatDateTime = utcFormat('%Y%m%dT%H:%M:%SZ')
const parseDateTimeHelper = utcParse('%Y%m%dT%H:%M:%SZ')
const dateTimeParsers = ['%Y%m%dT%H:%M:%SZ', '%Y-%m-%dT%H:%M:%S.%LZ'].map(utcParse)
exports.parseDateTime = function (str, defaultValue) {
const date = parseDateTimeHelper(str)
if (date === null) {
if (arguments.length > 1) {
return defaultValue
for (const parser of dateTimeParsers) {
const date = parser(str)
if (date !== null) {
return date.getTime()
}
throw new RangeError(`unable to parse XAPI datetime ${JSON.stringify(str)}`)
}
return date.getTime()
if (arguments.length > 1) {
return defaultValue
}
throw new RangeError(`unable to parse XAPI datetime ${JSON.stringify(str)}`)
}
const hasProps = o => {
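The new `parseDateTime` tries each parser in order and falls back to the next on failure. A plain-JavaScript sketch of the same fallback-chain idea, using RegExps instead of the `d3-time-format` parsers the real code relies on (names prefixed `sketched` to mark them as illustrative):

```javascript
// Fallback-chain sketch: each parser returns a timestamp or null, and the
// first success wins (the real code builds its parsers with d3-time-format).
const sketchedParsers = [
  // legacy XAPI format: 20220106T21:25:15Z
  str => {
    const m = /^(\d{4})(\d{2})(\d{2})T(\d{2}):(\d{2}):(\d{2})Z$/.exec(str)
    return m === null ? null : Date.UTC(+m[1], m[2] - 1, +m[3], +m[4], +m[5], +m[6])
  },
  // ISO 8601 with milliseconds: 2022-01-06T17:32:35.000Z
  str => (/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/.test(str) ? Date.parse(str) : null),
]

function sketchedParseDateTime(str) {
  for (const parse of sketchedParsers) {
    const time = parse(str)
    if (time !== null) {
      return time
    }
  }
  throw new RangeError(`unable to parse XAPI datetime ${JSON.stringify(str)}`)
}
```

Adding a format is just appending a parser to the list, which is why the fix for #6082 is so small.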
@@ -98,18 +100,16 @@ function removeWatcher(predicate, cb) {
class Xapi extends Base {
constructor({
callRetryWhenTooManyPendingTasks,
callRetryWhenTooManyPendingTasks = { delay: 5e3, tries: 10 },
ignoreNobakVdis,
maxUncoalescedVdis,
vdiDestroyRetryWhenInUse,
vdiDestroyRetryWhenInUse = { delay: 5e3, tries: 10 },
...opts
}) {
assert.notStrictEqual(ignoreNobakVdis, undefined)
super(opts)
this._callRetryWhenTooManyPendingTasks = {
delay: 5e3,
tries: 10,
...callRetryWhenTooManyPendingTasks,
onRetry,
when: { code: 'TOO_MANY_PENDING_TASKS' },
@@ -117,8 +117,6 @@ class Xapi extends Base {
this._ignoreNobakVdis = ignoreNobakVdis
this._maxUncoalescedVdis = maxUncoalescedVdis
this._vdiDestroyRetryWhenInUse = {
delay: 5e3,
retries: 10,
...vdiDestroyRetryWhenInUse,
onRetry,
when: { code: 'VDI_IN_USE' },

View File

@@ -0,0 +1,17 @@
/* eslint-env jest */
const { parseDateTime } = require('./')
describe('parseDateTime()', () => {
it('parses legacy XAPI format', () => {
expect(parseDateTime('20220106T21:25:15Z')).toBe(1641504315000)
})
it('parses ISO 8601 format with milliseconds', () => {
expect(parseDateTime('2022-01-06T17:32:35.000Z')).toBe(1641490355000)
})
it('throws when value cannot be parsed', () => {
expect(() => parseDateTime('foo bar')).toThrow('unable to parse XAPI datetime "foo bar"')
})
})

View File

@@ -1,6 +1,7 @@
const CancelToken = require('promise-toolbox/CancelToken.js')
const pCatch = require('promise-toolbox/catch.js')
const pRetry = require('promise-toolbox/retry.js')
const { decorateWith } = require('@vates/decorate-with')
const extractOpaqueRef = require('./_extractOpaqueRef.js')
@@ -11,10 +12,13 @@ module.exports = class Vdi {
return extractOpaqueRef(await this.callAsync('VDI.clone', vdiRef))
}
// work around a race condition in XCP-ng/XenServer where the disk is not fully unmounted yet
@decorateWith(pRetry.wrap, function () {
return this._vdiDestroyRetryWhenInUse
})
async destroy(vdiRef) {
await pCatch.call(
// work around a race condition in XCP-ng/XenServer where the disk is not fully unmounted yet
pRetry(() => this.callAsync('VDI.destroy', vdiRef), this._vdiDestroyRetryWhenInUse),
this.callAsync('VDI.destroy', vdiRef),
// if this VDI is not found, consider it destroyed
{ code: 'HANDLE_INVALID' },
noop
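The decorator wraps `destroy` so the retry policy (`delay`, `tries`, `when: { code: 'VDI_IN_USE' }`) is resolved per instance at call time. A minimal sketch of that retry-on-matching-error pattern, as a stand-in for promise-toolbox's `pRetry` (names and options here are illustrative, not the library's API):

```javascript
// Hypothetical stand-in for pRetry: retries fn while `when` matches the error.
async function retry(fn, { tries = 10, delay = 0, when = () => true } = {}) {
  for (let attempt = 1; ; ++attempt) {
    try {
      return await fn()
    } catch (error) {
      // give up after `tries` attempts or on a non-matching error
      if (attempt >= tries || !when(error)) throw error
      await new Promise(resolve => setTimeout(resolve, delay))
    }
  }
}
```

With `when: error => error.code === 'VDI_IN_USE'`, transient unmount races are retried while any other error propagates immediately.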

View File

@@ -60,7 +60,7 @@ module.exports = class Vm {
try {
vdi = await this[vdiRefOrUuid.startsWith('OpaqueRef:') ? 'getRecord' : 'getRecordByUuid']('VDI', vdiRefOrUuid)
} catch (error) {
warn(error)
warn('_assertHealthyVdiChain, could not fetch VDI', { error })
return
}
cache[vdi.$ref] = vdi
@@ -81,7 +81,7 @@ module.exports = class Vm {
try {
vdi = await this.getRecord('VDI', vdiRef)
} catch (error) {
warn(error)
warn('_assertHealthyVdiChain, could not fetch VDI', { error })
return
}
cache[vdiRef] = vdi
@@ -99,6 +99,7 @@ module.exports = class Vm {
// should coalesce
const children = childrenMap[vdi.uuid]
if (
children !== undefined && // unused unmanaged VDI, will be GC-ed
children.length === 1 &&
!children[0].managed && // some SRs do not coalesce the leaf
tolerance-- <= 0
@@ -166,7 +167,7 @@ module.exports = class Vm {
memory_static_min,
name_description,
name_label,
// NVRAM, // experimental
NVRAM,
order,
other_config = {},
PCI_bus = '',
@@ -255,6 +256,7 @@ module.exports = class Vm {
is_vmss_snapshot,
name_description,
name_label,
NVRAM,
order,
reference_label,
shutdown_delay,

View File

@@ -1,15 +1,364 @@
# ChangeLog
## **next**
## **5.66.2** (2022-01-05)
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Bug fixes
- [Import/Disk] Fix `JSON.parse` and `createReadableSparseStream is not a function` errors [#6068](https://github.com/vatesfr/xen-orchestra/issues/6068)
- [Backup] Fix delta backups being almost always full backups instead of differentials [Forum#5256](https://xcp-ng.org/forum/topic/5256/s3-backup-try-it/69) [Forum#5371](https://xcp-ng.org/forum/topic/5371/delta-backup-changes-in-5-66) (PR [#6075](https://github.com/vatesfr/xen-orchestra/pull/6075))
### Released packages
- xen-api 0.32
- vhd-lib 3.0.0
- xo-vmdk-to-vhd 2.0.3
- @xen-orchestra/backups 0.18.3
- @xen-orchestra/proxy 0.17.3
- xo-server 5.86.3
- xo-web 5.91.2
## **5.66.1** (2021-12-23)
### Bug fixes
- [Dashboard/Health] Fix `error has occured` when a pool has no default SR
- [Delta Backup] Fix unnecessary full backup when not using S3 [Forum #5371](https://xcp-ng.org/forum/topic/5371/delta-backup-changes-in-5-66) (PR [#6070](https://github.com/vatesfr/xen-orchestra/pull/6070))
- [Backup] Fix incorrect warnings `incorrect size [...] instead of undefined`
### Released packages
- @xen-orchestra/backups 0.18.2
- @xen-orchestra/proxy 0.17.2
- xo-server 5.86.2
- xo-web 5.91.1
## **5.66.0** (2021-12-21)
### Enhancements
- [About] Show commit instead of version numbers for source users (PR [#6045](https://github.com/vatesfr/xen-orchestra/pull/6045))
- [Health] Display default SRs that aren't shared [#5871](https://github.com/vatesfr/xen-orchestra/issues/5871) (PR [#6033](https://github.com/vatesfr/xen-orchestra/pull/6033))
- [Pool,VM/advanced] Ability to change the suspend SR [#4163](https://github.com/vatesfr/xen-orchestra/issues/4163) (PR [#6044](https://github.com/vatesfr/xen-orchestra/pull/6044))
- [Home/VMs/Backup filter] Filter out VMs in disabled backup jobs (PR [#6037](https://github.com/vatesfr/xen-orchestra/pull/6037))
- [Rolling Pool Update] Automatically disable High Availability during the update [#5711](https://github.com/vatesfr/xen-orchestra/issues/5711) (PR [#6057](https://github.com/vatesfr/xen-orchestra/pull/6057))
- [Delta Backup on S3] Compress blocks by default ([Brotli](https://en.wikipedia.org/wiki/Brotli)), which reduces remote usage and increases backup speed (PR [#5932](https://github.com/vatesfr/xen-orchestra/pull/5932))
### Bug fixes
- [Tables/actions] Fix collapsed actions being clickable despite being disabled (PR [#6023](https://github.com/vatesfr/xen-orchestra/pull/6023))
- [Backup] Remove incorrect size warning following a merge [Forum #5727](https://xcp-ng.org/forum/topic/4769/warnings-showing-in-system-logs-following-each-backup-job/4) (PR [#6010](https://github.com/vatesfr/xen-orchestra/pull/6010))
- [Delta Backup] Preserve UEFI boot parameters [#6054](https://github.com/vatesfr/xen-orchestra/issues/6054) [Forum #5319](https://xcp-ng.org/forum/topic/5319/bug-uefi-boot-parameters-not-preserved-with-delta-backups)
### Released packages
- @xen-orchestra/mixins 0.1.2
- @xen-orchestra/xapi 0.8.5
- vhd-lib 2.1.0
- xo-vmdk-to-vhd 2.0.2
- @xen-orchestra/backups 0.18.1
- @xen-orchestra/proxy 0.17.1
- xo-server 5.86.1
- xo-web 5.91.0
## **5.65.3** (2021-12-20)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
### Bug fixes
- [Continuous Replication] Fix `could not find the base VM`
- [Backup/Smart mode] Always ignore replicated VMs created by the current job
- [Backup] Fix `Unexpected end of JSON input` during merge step
- [Backup] Fix stuck jobs when using S3 remotes (PR [#6067](https://github.com/vatesfr/xen-orchestra/pull/6067))
### Released packages
- @xen-orchestra/fs 0.19.3
- vhd-lib 2.0.4
- @xen-orchestra/backups 0.17.1
- xo-server 5.85.1
## **5.65.2** (2021-12-10)
### Bug fixes
- [Backup] Fix `handler.rmTree` is not a function (Forum [#5256](https://xcp-ng.org/forum/topic/5256/s3-backup-try-it/29), PR [#6041](https://github.com/vatesfr/xen-orchestra/pull/6041))
- [Backup] Fix `EEXIST` in logs when multiple merge tasks are created at the same time ([Forum #5301](https://xcp-ng.org/forum/topic/5301/warnings-errors-in-journalctl))
- [Backup] Fix missing backup on restore (Forum [#5256](https://xcp-ng.org/forum/topic/5256/s3-backup-try-it/29), PR [#6048](https://github.com/vatesfr/xen-orchestra/pull/6048))
### Released packages
- @xen-orchestra/fs 0.19.2
- vhd-lib 2.0.3
- @xen-orchestra/backups 0.16.2
- xo-server 5.84.3
- @xen-orchestra/proxy 0.15.5
## **5.65.1** (2021-12-03)
### Bug fixes
- [Delta Backup Restoration] Fix assertion error [Forum #5257](https://xcp-ng.org/forum/topic/5257/problems-building-from-source/16)
- [Delta Backup Restoration] `TypeError: this disposable has already been disposed` [Forum #5257](https://xcp-ng.org/forum/topic/5257/problems-building-from-source/20)
- [Backups] Fix `Error: Chaining alias is forbidden xo-vm-backups/..alias.vhd to xo-vm-backups/....alias.vhd` when backing up a file to S3 [Forum #5226](https://xcp-ng.org/forum/topic/5256/s3-backup-try-it)
- [Delta Backup Restoration] `VDI_IO_ERROR(Device I/O errors)` [Forum #5727](https://xcp-ng.org/forum/topic/5257/problems-building-from-source/4) (PR [#6031](https://github.com/vatesfr/xen-orchestra/pull/6031))
- [Delta Backup] Fix `Cannot read property 'uuid' of undefined` when a VDI has been removed from a backed up VM (PR [#6034](https://github.com/vatesfr/xen-orchestra/pull/6034))
### Released packages
- @vates/compose 2.1.0
- vhd-lib 2.0.2
- xo-vmdk-to-vhd 2.0.1
- @xen-orchestra/backups 0.16.1
- @xen-orchestra/proxy 0.15.4
- xo-server 5.84.2
## **5.65.0** (2021-11-30)
### Highlights
- [VM] Ability to export a snapshot's memory (PR [#6015](https://github.com/vatesfr/xen-orchestra/pull/6015))
- [Cloud config] Ability to create a network cloud config template and reuse it in the VM creation [#5931](https://github.com/vatesfr/xen-orchestra/issues/5931) (PR [#5979](https://github.com/vatesfr/xen-orchestra/pull/5979))
- [Backup/logs] Identify XAPI errors (PR [#6001](https://github.com/vatesfr/xen-orchestra/pull/6001))
- [lite] Highlight selected VM (PR [#5939](https://github.com/vatesfr/xen-orchestra/pull/5939))
### Enhancements
- [S3] Ability to authorize self signed certificates for S3 remote (PR [#5961](https://github.com/vatesfr/xen-orchestra/pull/5961))
### Bug fixes
- [Import/VM] Fix the import of OVA files (PR [#5976](https://github.com/vatesfr/xen-orchestra/pull/5976))
### Released packages
- @vates/async-each 0.1.0
- xo-remote-parser 0.8.4
- @xen-orchestra/fs 0.19.0
- @xen-orchestra/xapi patch
- vhd-lib 2.0.1
- @xen-orchestra/backups 0.16.0
- xo-lib 0.11.1
- @xen-orchestra/proxy 0.15.3
- xo-server 5.84.1
- vhd-cli 0.6.0
- xo-web 5.90.0
## **5.64.0** (2021-10-29)
### Highlights
- [Netbox] Support older versions of Netbox and prevent "active is not a valid choice" error [#5898](https://github.com/vatesfr/xen-orchestra/issues/5898) (PR [#5946](https://github.com/vatesfr/xen-orchestra/pull/5946))
- [Tasks] Filter out short tasks using a default filter (PR [#5921](https://github.com/vatesfr/xen-orchestra/pull/5921))
- [Host] Handle evacuation failure during host shutdown (PR [#5966](https://github.com/vatesfr/xen-orchestra/pull/5966))
- [Menu] Notify user when proxies need to be upgraded (PR [#5930](https://github.com/vatesfr/xen-orchestra/pull/5930))
- [Servers] Ability to use an HTTP proxy between XO and a server (PR [#5958](https://github.com/vatesfr/xen-orchestra/pull/5958))
- [VM/export] Ability to copy the export URL (PR [#5948](https://github.com/vatesfr/xen-orchestra/pull/5948))
- [Pool/advanced] Ability to define network for importing/exporting VMs/VDIs (PR [#5957](https://github.com/vatesfr/xen-orchestra/pull/5957))
- [Host/advanced] Add button to enable/disable the host (PR [#5952](https://github.com/vatesfr/xen-orchestra/pull/5952))
- [Backups] Enable merge worker by default
### Enhancements
- [Jobs] Ability to copy a job ID (PR [#5951](https://github.com/vatesfr/xen-orchestra/pull/5951))
### Bug fixes
- [Backups] Delete unused snapshots related to other schedules (even no longer existing) (PR [#5949](https://github.com/vatesfr/xen-orchestra/pull/5949))
- [Jobs] Fix `job.runSequence` method (PR [#5944](https://github.com/vatesfr/xen-orchestra/pull/5944))
- [Netbox] Fix error when testing plugin on versions older than 2.10 (PR [#5963](https://github.com/vatesfr/xen-orchestra/pull/5963))
- [Snapshot] Fix "Create VM from snapshot" creating a template instead of a VM (PR [#5955](https://github.com/vatesfr/xen-orchestra/pull/5955))
- [Host/Logs] Improve the display of log content (PR [#5943](https://github.com/vatesfr/xen-orchestra/pull/5943))
- [XOA licenses] Fix expiration date displaying "Invalid date" in some rare cases (PR [#5967](https://github.com/vatesfr/xen-orchestra/pull/5967))
- [API/pool.listPoolsMatchingCriteria] Fix `checkSrName`/`checkPoolName` `is not a function` error
### Released packages
- xo-server-netbox 0.3.3
- vhd-lib 1.3.0
- xen-api 0.35.1
- @xen-orchestra/xapi 0.8.0
- @xen-orchestra/backups 0.15.1
- @xen-orchestra/proxy 0.15.2
- vhd-cli 0.5.0
- xapi-explore-sr 0.4.0
- xo-server 5.83.0
- xo-web 5.89.0
## **5.63.0** (2021-09-30)
### Highlights
- [Backup] Go back to previous page instead of going to the overview after editing a job: keeps current filters and page (PR [#5913](https://github.com/vatesfr/xen-orchestra/pull/5913))
- [Health] Do not take into consideration duplicated MAC addresses from CR VMs (PR [#5916](https://github.com/vatesfr/xen-orchestra/pull/5916))
- [Health] Ability to filter duplicated MAC addresses by running VMs (PR [#5917](https://github.com/vatesfr/xen-orchestra/pull/5917))
- [Tables] Move the search bar and pagination to the top of the table (PR [#5914](https://github.com/vatesfr/xen-orchestra/pull/5914))
- [Netbox] Handle nested prefixes by always assigning an IP to the smallest prefix it matches (PR [#5908](https://github.com/vatesfr/xen-orchestra/pull/5908))
### Bug fixes
- [SSH keys] Allow SSH key to be broken anywhere to avoid breaking page formatting (Thanks [@tstivers1990](https://github.com/tstivers1990)!) [#5891](https://github.com/vatesfr/xen-orchestra/issues/5891) (PR [#5892](https://github.com/vatesfr/xen-orchestra/pull/5892))
- [Netbox] Better handling and error messages when encountering issues due to UUID custom field not being configured correctly [#5905](https://github.com/vatesfr/xen-orchestra/issues/5905) [#5806](https://github.com/vatesfr/xen-orchestra/issues/5806) [#5834](https://github.com/vatesfr/xen-orchestra/issues/5834) (PR [#5909](https://github.com/vatesfr/xen-orchestra/pull/5909))
- [New VM] Don't send network config if untouched as all commented config can make Cloud-init fail [#5918](https://github.com/vatesfr/xen-orchestra/issues/5918) (PR [#5923](https://github.com/vatesfr/xen-orchestra/pull/5923))
### Released packages
- xen-api 0.34.3
- vhd-lib 1.2.0
- xo-server-netbox 0.3.1
- @xen-orchestra/proxy 0.14.7
- xo-server 5.82.3
- xo-web 5.88.0
## **5.62.1** (2021-09-17)
### Bug fixes
- [VM/Advanced] Fix conversion from UEFI to BIOS boot firmware (PR [#5895](https://github.com/vatesfr/xen-orchestra/pull/5895))
- [VM/network] Support newline-delimited IP addresses reported by some guest tools
- Fix VM/host stats, VM creation with Cloud-init, and VM backups, with NATted hosts [#5896](https://github.com/vatesfr/xen-orchestra/issues/5896)
- [VM/import] Very small VMDK and OVA files were mangled upon import (PR [#5903](https://github.com/vatesfr/xen-orchestra/pull/5903))
### Released packages
- xen-api 0.34.2
- @xen-orchestra/proxy 0.14.6
- xo-server 5.82.2
## **5.62.0** (2021-08-31)
### Highlights
- [Host] Add warning in case of unmaintained host version [#5840](https://github.com/vatesfr/xen-orchestra/issues/5840) (PR [#5847](https://github.com/vatesfr/xen-orchestra/pull/5847))
- [Backup] Use default migration network if set when importing/exporting VMs/VDIs (PR [#5883](https://github.com/vatesfr/xen-orchestra/pull/5883))
### Enhancements
- [New network] Ability for pool's admin to create a new network within the pool (PR [#5873](https://github.com/vatesfr/xen-orchestra/pull/5873))
- [Netbox] Synchronize primary IPv4 and IPv6 addresses [#5633](https://github.com/vatesfr/xen-orchestra/issues/5633) (PR [#5879](https://github.com/vatesfr/xen-orchestra/pull/5879))
### Bug fixes
- [VM/network] Fix an issue where multiple IPs would be displayed in the same tag when using old Xen tools. This also fixes Netbox's IP synchronization for the affected VMs. (PR [#5860](https://github.com/vatesfr/xen-orchestra/pull/5860))
- [LDAP] Handle groups with no members (PR [#5862](https://github.com/vatesfr/xen-orchestra/pull/5862))
- Fix empty button on small screens (PR [#5874](https://github.com/vatesfr/xen-orchestra/pull/5874))
- [Host] Fix `Cannot read property 'other_config' of undefined` error when enabling maintenance mode (PR [#5875](https://github.com/vatesfr/xen-orchestra/pull/5875))
### Released packages
- xen-api 0.34.1
- @xen-orchestra/xapi 0.7.0
- @xen-orchestra/backups 0.13.0
- @xen-orchestra/fs 0.18.0
- @xen-orchestra/log 0.3.0
- @xen-orchestra/mixins 0.1.1
- xo-server-auth-ldap 0.10.4
- xo-server-netbox 0.3.0
- xo-server 5.82.1
- xo-web 5.87.0
## **5.61.0** (2021-07-30)
### Highlights
- [SR/disks] Display base copies' active VDIs (PR [#5826](https://github.com/vatesfr/xen-orchestra/pull/5826))
- [Netbox] Optionally allow self-signed certificates (PR [#5850](https://github.com/vatesfr/xen-orchestra/pull/5850))
- [Host] When supported, use pool's default migration network to evacuate host [#5802](https://github.com/vatesfr/xen-orchestra/issues/5802) (PR [#5851](https://github.com/vatesfr/xen-orchestra/pull/5851))
- [VM] shutdown/reboot: offer to force shutdown/reboot the VM if no Xen tools were detected [#5838](https://github.com/vatesfr/xen-orchestra/issues/5838) (PR [#5855](https://github.com/vatesfr/xen-orchestra/pull/5855))
### Enhancements
- [Netbox] Add information about a failed request to the error log to help better understand what happened [#5834](https://github.com/vatesfr/xen-orchestra/issues/5834) (PR [#5842](https://github.com/vatesfr/xen-orchestra/pull/5842))
- [VM/console] Ability to rescan ISO SRs (PR [#5841](https://github.com/vatesfr/xen-orchestra/pull/5841))
### Bug fixes
- [VM/disks] Fix `an error has occured` when self service user was on VM disk view (PR [#5841](https://github.com/vatesfr/xen-orchestra/pull/5841))
- [Backup] Protect replicated VMs from being started on specific hosts (PR [#5852](https://github.com/vatesfr/xen-orchestra/pull/5852))
### Released packages
- @xen-orchestra/backups 0.12.2
- @xen-orchestra/proxy 0.14.4
- xo-server-netbox 0.2.0
- xo-web 5.86.0
- xo-server 5.81.2
## **5.60.0** (2021-06-30)
### Highlights
- [VM/disks] Ability to rescan ISO SRs (PR [#5814](https://github.com/vatesfr/xen-orchestra/pull/5814))
- [VM/snapshots] Identify VM's current snapshot with an icon next to the snapshot's name (PR [#5824](https://github.com/vatesfr/xen-orchestra/pull/5824))
### Enhancements
- [OVA import] Improve OVA import error reporting (PR [#5797](https://github.com/vatesfr/xen-orchestra/pull/5797))
- [Backup] Distinguish error messages between cancelation and interrupted HTTP connection
- [Jobs] Add `host.emergencyShutdownHost` to the list of methods that jobs can call (PR [#5818](https://github.com/vatesfr/xen-orchestra/pull/5818))
- [Host/Load-balancer] Log VM and host names when a VM is migrated + category (density, performance, ...) (PR [#5808](https://github.com/vatesfr/xen-orchestra/pull/5808))
- [VM/new disk] Auto-fill disk name input with generated unique name (PR [#5828](https://github.com/vatesfr/xen-orchestra/pull/5828))
### Bug fixes
- [IPs] Handle space-delimited IP address format provided by outdated guest tools [#5801](https://github.com/vatesfr/xen-orchestra/issues/5801) (PR [#5805](https://github.com/vatesfr/xen-orchestra/pull/5805))
- [API/pool.listPoolsMatchingCriteria] Fix `unknown error from the peer` error (PR [#5807](https://github.com/vatesfr/xen-orchestra/pull/5807))
- [Backup] Limit number of connections to hosts, which should reduce the occurrences of `ECONNRESET`
- [Plugins/perf-alert] All mode: only select running hosts and VMs (PR [#5811](https://github.com/vatesfr/xen-orchestra/pull/5811))
- [New VM] Fix summary section always showing "0 B" for RAM (PR [#5817](https://github.com/vatesfr/xen-orchestra/pull/5817))
- [Backup/Restore] Fix _start VM after restore_ [#5820](https://github.com/vatesfr/xen-orchestra/issues/5820)
- [Netbox] Fix a bug where some devices' IPs would get deleted from Netbox (PR [#5821](https://github.com/vatesfr/xen-orchestra/pull/5821))
- [Netbox] Fix an issue where some IPv6 would be deleted just to be immediately created again (PR [#5822](https://github.com/vatesfr/xen-orchestra/pull/5822))
### Released packages
- @vates/decorate-with 0.1.0
- xen-api 0.33.1
- @xen-orchestra/xapi 0.6.4
- @xen-orchestra/backups 0.12.0
- @xen-orchestra/proxy 0.14.3
- vhd-lib 1.1.0
- vhd-cli 0.4.0
- xo-server-netbox 0.1.2
- xo-server-perf-alert 0.3.2
- xo-server-load-balancer 0.7.0
- xo-server 5.80.0
- xo-web 5.84.0
## **5.59.0** (2021-05-31)
### Highlights
- [Smart backup] Report missing pools [#2844](https://github.com/vatesfr/xen-orchestra/issues/2844) (PR [#5768](https://github.com/vatesfr/xen-orchestra/pull/5768))
- [Metadata Backup] Add a warning on restoring a metadata backup (PR [#5769](https://github.com/vatesfr/xen-orchestra/pull/5769))
- [Netbox] [Plugin](https://xen-orchestra.com/docs/advanced.html#netbox) to synchronize pools, VMs and IPs with [Netbox](https://netbox.readthedocs.io/en/stable/) (PR [#5783](https://github.com/vatesfr/xen-orchestra/pull/5783))
### Enhancements
- [SAML] Compatible with users created with other authentication providers (PR [#5781](https://github.com/vatesfr/xen-orchestra/pull/5781))
### Bug fixes
- [SDN Controller] Private network creation failure when the tunnels were created on different devices [Forum #4620](https://xcp-ng.org/forum/topic/4620/no-pif-found-in-center) (PR [#5793](https://github.com/vatesfr/xen-orchestra/pull/5793))
### Released packages
- @xen-orchestra/emit-async 0.1.0
- @xen-orchestra/defined 0.0.1
- xo-collection 0.5.0
- @xen-orchestra/log 0.2.1
- xen-api 0.33.0
- @xen-orchestra/xapi 0.6.3
- xo-server-auth-saml 0.9.0
- xo-server-backup-reports 0.16.10
- xo-server-netbox 0.1.1
- xo-server-sdn-controller 1.0.5
- xo-web 5.82.0
- xo-server 5.79.5
## **5.58.1** (2021-05-06)
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Bug fixes
- [Backups] Better handling of errors in remotes, fix `task has already ended`
## **5.57.1** (2021-04-13)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
### Enhancements
- [Host/Load-balancer] Add option to disable migration (PR [#5706](https://github.com/vatesfr/xen-orchestra/pull/5706))
- [Proxy] _Redeploy_ now works when the bound VM is missing
- [VM template] Fix confirmation modal doesn't appear on deleting a default template (PR [#5644](https://github.com/vatesfr/xen-orchestra/pull/5644))
- [OVA VM Import] Fix imported VMs all having the same MAC addresses
- [Disk import] Fix `an error has occurred` when importing wrong format or corrupted files [#5663](https://github.com/vatesfr/xen-orchestra/issues/5663) (PR [#5683](https://github.com/vatesfr/xen-orchestra/pull/5683))
### Released packages


> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [Metadata Backup] Add a warning on restoring a metadata backup (PR [#5769](https://github.com/vatesfr/xen-orchestra/pull/5769))
- [SAML] Compatible with users created with other authentication providers (PR [#5781](https://github.com/vatesfr/xen-orchestra/pull/5781))
- [Netbox] [Plugin](https://xen-orchestra.com/docs/advanced.html#netbox) to synchronize pools, VMs and IPs with [Netbox](https://netbox.readthedocs.io/en/stable/) (PR [#5783](https://github.com/vatesfr/xen-orchestra/pull/5783))
- Limit number of concurrent VM migrations per pool to `3` [#6065](https://github.com/vatesfr/xen-orchestra/issues/6065) (PR [#6076](https://github.com/vatesfr/xen-orchestra/pull/6076))
  Can be changed in `xo-server`'s configuration file: `xapiOptions.vmMigrationConcurrency`
- [Proxy] Now ships a reverse proxy (PR [#6072](https://github.com/vatesfr/xen-orchestra/pull/6072))
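The migration-concurrency limit above can be tuned in `xo-server`'s TOML configuration file; a minimal sketch (section placement inferred from the key path `xapiOptions.vmMigrationConcurrency`, `3` being the stated default):

```toml
# Sketch: per-pool VM migration concurrency (3 is the default mentioned above).
[xapiOptions]
vmMigrationConcurrency = 3
```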
### Bug fixes
> Users must be able to say: “I had this issue, happy to know it's fixed”
- [Smart backup] Report missing pools [#2844](https://github.com/vatesfr/xen-orchestra/issues/2844) (PR [#5768](https://github.com/vatesfr/xen-orchestra/pull/5768))
- [Backup] Detect and clear orphan merge states, fix `ENOENT` errors (PR [#6087](https://github.com/vatesfr/xen-orchestra/pull/6087))
### Packages to release
> Packages will be released in the order they are here, therefore, they should
> be listed by inverse order of dependency.
>
> Global order:
>
> - @vates/...
> - @xen-orchestra/...
> - xo-server-...
> - xo-server
> - xo-web
> Rule of thumb: add packages on top.
>
> The format is the following: - `$packageName` `$version`
>
>
> In case of conflict, the highest (lowest in previous list) `$version` wins.
- @xen-orchestra/emit-async minor
- @xen-orchestra/defined patch
- xo-collection minor
- @xen-orchestra/log patch
- xen-api minor
- xo-server-auth-saml minor
- xo-server-backup-reports patch
- xo-server-netbox minor
- xo-web minor
- xo-server patch
- @xen-orchestra/backups minor
- @xen-orchestra/backups-cli minor
- @xen-orchestra/proxy minor
- xo-server minor


<!--
Welcome to the issue section of Xen Orchestra!
Here you can:
- report an issue
- propose an enhancement
- ask a question
Please, respect this template as much as possible, it helps us sort
the issues :)
-->
### Context
- **XO origin**: the sources / XO Appliance
- **Versions**:
- Node: **FILL HERE**
- xo-web: **FILL HERE**
- xo-server: **FILL HERE**
### Expected behavior
<!-- What you expect to happen -->
### Current behavior
<!-- What is actually happening -->


We need your feedback on this feature!
The plugin "web-hooks" needs to be installed and loaded for this feature to work.
You can trigger an HTTP POST request to a URL when a Xen Orchestra API method is called or when a backup job runs.
- Go to Settings > Plugins > Web hooks
- Add new hooks
- For each hook, configure:
- Method: the XO API method that will trigger the HTTP request when called. For backup jobs, choose `backupNg.runJob`.
- Type:
- pre: the request will be sent when the method is called
- post: the request will be sent after the method action is completed
- pre/post: both
- URL: the full URL which the requests will be sent to
- Wait for response: you can choose to wait for the web hook response before the method is actually called ("pre" hooks only). This can be useful if you need to automatically run some tasks before a certain method is called.
- Save the plugin configuration
From now on, a request will be sent to the corresponding URLs when a configured method is called by an XO client.
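The dispatch rule described above can be sketched as a small predicate (the `hook` and `event` shapes here are illustrative assumptions, not the plugin's actual internals):

```javascript
// Sketch: should a configured hook fire for a given API call?
// `hook` mirrors the settings described above (method, type, url);
// the exact field names are assumptions for illustration.
function shouldFire(hook, event) {
  // "pre/post" matches both phases; otherwise the phase must match exactly.
  const typeMatches = hook.type === 'pre/post' || hook.type === event.type
  return typeMatches && hook.method === event.method
}
```

For example, a `pre/post` hook configured on `backupNg.runJob` fires for both phases of that method, and never for any other method.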
![](./assets/netbox.png)
### Netbox side
- Go to your Netbox interface
- Configure prefixes:
- Go to IPAM > Prefixes > Add
- Generate a token:
- Go to Admin > Tokens > Add token
- Create a token with "Write enabled"
- The owner of the token must have at least the following permissions:
- View permissions on:
- extras > custom-fields
- ipam > prefixes
- All permissions on:
- ipam > ip-addresses
- virtualization > cluster-types
- virtualization > clusters
- virtualization > interfaces
- virtualization > virtual-machines
- Add a UUID custom field (for **Netbox 2.x**):
- Go to Admin > Custom fields > Add custom field
- Create a custom field called "uuid" (lower case!)
- Assign it to object types `virtualization > cluster` and `virtualization > virtual machine`
![](./assets/customfield.png)
:::tip
In Netbox 3.x, custom fields can be found directly in the site (no need to go in the admin section). It's available in "Other/Customization/Custom Fields". After creation of the `uuid` field, assign it to the object types `virtualization > cluster` and `virtualization > virtual machine`.
:::
### In Xen Orchestra
- Go to Xen Orchestra > Settings > Plugins > Netbox and fill out the configuration:
- Endpoint: the URL of your Netbox instance (e.g.: `https://netbox.company.net`)
- Unauthorized certificate: only for HTTPS, enable this option if your Netbox instance uses a self-signed SSL certificate
- Token: the token you generated earlier
- Pools: the pools you wish to automatically synchronize with Netbox
- Interval: the time interval (in hours) between 2 auto-synchronizations. Leave empty if you don't want to synchronize automatically.





![Mattermost configuration](./assets/DocImg8.png)
![Mattermost](./assets/DocImg9.png)
## Web hooks
You can also configure web hooks to be sent to a custom server before and/or after a backup job runs. This won't send a formatted report but raw JSON data that you can use in custom scripts on your side. Follow the [web-hooks plugin documentation](./advanced.html#web-hooks) to configure it.


![](./assets/log-runId.png)
## Exclude disks
During a backup job, you can exclude some of the VM's disks from being saved. Doing so is trivial: just edit the VM disk name and add `[NOBAK]` before the current name, e.g. `data-disk` becomes `[NOBAK] data-disk` (with or without a space, it doesn't matter).
Disks marked with `[NOBAK]` will then be ignored in all subsequent backups.
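The naming convention can be sketched as a simple prefix check (a hypothetical helper for illustration, not the actual backup code):

```javascript
// Sketch: a disk whose name starts with [NOBAK] is excluded from backups,
// with or without a space after the tag. Hypothetical helper for illustration.
const isExcludedFromBackup = nameLabel => nameLabel.startsWith('[NOBAK]')
```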
## Schedule
:::tip
### Concurrency
Concurrency is a parameter that lets you define how many VMs your backup job will manage simultaneously.
:::tip
- The default concurrency value is 2 if left empty.
:::
Let's say you want to back up 50 VMs (each with 1x disk) at 3:00 AM. There are **2 different strategies**:
1. backup VM #1 (snapshot, export, delete snapshots) **then** backup VM #2 -> _fully sequential strategy_
2. snapshot all VMs, **then** export all snapshots, **then** delete all snapshots for finished exports -> _fully parallel strategy_
The first purely sequential strategy will lead to a big problem: **you can't predict when a snapshot of your data will occur**. Because you can't predict the first VM export time (let's say 3 hours), then your second VM will have its snapshot taken 3 hours later, at 6 AM. We assume that's not what you meant when you specified "backup everything at 3 AM". You would end up with data from 6 AM (and later) for other VMs.
Strategy number 2 is better in this aspect: all the snapshots will be taken at 3 AM. However **it's risky without limits**: it means potentially doing 50 snapshots or more at once on the same storage. **Since XenServer doesn't have a queue**, it will try to do all of them at once. This is also prone to race conditions and could cause crashes on your storage.
So what's the best choice? Continue below to learn how to best configure concurrency for your needs.
#### Best choice
By default the _parallel strategy_ is, on paper, the most logical one. But we need to give it some limits on concurrency.
:::tip
Xen Orchestra can be connected to multiple pools at once. So the concurrency number applies **per pool**.
If you need your backup to be done at a specific time you should consider creating a specific backup task for this VM.
:::
Each step has its own concurrency to fit its requirements:
- **snapshot process** needs to be performed with the lowest concurrency possible. 2 is a good compromise: one snapshot is fast, but a stuck snapshot won't block the whole job. That's why a concurrency of 2 is not too bad on your storage. Basically, at 3 AM, we'll do all the VM snapshots needed, 2 at a time.
- **disk export process** is bottlenecked by XCP-ng/XenServer, so to get the most out of it, you can use up to 12 in parallel. As soon as a snapshot is done, the export process will start, until reaching 12 at once. Then, as soon as one of those 12 is finished, another one will start until there is nothing more to export.
- **VM export process:** the 12 disk export limit mentioned above applies to VDI exports, which happen during delta exports. For full VM exports (for example, for full backup job types), there is a built-in limit of 2. This means if you have a full backup job of 6 VMs, only 2 will be exported at once.
- **snapshot deletion** can't happen all at once because the previous step durations are random - no need to implement concurrency on this one.
This is how it currently works in Xen Orchestra. But sometimes, you also want to have _sequential_ backups combined with the _parallel strategy_. That's why we introduced a sequential option in the advanced section of backup-ng:
:::tip
0 means it will be fully **parallel** for all VMs.
:::
:::danger
High concurrency could impact your dom0 and network performances.
:::
If your job contains 50 VMs, for example, you could specify a sequential backup with a limit of "25 at once" (enter 25 in the concurrency field). This means at 3 AM, we'll do 25 snapshots (2 at a time), then exports. As soon as the first VM backup is completely finished (snapshot removed), we'll start the 26th and so on, always keeping a maximum of 25 VM backups going in parallel.
You should be aware of your hardware limitations when defining the best concurrency for your XCP-ng infrastructure; never set the concurrency too high or you could impact your VMs' performance.
The best way to find the right concurrency for your setup is to increase it slowly and watch the effect on total backup time.
To summarize, if you set your concurrency to 6 and you have 20 VMs to back up, the process will be the following:
- We start the backup of the first 6 VMs.
- When one VM backup has ended, we launch the next VM backup.
- We keep launching new VM backups until all 20 VMs are finished, always keeping 6 backups running.
Removing the snapshot will trigger the coalesce process for the first VM; this is an automated action, not triggered directly by the backup job.
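The bounded-parallelism behaviour described above can be sketched as a generic helper (an illustrative sketch, not Xen Orchestra's actual scheduler):

```javascript
// Sketch: run `tasks` (async functions) with at most `limit` running at once.
// As soon as one task finishes, the next one is started, mirroring the
// "always keep N backups running" behaviour described above.
async function runWithConcurrency(tasks, limit) {
  const results = []
  let next = 0
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++ // claim the next task index (single-threaded, no race)
      results[i] = await tasks[i]()
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker))
  return results
}
```

With 20 tasks and a limit of 6, a 7th task only starts once one of the first 6 has resolved.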
## Backup modifier tags


Unlike other types of backup jobs, which delete the associated snapshot once the job is done and it has been exported, delta backups always keep a snapshot of every VM in the backup job and use it for the delta. Do not delete these snapshots!
## Exclude disks
During a delta backup job, you can exclude some of the VM's disks from being saved. Doing so is trivial: just edit the VM disk name and add `[NOBAK]` before the current name, e.g. `data-disk` becomes `[NOBAK] data-disk` (with or without a space, it doesn't matter).
Disks marked with `[NOBAK]` will then be ignored in all subsequent backups.
## Delta backup initial seed
If you don't want to send the initial full backup directly to the destination, you can create a local delta backup first, then transfer the files to your destination.


### NodeJS
XO needs Node.js. **Please always use latest Node LTS**.
We'll consider at this point that you've got a working Node on your box, e.g.:
```
$ node -v
v14.17.0
```
If not, see [this page](https://nodejs.org/en/download/package-manager/) for instructions on how to install Node.
You need to use the `git` source code manager to fetch the code. Ideally, you should run XO as a non-root user, and if you choose to, you need to set up `sudo` to be able to mount NFS remotes. As your chosen non-root (or root) user, run the following:
```
git clone -b master https://github.com/vatesfr/xen-orchestra
```
> Note: xo-server and xo-web have been migrated to the [xen-orchestra](https://github.com/vatesfr/xen-orchestra) mono-repository - so you only need the single clone command above


# Full backups
You can schedule full backups of your VMs, by exporting them to the local XOA file-system, or directly to an NFS or SMB share. The "retention" parameter allows you to modify how many backups are retained (by removing the oldest one).
[![](./assets/backupexample.png)](https://xen-orchestra.com/blog/backup-your-xenserver-vms-with-xen-orchestra/)


:::tip
- Default Web UI credentials are `admin@admin.net` / `admin`
- Default console/SSH credentials are not set, you need to set them [as described here](troubleshooting.md#set-or-recover-xoa-vm-password).
:::
### Registration
#### NodeJS
XO needs Node.js. **Please always use latest Node LTS**.
We'll consider at this point that you've got a working Node on your box, e.g.:
```
$ node -v
v14.17.0
```
If not, see [this page](https://nodejs.org/en/download/package-manager/) for instructions on how to install Node.


The global situation (resource usage) is examined **every minute**.
:::tip
TODO: more details to come here
:::
## VM anti-affinity
VM anti-affinity is a feature that prevents VMs with the same user tags from running on the same host. This functionality is available directly in the load-balancer plugin.
This way, you can avoid having pairs of redundant VMs or similar running on the same host.
Let's look at a simple example: you have multiple VMs running MySQL and PostgreSQL with high availability/replication. Obviously, you don't want the VMs hosting the replicated databases to end up on the same physical host, or you could lose both replicas at once. Just create your plan like this:
![](./assets/antiaffinity.png)
- Simple plan: means no active load balancing mechanism used
- Anti-affinity: we added our 2 tags, meaning any VM with one of these tags will never run (if possible) on the same host as another VM having the same tag
You can also use the performance plan with the anti-affinity mode activated to continue to migrate non-tagged VMs.
:::tip
This feature is not limited by the number of VMs using the same tag, i.e. if you have 6 VMs with the same anti-affinity tag and 2 hosts, the plugin will always try to place 3 VMs on each host. It will distribute the VMs as fairly as possible, and it takes precedence (in most cases) over the performance algorithm.
:::
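The placement idea behind anti-affinity can be sketched like this (illustrative data shapes and function name, not the plugin's implementation): for a tagged VM, pick the host currently running the fewest VMs with that tag.

```javascript
// Sketch: choose the host with the fewest VMs carrying `tag`.
// `hosts` uses an illustrative shape: [{ name, vms: [{ tags: [...] }] }].
function pickHost(hosts, tag) {
  const count = host => host.vms.filter(vm => vm.tags.includes(tag)).length
  return hosts.reduce((best, host) => (count(host) < count(best) ? host : best))
}
```

Applied repeatedly as tagged VMs are placed, this spreads them as evenly as the host count allows, which matches the 6-VMs-on-2-hosts example above.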


:::tip
XCP-ng doesn't limit VMs to 32 vCPU
:::
### VDI live migration
Thanks to Xen Storage Motion, it's easy to move a VM disk from one storage location to another, while the VM is running! This feature can help you migrate from your local storage to a SAN, or just upgrade your SAN without any downtime.
::: danger
As specified in the [documentation](https://xcp-ng.org/docs/requirements.html#pool-requirements) your pool shouldn't consist of hosts from different CPU vendors.
:::
::: warning
- Even with matching CPU vendors, if the CPU models differ, XCP-ng will level the pool's CPU capabilities down to those of the CPU with the fewest instructions.
- All the hosts in a pool must run the same XCP-ng version.
:::
### Creating a pool
First you should add your new host to XOA by going to New > Server as described in [the relevant chapter](manage_infrastructure.md#add-a-host).
