Compare commits

...

535 Commits

Author SHA1 Message Date
Julien Fontanet
d7e4a65ba7 WiP 2023-11-23 16:28:16 +01:00
Julien Fontanet
f26919cca7 chore(xapi/VM_import): typo snapshots → snapshot 2023-11-23 14:47:38 +01:00
Julien Fontanet
78afe122a6 chore(xo-server/proxy.checkHealth): call checkProxyHealth 2023-11-23 14:05:02 +01:00
MlssFrncJrg
a9fbcf3962 feat(xo-web/new VM): always show ISO selector (#7166)
Fixes #3464
2023-11-22 11:04:30 +01:00
Michael Bennett
887b49ebbf docs(installation): Fedora & CentOS wrong package libvhd-utils (#7200)
Under Packages, the installation of package `libvhdi-utils` is incorrect for Fedora/CentOS; it should be replaced by `libvhdi-tools`.
2023-11-21 17:46:17 +01:00
Florent BEAUCHAMP
858ecbc217 fix(xapi/VDI_importContent): other_config entries must be strings (#7198)
Introduced by ffd523679de80b36b2eacd30cc98de3c588a2b77
2023-11-21 16:55:53 +01:00
Florent BEAUCHAMP
ffd523679d feat(backups): update VDI importing status in its name_label 2023-11-21 14:38:49 +01:00
Florent BEAUCHAMP
bd9db437f1 feat(xapi/VDI_importContent): store task UUID and stream length into other_config 2023-11-21 14:38:49 +01:00
Florent BEAUCHAMP
0365bacfbb feat(backups): show more detail on restored VM (#7186) 2023-11-21 12:28:53 +01:00
MlssFrncJrg
f3e0227c55 feat(xo-web/console): add disabled console message (#7161)
Fixes #6319
2023-11-21 10:39:35 +01:00
Florent BEAUCHAMP
4504141cbf refactor(backups/importIncrementalVm): move base detection to callers (#7165) 2023-11-20 14:52:59 +01:00
b-Nollet
ecbbf878d0 chore(xen-api): convert to ESM (#7181) 2023-11-20 14:32:44 +01:00
MlssFrncJrg
c1faaa3107 fix(xo-server/resource-set): fix error when changing VM resource set (#7144) 2023-11-20 14:19:27 +01:00
Julien Fontanet
59f04b4a6b chore: format with Prettier 2023-11-20 12:34:30 +01:00
Julien Fontanet
781b070e74 fix(xen-api/examples/import-vdi): params handling 2023-11-20 11:45:45 +01:00
Julien Fontanet
1911386aba chore: refresh yarn.lock 2023-11-20 09:55:07 +01:00
MlssFrncJrg
5b0339315f docs(Support): remove Partner Program (#7099) 2023-11-20 09:42:21 +01:00
b-Nollet
5fe53dfa99 refactor(xapi-explore-sr): convert to ESM (#7191) 2023-11-17 16:55:11 +01:00
b-Nollet
06068cdcc6 refactor(cr-seed-cli): convert to ESM (#7192) 2023-11-17 16:46:36 +01:00
Julien Fontanet
c88cc2b020 chore(xo-server/token.create): allow 60s for expiresIn
It makes more sense for the min accepted value to be 60s than 60,001ms.
2023-11-17 10:57:48 +01:00
Pierre Donias
03de8ad481 docs(netbox): update steps and screenshots with latest version (#7182) 2023-11-16 16:06:01 +01:00
Julien Fontanet
08ba7e7253 chore: refresh yarn.lock 2023-11-16 10:21:17 +01:00
Florent BEAUCHAMP
9ca3f3df26 fix(xo-vmdk-to-vhd): improve compatibility of OVAs with disks bigger than 8.2GB (#7183)
Following #7047, from https://xcp-ng.org/forum/topic/7946/ova-export-not-functional?_=1700051758755

OVAs exported from XO with more than 8.2GB of data per disk can't be imported into VirtualBox.

With tar-stream@3, pack and entry are now streams.
2023-11-15 16:23:33 +01:00
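Since tar-stream@3's `pack()` and `entry()` are plain streams, a large disk can be piped into the archive without buffering. A minimal sketch of that usage (the `addDisk` helper and its arguments are illustrative, not the actual xo-vmdk-to-vhd code):

```js
import tar from 'tar-stream'
import { pipeline } from 'node:stream/promises'

// illustrative helper: stream one disk into a tar entry of known size
async function addDisk(pack, name, size, diskStream) {
  // with tar-stream@3, entry() returns a proper writable stream
  const entry = pack.entry({ name, size })
  await pipeline(diskStream, entry)
}
```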
Mathieu
511908bb7d feat(lite/pool/VMs): ability to export selected VMs (#7174) 2023-11-15 15:29:25 +01:00
Thierry Goettelmann
4351aad312 feat(lite): new FormByteSize component (#6741) 2023-11-15 15:28:46 +01:00
Florent BEAUCHAMP
af7aa29c91 feat(nbd-client): various fixes (#6964) 2023-11-15 10:04:09 +01:00
Thierry Goettelmann
315d626055 fix(lite/story): code highlight modal path (#7180) 2023-11-15 09:58:15 +01:00
Pierre Donias
7af0899800 feat(netbox): sync XO users as Netbox tenants (#7158)
See Zammad#11356
See Zammad#17364
See Zammad#18409
2023-11-14 15:25:56 +01:00
Florent BEAUCHAMP
46ec2dfd56 fix(vmware-explorer): better handling of VM import without any storage (#7168) 2023-11-14 15:14:13 +01:00
Thierry Goettelmann
b2348474c3 fix(lite/pool): host patches list is broken if changelog property is empty (#7169) 2023-11-14 15:08:56 +01:00
Julien Fontanet
836300755a feat: release 5.88.2 2023-11-13 15:07:43 +01:00
Julien Fontanet
55c8c8a6e9 feat(xo-server): 5.126.0 2023-11-13 11:41:08 +01:00
Julien Fontanet
38e32cd24c chore: update dev deps 2023-11-13 09:48:54 +01:00
Julien Fontanet
5ceacfaf5a fix(xo-server/redis): fix searching with multiple indexes
Introduced by 36b94f745
2023-11-12 22:18:19 +01:00
Thierry Goettelmann
1ee6b106b9 feat(lite/ui): compact layout (#7159) 2023-11-10 16:26:20 +01:00
Julien Fontanet
eaef4f22d2 fix(xo-web/settings/logs): use template when reporting
Related to #7142
2023-11-10 11:33:01 +01:00
Julien Fontanet
96025df12f feat(xo-server): only create a single token per web client (and user)
Related to e07e2d3cc

Similar to 581b42fa9
2023-11-09 17:13:10 +01:00
Mathieu
a8aac295eb fix(lite/login): correctly handle login from slave (#7110) 2023-11-09 15:00:29 +01:00
Mathieu
83141989f0 feat(xolite/modals): add onClose event (#7167) 2023-11-09 10:57:24 +01:00
Julien Fontanet
9dea52281d docs(installation): make explicit that Redis should be started 2023-11-07 17:10:35 +01:00
Julien Fontanet
2164c72034 fix(xo-server): log redis errors
Avoid unhandled error events.
2023-11-07 16:08:41 +01:00
Julien Fontanet
0d0c38f3b5 fix(backups): create suspend VDI on correct SR
Introduced by a958fe86d
2023-11-07 15:55:14 +01:00
Pierre Donias
e5be21a590 feat(lite): 0.1.5 (#7162) 2023-11-07 15:49:12 +01:00
Julien Fontanet
bc1a8be862 chore: fix formatting 2023-11-07 14:33:16 +01:00
Mathieu
3df4dbaae7 feat: release 5.88.1 (#7163) 2023-11-07 14:30:53 +01:00
Julien Fontanet
8f2cfebda6 feat(xo-server/rest-api): add users collection 2023-11-07 12:51:12 +01:00
Julien Fontanet
0d00c1c45f chore: update dev deps 2023-11-07 12:32:38 +01:00
Mathieu
9886e06d6a feat: technical release (#7160) 2023-11-07 10:18:09 +01:00
Thierry Goettelmann
478dbdfe41 feat(lite): new modal management (#7134) 2023-11-07 09:49:02 +01:00
Florent BEAUCHAMP
2bfdb60dda fix(fs): handle object storage server not implementing Object lock (#7157) 2023-11-06 17:18:00 +01:00
Pierre Donias
cabd04470d feat(xo-server-netbox): use Netbox version instead of Netbox API version (#7138)
Netbox version is more precise (X.Y.Z instead of X.Y)
2023-11-06 17:10:10 +01:00
Florent BEAUCHAMP
f6819b23f9 fix(xo-web/dashboard): empty VDIs shouldn't be flagged as orphan (#7102)
Fixes zammad#15524
2023-11-06 13:57:44 +01:00
Julien Fontanet
c9dbcf1384 fix(proxy/backup.importVmBackup): only dispose resources at the end (#7152)
Fixes #7052

Fixes zammad#17383

When a stream was returned, the handler returned immediately, which disposed of the resource before the stream had been fully consumed.

Because the disposable has a 5-minute debounce delay, the problem only became apparent after 5 minutes.
2023-11-06 10:30:34 +01:00
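A hedged sketch of the fix's shape (names are illustrative, not the actual proxy code): tie disposal to the end of the returned stream instead of to the handler returning.

```js
import { finished } from 'node:stream'

// illustrative: getBackupStream() is a hypothetical stand-in for the
// disposable that yields the backup stream and its dispose function
async function importVmBackup(getBackupStream) {
  const { stream, dispose } = await getBackupStream()
  // dispose only once the stream has ended or errored, not on return
  finished(stream, () => dispose())
  return stream
}
```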
Pierre Donias
457fec0bc8 fix(xo-server-netbox): properly delete all interfaces that don't have a UUID (#7153)
Fixes zammad#18812

Introduced by 3b1bcc67ae

The first step of synchronizing VIFs with Netbox interfaces is to clean up any interface attached to the Netbox VM that isn't found on the XO VM, *based on their UUID*, including the interfaces that don't have a UUID at all (`uuid` is `null`).

But by looping over the keyed-by-UUID collection, we could only ever delete at most *one* null-UUID interface, the other ones being dropped by `keyBy`.

Using the flat-array collection instead makes sure all the interfaces are handled.
2023-11-06 10:28:43 +01:00
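The `keyBy` pitfall is easy to reproduce with lodash; this small example (data made up) shows why at most one null-UUID interface was ever deleted:

```js
import _ from 'lodash'

const interfaces = [
  { id: 1, uuid: null },
  { id: 2, uuid: null },
  { id: 3, uuid: 'abc' },
]

// keyBy keeps a single entry per key: both null-UUID interfaces collapse
// onto the "null" key, so the first one is silently dropped
_.keyBy(interfaces, 'uuid')
// => { null: { id: 2, uuid: null }, abc: { id: 3, uuid: 'abc' } }

// iterating the flat array instead sees every null-UUID interface
interfaces.filter(itf => itf.uuid === null).length // => 2
```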
Pierre Donias
db99a22244 fix(xo-web/New network): only hide bond-PIFs when creating a bonded network (#7151)
Fixes #7150
See https://xcp-ng.org/forum/topic/7918
Introduced by dbdc5f3e3b
2023-11-03 11:22:10 +01:00
Pierre Donias
89d8adc6c6 feat(netbox): expose raw HTTP body if it cannot be JSON-parsed (#7146) 2023-11-02 14:28:06 +01:00
Pierre Donias
a3ea70c61c fix(xo-server-netbox): fix site property null/undefined cases (#7145)
Introduced by 1d7559ded2
2023-10-31 16:16:38 +01:00
Mathieu
ae0f3b4fe0 feat: release 5.88.0 (#7143) 2023-10-31 14:58:50 +01:00
Mathieu
2552ef37d2 feat: technical release (#7141) 2023-10-31 10:09:35 +01:00
Pierre Donias
9803e8c6cb feat(xo-web/patches): warning about updating pool master first (#7140) 2023-10-31 09:51:17 +01:00
Florent BEAUCHAMP
3410cbc3b9 fix(backups): use VDI virtual_size instead of physical_size
`physical_size` appears to be broken
2023-10-30 15:55:30 +01:00
Florent BEAUCHAMP
93fce0d4bf fix(backups): pass type to Xapi#getRecord
Fixes #7131

Introduced by 37b211376
2023-10-30 15:55:30 +01:00
MlssFrncJrg
dbdc5f3e3b feat(xo-web/New network): don't show PIFs that belong to a bond (#7136) 2023-10-30 15:47:38 +01:00
Julien Fontanet
581b42fa9d feat(xo-cli): only create a single token per instance (and user) 2023-10-30 15:47:17 +01:00
Julien Fontanet
e07e2d3ccd feat(xo-server/token): client info support 2023-10-30 15:47:17 +01:00
Mathieu
ad928ec23d fix(xo-web/licenses/XOSTOR): various fixes on XOSTOR licenses (#7137)
Introduced by #6983
2023-10-30 14:55:11 +01:00
Pierre Donias
1d7559ded2 fix(xo-server-netbox/VM): explicitly assign site (#7124)
See Zammad#17766
See https://xcp-ng.org/forum/topic/7887
2023-10-30 11:32:12 +01:00
Mathieu
9099b58557 feat: technical release (#7132) 2023-10-27 16:13:04 +02:00
Julien Fontanet
9e70397240 fix(xo-server/redis): fix indexes handling
Introduced by 225a67ae3
2023-10-27 11:27:25 +02:00
Thierry Goettelmann
5f69b0e9a0 feat(lite/console): new console toolbar (#7088) 2023-10-27 10:27:51 +02:00
Julien Fontanet
2a9bff1607 chore(xo-server/importConfig): don't use deptree 2023-10-27 10:14:02 +02:00
Pierre Donias
9e621d7de8 feat(lite/header): replace logo with "XO LITE" (#7118) 2023-10-27 09:16:28 +02:00
Mathieu
3e5c73528d feat(xo-server,xo-web/XOSTOR): XOSTOR implementation (#6983)
See https://xcp-ng.org/forum/topic/5361
2023-10-26 16:58:59 +02:00
Pierre Donias
397b5cd56d fix(xo-server/snapshot): allow self-service user that is a member of a group to snapshot (#7129)
Introduced by a88798cc22
See Zammad#18478
2023-10-26 16:08:43 +02:00
Julien Fontanet
55cb6042e8 chore(yarn.lock): update dev deps 2023-10-26 11:00:14 +02:00
Pierre Donias
339d920b78 feat(xo-web/proxy): ability to open support tunnel on XO Proxy (#7127)
Requires #7126
2023-10-25 17:26:06 +02:00
Julien Fontanet
f14f716f3d feat(xo-server/api): proxy.openSupportTunnel (#7126)
The goal is to provide an easier way for the support team to open a tunnel on a proxy appliance.

This is the server side of this feature.
2023-10-25 17:12:17 +02:00
Julien Fontanet
fb83d1fc98 feat(xo-server/api): ignorable parameters (#7125) 2023-10-25 15:49:41 +02:00
Julien Fontanet
62208e7847 fix(xo-server-transport-xmpp): fix loading (#7082)
Fixes https://xcp-ng.org/forum/post/66402

Introduced by d6fc86b6b
2023-10-25 14:36:40 +02:00
Julien Fontanet
df91772f5c chore(xo-server/server): use builtin (un)serialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
cf8a9d40be chore(xo-server/remote): use builtin (un)serialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
93d1c6c3fc chore(xo-server/plugin-metadata): use builtin (un)serialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
f1fa811e5c chore(xo-server/user): use builtin (un)serialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
5a9812c492 chore(xo-server/group): use builtin (un)serialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
b53d613a64 chore(xo-server/token): use builtin unserialize 2023-10-25 11:48:53 +02:00
Julien Fontanet
225a67ae3b chore(xo-server/redis): proper (un)serialization support 2023-10-25 11:48:53 +02:00
Mathieu
c7eb7db463 feat(xo-web/about): display if XO from source is up to date (#7091)
Fixes #5934
2023-10-24 17:14:01 +02:00
Pierre Donias
edfa729672 chore(lite/assets): remove darkreader properties in SVG files (#7121) 2023-10-24 16:40:37 +02:00
Mathieu
77d9798319 fix(xo-web/vtpm): fix various `an error has occurred` issues (#7122)
Introduced by 8834af65f7
Introduced by 1a1dd0531d

Fix `an error has occurred` in the VM advanced tab and on the VM creation form
if the user does not have pool permission.
2023-10-24 16:26:36 +02:00
Pierre Donias
680f1e2f07 chore(lite): serve Poppins font internally (#7117) 2023-10-24 15:19:54 +02:00
Julien Fontanet
7c009b0fc0 feat(xo-server): support reading JSON records in Redis
This allows forward compatibility with future versions, which will store records as JSON.
2023-10-23 15:13:28 +02:00
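A minimal sketch of what such a forward-compatible read path can look like (assumed, not the actual xo-server code): try the new JSON format first and fall back to the legacy serialization.

```js
// legacyUnserialize is a hypothetical stand-in for the existing format's parser
function parseRecord(raw, legacyUnserialize) {
  try {
    return JSON.parse(raw) // future format: records stored as JSON
  } catch {
    return legacyUnserialize(raw) // current format
  }
}
```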
Pierre Donias
eb7de4f2dd feat(xo-web/self): show # of VMs that belong to each Resource Set (#7114)
See Zammad#17568
2023-10-23 15:03:30 +02:00
Olivier Lambert
2378399981 docs: update project's README (#7116) 2023-10-23 14:25:03 +02:00
Florent BEAUCHAMP
37b2113763 feat(fs/s3): compute sensible chunk size for uploads 2023-10-23 10:23:50 +02:00
Florent BEAUCHAMP
5048485a85 feat(fs/s3): object lock mode needs content MD5
and the middleware consumes additional memory
2023-10-23 10:23:50 +02:00
Florent BEAUCHAMP
9e667533e9 fix(fs/s3): throw error if upload >50GB 2023-10-23 10:23:50 +02:00
MlssFrncJrg
1fac7922b4 feat(xo-web/dashboard/health): VDIs to coalesce warning contains the number (#7111)
Fixes Zammad#17577
2023-10-20 15:53:24 +02:00
Julien Fontanet
1a0e5eb6fc chore: format with Prettier 2023-10-20 15:52:10 +02:00
Pierre Donias
321e322492 feat(xo-server/clearHost): pass optional batch size arg (#7107)
Fixes #7105
See https://github.com/xapi-project/xen-api/issues/5202
See https://github.com/xapi-project/xen-api/pull/5203

`host.evacuate`: try passing optional batch size argument.
If not supported: remove it and try again.
2023-10-19 17:03:14 +02:00
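The retry pattern described above could look like this (a sketch under assumptions: the XAPI error code check and the `xapi.call` helper shape are illustrative):

```js
async function evacuateHost(xapi, hostRef, batchSize) {
  try {
    // newer XAPIs accept an optional batch size argument
    await xapi.call('host.evacuate', hostRef, batchSize)
  } catch (error) {
    // older XAPIs reject the extra argument: retry without it
    if (error.code !== 'MESSAGE_PARAMETER_COUNT_MISMATCH') {
      throw error
    }
    await xapi.call('host.evacuate', hostRef)
  }
}
```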
Mathieu
8834af65f7 feat(xo-server/xo-web/VM/new): VTPM creation (#7077)
See #7066
See #6802
See #7085
2023-10-19 16:48:56 +02:00
Mathieu
1a1dd0531d feat(xo-web/VM/advanced): VTPM management (#7085)
See #7066
See #6802
See #7074
2023-10-19 15:46:03 +02:00
Pierre Donias
8752487280 docs(installation): add nfs-common dependency for Debian/Ubuntu (#7108) 2023-10-18 22:50:29 +02:00
Pierre Donias
4b12a6d31d fix(xo-server-usage-report): handle null and nested stats (#7092)
Introduced by 083483645e

Fixes Zammad#18120
Fixes Zammad#18266

- Always assume that data can be `null`
- Handle edge cases where all values are `null`
- Properly handle nested RRD collections: collections have different depths (`memory`: 1, `cpus[0]`: 2, `pifs.rx[0]`: 3, ...). This PR replaces `getLastDays` which wouldn't handle those depths properly, with `getDeepLastValues` which is run on the whole stat object and doesn't assume the depth of the collections. It finds any Array at any depth and slices it to only keep the last N values.
2023-10-18 22:50:08 +02:00
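The behavior described for `getDeepLastValues` can be sketched as a short recursion (an illustration of the idea, not the plugin's exact code):

```js
function getDeepLastValues(stats, n) {
  if (Array.isArray(stats)) {
    return stats.slice(-n) // keep only the last n values
  }
  if (stats !== null && typeof stats === 'object') {
    const result = {}
    for (const key of Object.keys(stats)) {
      result[key] = getDeepLastValues(stats[key], n)
    }
    return result
  }
  return stats // leaves (numbers, null, …) pass through unchanged
}

// works whatever the depth: memory (1), cpus[0] (2), pifs.rx[0] (3), …
getDeepLastValues({ memory: [1, 2, 3, 4], cpus: { 0: [5, 6, 7, 8] } }, 2)
// => { memory: [3, 4], cpus: { 0: [7, 8] } }
```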
Julien Fontanet
2924f82754 fix(xo-web): don't sign out on connection error (#7103)
May fix zammad#17717

Introduced by 005ab47d9
2023-10-18 18:07:16 +02:00
Pierre Donias
9b236a6191 fix(netbox/test): test custom fields first (#7104)
More atomic and it makes more sense for users to check that the Netbox
configuration is correct before doing any write operations
2023-10-18 11:56:10 +02:00
Julien Fontanet
a3b8553cec fix(xo-server,xo-web): fix total number of VDIs to coalesce (#7098)
Fixes #7016

Summing all chains does not take common chains into account, so the total must be computed on the server side.
2023-10-18 11:52:43 +02:00
Pierre Donias
00a1778a6d feat(lite): set color-scheme CSS property to "dark" in dark mode (#7101) 2023-10-17 16:50:13 +02:00
MlssFrncJrg
3b6bc629bc fix(xo-web/home): fix misaligned descriptions (#7090) 2023-10-16 15:53:35 +02:00
Pierre Donias
04dfd9a02c fix(xo-server-usage-report): use @xen-orchestra/log to log errors (#7096)
Fixes Zammad#14579
Fixes Zammad#18183

Better handles error objects with a circular structure and avoids "Converting
circular structure to JSON" error on stringify
2023-10-16 10:07:57 +02:00
Pierre Donias
fb52868074 fix(xo-server/patching): always check that XS credentials are configured on XS (#7093)
Introduced by a30d962b1d
2023-10-13 16:49:04 +02:00
Pierre Donias
77d53d2abf fix(xo-server/patching): always pass xsCredentials to installPatches on XS (#7089)
Fixes Zammad#18284

Introduced by a30d962b1d
2023-10-13 11:45:17 +02:00
Julien Fontanet
6afb87def1 feat(xo-server/vm.set): support xenStoreData
Fixes #7055
2023-10-13 11:26:48 +02:00
Mathieu
8bfe293414 feat(lite/VM): add copy, snapshot single action (#7087) 2023-10-12 11:09:11 +02:00
Mathieu
2e634a9d1c feat(xapi/VTPM): ability to create, destroy VTPM (#7074) 2023-10-12 09:19:38 +02:00
Pierre Donias
bea771ca90 fix(xo-server/RPU): do not migrate VM back if already on host (#7071)
See https://xcp-ng.org/forum/topic/7802
2023-10-11 16:16:44 +02:00
Pierre Donias
99e3622f31 feat(xo-web/SelectPif): show network name (#7081)
See Zammad#17381
2023-10-10 15:59:24 +02:00
Pizzosaure
a16522241e docs(netbox): remove extra backtick (#7083)
Introduced by 3b3f927e4b
2023-10-10 14:14:15 +02:00
Julien Fontanet
b86cb12649 chore(yarn.lock): update dev deps 2023-10-09 17:06:54 +02:00
Julien Fontanet
2af74008b2 feat(xo-server-backup-reports): errors are logged as XO tasks 2023-10-09 09:35:24 +02:00
Julien Fontanet
2e689592f1 feat(xo-server-backup-reports): error when transports not enabled 2023-10-09 09:35:24 +02:00
Julien Fontanet
3f8436b58b fix(xo-server/authenticateUser): use clearLogOnSuccess
This fixes success logs not deleted due to race conditions.
2023-10-09 09:35:24 +02:00
Julien Fontanet
e3dd59d684 feat(mixins/Tasks#create): clearLogOnSuccess option 2023-10-09 09:35:24 +02:00
mathieuRA
549d9b70a9 feat(xo-web/host): allow to force smartReboot 2023-10-06 16:52:26 +02:00
mathieuRA
3bf6aae103 feat(xapi/host_smartReboot): ability to bypass blocked operations 2023-10-06 16:52:26 +02:00
Julien Fontanet
afb110c473 fix(fs/rmtree): fix huge memory usage (#7073)
Fixes zammad#15258

This adds a sane concurrency limit of 2 per depth level.

Co-authored-by: Florent BEAUCHAMP <florent.beauchamp@vates.fr>
2023-10-06 09:52:11 +02:00
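A simplified sketch of a per-depth-level concurrency limit (hypothetical code, not the `@xen-orchestra/fs` implementation): each directory level only deletes 2 entries at a time, so memory no longer grows with the width of the tree.

```js
import { readdir, rmdir, unlink } from 'node:fs/promises'
import { join } from 'node:path'

async function rmtree(dir, concurrency = 2) {
  const entries = await readdir(dir, { withFileTypes: true })
  // process the entries of this level in chunks of `concurrency`
  for (let i = 0; i < entries.length; i += concurrency) {
    await Promise.all(
      entries.slice(i, i + concurrency).map(entry => {
        const path = join(dir, entry.name)
        return entry.isDirectory() ? rmtree(path, concurrency) : unlink(path)
      })
    )
  }
  await rmdir(dir) // directory is empty at this point
}
```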
Pierre Donias
8727c3cf96 docs(patches): update URLs that need to be accessible from XOA (#7075) 2023-10-05 09:45:50 +02:00
Julien Fontanet
b13302ddeb fix(xen-api/cli): don't run default export when imported by ESM
Fix a bug in `@xen-orchestra/xapi` introduced by c3e0308ad

`module.parent` is `null` when the module is the entry point but `undefined` when imported via ESM.
2023-10-04 10:06:17 +02:00
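The distinction the fix relies on is a CommonJS subtlety worth spelling out; a strict `null` check identifies the entry point (sketch, `main` is a placeholder):

```js
function main() {
  console.log('running as a CLI')
}

// module.parent is:
// - a Module object when require()d by another CommonJS module
// - null when this file is the process entry point
// - undefined when loaded via ESM import (or the REPL)
if (module.parent === null) {
  main() // only run the CLI when executed directly
}
```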
Julien Fontanet
e89ed06314 docs(installation): Node 18 required
XO is not compatible with Node > 18 for the moment; as Node 20 will
likely graduate to LTS soon, the docs must explicitly recommend Node 18.
2023-10-04 09:25:37 +02:00
Malcolm Scott
e3f57998f7 fix(signin): try to preserve current page across reauthentication (#7013)
If an authentication session expires or is lost for whatever reason, XO redirects to `/signin`.  This redirect generally preserves the URL fragment (hash) which contains the page selected prior to reauthentication, i.e. if the user had been in settings/servers just beforehand, they end up at `/signin#settings/servers`.  However, currently when they log back in they end up on the home page; the page they were on is forgotten.

This commit tries to send the user back to the page they were viewing before reauthentication, by preserving the URL fragment in the login form action / by appending it to the links to authentication plugins.  (Not all authentication plugins will necessarily preserve it internally, but we can optimistically try it and see; at worst the old behaviour will remain.)
2023-10-03 12:39:57 +02:00
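In browser terms, the approach amounts to something like this (selectors and paths are illustrative, not the actual xo-web markup):

```js
const { hash } = window.location // e.g. '#settings/servers'

// keep the fragment on the login form target…
const form = document.querySelector('form#signin')
form.action += hash

// …and append it to the links to the authentication plugins
for (const link of document.querySelectorAll('a.auth-plugin')) {
  link.href += hash
}
```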
Julien Fontanet
8cdb5ee31b chore: update dev deps 2023-10-03 11:24:51 +02:00
Pierre Donias
5b734db656 feat(lite): 0.1.4 (#7068) 2023-10-03 10:26:05 +02:00
rbarhtaoui
e853f9d04f feat(lite): display loading icon and error message when data is not fetched (#6775) 2023-10-03 10:03:44 +02:00
Mathieu
2a5e09719e feat(lite/login): add remember me checkbox (#7030) 2023-10-03 10:01:07 +02:00
Pierre Donias
3c0477e0da feat: release 5.87.0 (#7064) 2023-09-29 11:35:23 +02:00
Pierre Donias
060d1c5297 feat: technical release (#7063) 2023-09-29 10:01:45 +02:00
Julien Fontanet
55dd7bfb9c feat(backups): don't snapshot migrating VMs
Related to zammad#16108
2023-09-28 17:42:43 +02:00
Julien Fontanet
b00cf13029 feat(backups): block snapshot migration during backup
Related to zammad#16108
2023-09-28 17:42:43 +02:00
Julien Fontanet
73755e4ccf feat(xo-server/authenticateUser): log failed attempts
Related to zammad#16318
2023-09-28 17:38:57 +02:00
Julien Fontanet
a1bd96da6a feat(mixins/Tasks#create): allow any properties 2023-09-28 17:38:57 +02:00
mathieuRA
0e934c1413 feat(xo-web/host/advanced): display system disks health 2023-09-28 17:14:09 +02:00
Florent BEAUCHAMP
eb69234a8e feat(xo-server/host): implement smartctl api call 2023-09-28 17:14:09 +02:00
mathieuRA
7659d9c0be fix(xo-web/host/advanced): catch error for ACLs users on hyper threading plugin
It broke the componentDidMount method and didn't update the state correctly.
2023-09-28 17:14:09 +02:00
Florent BEAUCHAMP
2ba81d55f8 fix(vhd-lib/test): collision during tests (#7062)
multiple tests use the same temporary files
2023-09-28 16:49:00 +02:00
Gabriel Gunullu
2e1abad255 feat(xapi/VDI_importContent): add SR name_label to task name_label (#6979) 2023-09-28 16:10:29 +02:00
Julien Fontanet
c7d5b4b063 fix(xo-web/messages): clarify *forget tokens* description
Introduced by c7df11cc6
2023-09-28 15:41:10 +02:00
Julien Fontanet
cc5f4b0996 fix(xo-web/messages): connection token → authentication token
Unify naming.
2023-09-28 15:41:06 +02:00
Julien Fontanet
55f627ed83 chore: fix formatting
Introduced by 869f7ffab
2023-09-28 15:37:45 +02:00
Florent BEAUCHAMP
988179a3f0 fix(xo-server): add MBR for cloud-init only for Windows VMs (#7050)
Fixes zammad#16808
2023-09-28 09:09:13 +02:00
Julien Fontanet
ce617e0732 fix(xo-server/host.restart): make force default to false
Introduced by 5ee11c7b6
2023-09-27 17:39:10 +02:00
Florent BEAUCHAMP
f0f429a473 fix(xo-server-backup-report): send report for Mirror Backup (#7049) 2023-09-27 16:39:27 +02:00
Thierry Goettelmann
bb6e158301 feat(lite): host patches (#6709) 2023-09-27 11:44:03 +02:00
Pierre Donias
7ff304a042 feat: technical release (#7058) 2023-09-27 11:30:16 +02:00
Julien Fontanet
7df1994d7f fix(xo-server/sr.getAllUnhealthyVdiChainsLength): require admin permission
Introduced by 0975863d9
2023-09-27 10:37:30 +02:00
Mathieu
a3a2fda157 feat(lite/pool/VMs): ability to snapshot selected VMs (#7021) 2023-09-26 17:28:15 +02:00
Thierry Goettelmann
d8530f9518 chore(lite): update changelog (#7057)
Fixes [#7040](https://github.com/vatesfr/xen-orchestra/pull/7040)
2023-09-26 17:13:29 +02:00
Thierry Goettelmann
d3062ac35c feat(lite/pool/VMs): ability to migrate selected VMs (#7040) 2023-09-26 17:06:00 +02:00
Thierry Goettelmann
b11f11f4db feat(lite): rework modal system (#6994) 2023-09-26 16:25:23 +02:00
Thierry Goettelmann
79d48f3b56 feat(lite/xapi): update XenApi types and enums (#7018) 2023-09-26 15:19:33 +02:00
Pierre Donias
869f7ffab0 feat(xo-web/XOA/Support): button to restart xo-server service (#7056) 2023-09-26 14:35:17 +02:00
Julien Fontanet
6665d6a8e6 chore: format with Prettier 2023-09-26 14:34:47 +02:00
Pierre Donias
8eb0bdbda7 feat(xo-server,xo-web/SR): reclaim space (#7054)
Fixes #1204
2023-09-26 14:21:43 +02:00
Mathieu
710689db0b feat(xo-web/home/host,pool): display product brand and version (#7027) 2023-09-26 11:16:08 +02:00
mathieuRA
801eea7e75 feat(xo-web/host/advanced): confirmation modal for download system logs 2023-09-26 11:10:22 +02:00
Julien Fontanet
7885e1e6e7 feat(xo-web/host/advanced): button to download system logs
Fixes #3968
2023-09-26 11:10:22 +02:00
Julien Fontanet
d384c746ca feat(xo-server/rest-api): export host audit and system logs
See #3968
2023-09-26 11:10:22 +02:00
Pierre Donias
a30d962b1d feat(xo-server,xo-web/patching): support new XS Updates system (#7044)
See Zammad#13416

Support for new XenServer Updates system with authentication:
- User downloads Client ID JSON file from XenServer account
- User uploads it to XO in their user preferences
- XO uses `username` and `apikey` from that file to authenticate and download updates
2023-09-26 10:29:07 +02:00
Pierre Donias
b6e078716b docs(users/auth): update GitHub plugin screenshots (#7035) 2023-09-25 16:10:00 +02:00
Julien Fontanet
34b69c7ee8 chore: refresh yarn.lock
Introduced by 90e0f2684
2023-09-25 09:10:54 +02:00
Julien Fontanet
70bf8d9620 fix(xo-web/kubernetes): handle empty search domains field
Do not send `['']` if empty.
2023-09-25 09:08:33 +02:00
Florent BEAUCHAMP
c8bfda9cf5 fix(xo-vmdk-to-vhd): handle ova with disk position collision (#7051)
Some OVAs have multiple disks with the same position, which prevents the VM from being created (error while creating the VBD). Renumbering the problematic disks works around the issue.

This may lead to an unbootable VM if the renumbered disk was the bootable one (VMware-VirtualSAN-Witness-7.0.0-15843807.ova for example).

Fixes #7046
2023-09-22 11:44:12 +02:00
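The renumbering workaround can be sketched as follows (illustrative, assuming each disk carries a numeric `position`): a colliding disk is moved to the next free position.

```js
function renumberCollidingDisks(disks) {
  const used = new Set()
  for (const disk of disks) {
    while (used.has(disk.position)) {
      disk.position += 1 // renumber the colliding disk
    }
    used.add(disk.position)
  }
  return disks
}
```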
Gabriel Gunullu
1eb4c20844 fix(xo-web/kubernetes): remove required property from search domain (#7028)
Make this field optional for the cluster creation.
2023-09-22 09:46:13 +02:00
Florent BEAUCHAMP
e5c5f19219 fix(backups): mirrors must not replicate themselves (#7043)
Fixes zammad#16871
2023-09-21 14:45:29 +02:00
Florent BEAUCHAMP
db92f0e365 fix(vhd-lib): VhdFile implementation is not compatible with encrypted remotes (#7045) 2023-09-21 11:18:44 +02:00
Adocentyn
570de7c0fe feat: add licenses (#7042) 2023-09-21 10:28:31 +02:00
Florent BEAUCHAMP
90e0f26845 fix(xo-server): ova export with files bigger than 8.2GB (#7047)
Following 15f69a1

Updating tar-stream to latest version fixes support of files bigger than 8.2GB
2023-09-20 17:17:45 +02:00
Julien Fontanet
c714bc3518 fix(stream-reader): requires Node >=12.3 2023-09-18 09:51:14 +02:00
Julien Fontanet
48e0acda32 chore: update dev deps 2023-09-18 09:43:13 +02:00
Thierry Goettelmann
013cdbcd96 fix(lite/composable): useSubscriber is disabled by default (#7041) 2023-09-15 12:01:20 +02:00
Julien Fontanet
fdd886f213 chore(xo-web/jobs): use set for user ids 2023-09-15 11:05:23 +02:00
Julien Fontanet
de70ef3064 chore(xo-web/jobs): use addSubscriptions for all subs 2023-09-15 11:05:23 +02:00
Julien Fontanet
9142a95f79 feat(xo-web/addSubscriptions): support initial values 2023-09-15 11:05:23 +02:00
Julien Fontanet
1c6aebf997 fix(xo-web/jobs): make schedules a computed
Fixes #6968

The schedules did not appear if the jobs subscription triggered after the schedules one.

The logic has been moved to a computed depending on both subscriptions.
2023-09-15 11:05:23 +02:00
Julien Fontanet
7b9ec4b7a7 chore(xo-web/_getScheduleJob): remove unnecessary sort 2023-09-15 11:05:23 +02:00
Julien Fontanet
decb87f0c9 chore(xo-web/_getScheduleJob): explicit comparison 2023-09-15 11:05:23 +02:00
Julien Fontanet
e17470f56c chore(xo-web/_getScheduleJob): fix comment 2023-09-15 11:05:23 +02:00
Julien Fontanet
99ddbcdc67 fix(xo-web/_getScheduleJob): jobs can be undefined
Related to #6968
2023-09-15 11:05:23 +02:00
Pierre Donias
6953e2fe7b fix(xo-web/backup/mirror): submit button: "Edit" → "Save" (#7036) 2023-09-13 10:06:30 +02:00
Pierre Donias
beb1063ba1 fix(xo-server-auth-github): bad argument passed to registerUser2 (#7032)
Introduced by 562401ebe4
2023-09-12 11:39:55 +02:00
Pierre Donias
7773edd590 fix(xo-server-auth-google): bad argument passed to registerUser2 (#7031)
Introduced by 91b19d9bc4
See https://xcp-ng.org/forum/topic/7729
2023-09-12 11:21:28 +02:00
Julien Fontanet
0104649b84 fix(xo-server/importVmBackupNg): set result when restoring via XO Proxy (#7026) 2023-09-10 18:32:34 +02:00
Pierre Donias
1c9d1049e0 fix(xo-web/render-xo-item/PIF): hide parenthesis if no info inside (#7022)
See Zammad#17381
2023-09-08 10:45:28 +02:00
Pierre Donias
d992a4cb87 feat(netbox): don't delete VMs and interfaces that don't have a UUID (#7008)
See https://xcp-ng.org/forum/topic/7639

In an effort of not deleting or overwriting useful data that has been added
manually by the user, this reverts the feature of deleting VMs and interfaces
that are not bound to an XO object via their custom field UUID. Such objects:
- shouldn't exist in normal use cases anyway
- aren't an issue for the Netbox sync
- are easy to clean manually
2023-09-08 10:36:16 +02:00
Julien Fontanet
52114ad4b0 docs(backup_troubleshooting): unexpected key/full (#7023)
Co-authored-by: Jon Sands <fohdeesha@gmail.com>
2023-09-08 09:56:21 +02:00
Julien Fontanet
bcc62cfcaf feat: release 5.86.1 2023-09-07 16:45:50 +02:00
Julien Fontanet
60434b136a feat(xo-web): 5.124.1 2023-09-06 16:55:52 +02:00
Julien Fontanet
13f3c8851d feat(xo-server): 5.122.0 2023-09-06 16:55:33 +02:00
Julien Fontanet
f386f94dc2 feat(@xen-orchestra/proxy): 0.26.33 2023-09-06 16:52:02 +02:00
Julien Fontanet
fda1fd1a04 feat(xen-api): 1.3.6 2023-09-06 16:51:34 +02:00
Thierry
0b17bdd9bc fix(lite/types): type of ObjectLink component is broken 2023-09-06 09:03:32 +00:00
Thierry
2c5706a89b fix(lite/types): issue with createUseCollection typing in VSCode 2023-09-06 09:03:32 +00:00
rbarhtaoui
5448452b71 fix(xo-web): fix naming conflict for duplicate variables (#7019)
Introduced by c9244b2
2023-09-05 12:28:25 +02:00
Julien Fontanet
22e7c126e6 fix(xen-api): set hostnameRaw before creating transport
Fixes zammad#17423

Introduced by 158a8e14a

Fix XML-RPC transport.
2023-09-05 10:52:11 +02:00
Julien Fontanet
750fefe957 fix(xo-web): don't delete other user's auth tokens
Fixes zammad#17276
2023-09-05 10:39:33 +02:00
Julien Fontanet
025e671989 feat(xo-server/api): split token.delete into token.deleteOwn
So that the behavior is more consistent.
2023-09-05 10:39:33 +02:00
Thierry Goettelmann
df0ed5e794 feat(lite): implement useContext composable (#6991) 2023-09-05 10:05:42 +02:00
Manon Mercier
da45ace7c1 docs(manage_infrastructure): info about Dashboard/Health (#7003)
Related to #5678 

Co-authored-by: Jon Sands <fohdeesha@gmail.com>
2023-09-04 15:48:18 +02:00
Manon Mercier
2a623b8ae7 docs(manage_infrastructure): manage dom0 memory (#6916) 2023-09-04 15:26:55 +02:00
Pierre Donias
f034ec45f3 feat(lite): 0.1.3 (#7011) 2023-09-01 13:42:38 +02:00
Pierre Donias
970bc0ac5d fix(lite/stories): bad import path for POWER_STATE (#7012)
Introduced by 5e8539865f
2023-09-01 11:07:49 +02:00
Mathieu
3abbc8d57e feat: release 5.86.0 (#7010) 2023-08-31 14:53:04 +02:00
Mathieu
06570d78a0 feat: technical release (#7009) 2023-08-31 10:24:11 +02:00
Florent BEAUCHAMP
6a0df7aec2 feat(fs/s3): retry on failures (#6966) 2023-08-31 09:51:28 +02:00
Julien Fontanet
30aeb95f3a fix(xo-server-audit): ignore more side-effects free methods 2023-08-31 09:26:23 +02:00
Julien Fontanet
36d6d53a26 fix(xo-web/jobs/schedules): order jobs by name
Fixes https://xcp-ng.org/forum/post/64825
2023-08-30 22:27:33 +02:00
Thierry Goettelmann
895773b6c6 feat(lite): add alarms to pool dashboard (#6976)
* feat(lite): new iteration for XenApi, stores and subscriptions

* feat(lite/xen-api): enhance XenApi typings and utils

* feat(lite): add subscription dependencies for xen-api records stores

* feat(lite/xen-api): use generics for XenApiEvent

* feat(lite/xen-api): add load error handling

* feat(lite/store): simplify alarm store onRemove

* feat(lite): add alarms to pool dashboard

* feat(lite/alarms): rename type

* feat(lite/object-link): merge useStore and routeName configs + better naming

* feat(lite/object-link): typing enhancement + loader

* feat(lite): feedback on pool dashboard alarms

* feedback
2023-08-30 17:22:41 +02:00
Pierre Donias
8ebc0dba4f feat(netbox): primary IP: fallback to next IPs in the list of addresses
Fixes #6978
2023-08-30 17:02:46 +02:00
Pierre Donias
006f12f17f feat(netbox): do not throw when IP cannot be parsed
See https://xcp-ng.org/forum/topic/7625
2023-08-30 17:02:46 +02:00
Pierre Donias
b22239804a fix(netbox): properly remove deleted Netbox IPs from local collection 2023-08-30 17:02:46 +02:00
Pierre Donias
afd174ca21 feat(netbox): handle empty collections in request() method 2023-08-30 17:02:46 +02:00
Pierre Donias
27c6c1b896 chore(netbox): use lodash find/filter where relevant 2023-08-30 17:02:46 +02:00
Florent BEAUCHAMP
311b420b74 feat(backups): expose preferNbd setting per backup job (#6995) 2023-08-30 16:59:36 +02:00
Julien Fontanet
e403298140 fix(CHANGELOG.unreleased): add missing xo-server
Introduced by 9c7fd94a9
2023-08-30 16:13:50 +02:00
Julien Fontanet
9c7fd94a9b fix(xo-server/rest-api): limit applies at the end for {backups,restore}/logs
Fixes https://xcp-ng.org/forum/post/64880
2023-08-30 16:02:42 +02:00
Mathieu
8cdae83150 feat: technical release (#7007) 2023-08-30 10:37:46 +02:00
Pierre Donias
5b1cc7415e fix(xo-server/normalizeVmNetworks): always assume multiple space-delimited IPs (#6990)
See https://xcp-ng.org/forum/topic/7625
2023-08-30 09:56:08 +02:00
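The assumption is simple to express in code (a sketch, not the actual `normalizeVmNetworks`): every value in the guest metrics `networks` map may carry several space-delimited addresses.

```js
function collectAddresses(networks) {
  const addresses = []
  // e.g. networks = { '0/ip': '10.0.0.1 10.0.0.2', '0/ipv4/0': '10.0.0.1' }
  for (const value of Object.values(networks)) {
    for (const address of value.split(/\s+/)) {
      if (address !== '') {
        addresses.push(address)
      }
    }
  }
  return addresses
}
```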
Florent BEAUCHAMP
f5d3bc1f2d feat(backups): merge worker concurrency (#6965) 2023-08-30 09:38:37 +02:00
Julien Fontanet
ba81d0e08a feat(xo-server-transport-email): add local hostname to config (#6988)
Fixes https://xcp-ng.org/forum/topic/7579/
2023-08-29 23:22:31 +02:00
Pierre Donias
3b3f927e4b docs(netbox): steps and labels better match Netbox UI (#6986)
See https://xcp-ng.org/forum/topic/7625
2023-08-29 15:42:17 +02:00
Thierry Goettelmann
5e8539865f feat(lite): new iteration for XenApi, stores and subscriptions (#6998) 2023-08-29 15:23:18 +02:00
Florent BEAUCHAMP
3a3fa2882c fix(backup/healthcheck): mirror backup appeared detached (#7000) 2023-08-29 14:52:44 +02:00
Thierry Goettelmann
3baa37846e feat(lite/stories): allow to organize stories in subdirectories (#6992) 2023-08-21 11:12:16 +02:00
Mathieu
999fba2030 feat(xo-web/pool/advanced): ability to set a crash dump SR (#6973)
Fixes #5060
2023-08-18 15:34:05 +02:00
Mathieu
785a5857ef fix(xapi/host_smartReboot): resume VMs after enabling host (#6980)
Found when investigating https://xcp-ng.org/forum/post/60372
2023-08-17 16:22:35 +02:00
Thierry Goettelmann
067f4ac882 feat(lite): new XenApi records collection system (#6975) 2023-08-17 15:22:33 +02:00
Julien Fontanet
8a26e08102 feat(xo-server/rest-api): filter/limit support for {backups/restore}/logs
Fixes https://xcp-ng.org/forum/post/64789
2023-08-17 13:59:32 +02:00
Julien Fontanet
42aa202f7a fix(xo-server/job.set): accept userId
Fixes https://xcp-ng.org/forum/post/64668
2023-08-16 15:13:00 +02:00
Julien Fontanet
403d2c8e7b fix(mixins/Tasks): behave when no user connected to API
Introduced by 1ddbe87d0
2023-08-11 11:27:08 +02:00
Julien Fontanet
ad46bde302 feat(backups/XO metadata): transfer binary config in base64 2023-08-10 15:39:34 +02:00
Julien Fontanet
1b6ec2c545 fix(xo-web/home): don't search in linked objects (#6881)
Introduced by 5928984069

For instance, searching the UUID of a running VM was showing all other VMs on the same host due to the UUID being present in their `container.residentVms`.
2023-08-10 14:42:07 +02:00
Julien Fontanet
56388557cb fix(xo-server): increase timeout when file restore via XO Proxy
Related to zammad#13396
2023-08-10 11:37:08 +02:00
Julien Fontanet
1ddbe87d0f feat(mixins/Tasks): inject userId in tasks 2023-08-09 16:18:29 +02:00
Pierre Donias
3081810450 feat(xo-server-netbox): synchronize VM tags
Fixes #5899
See Zammad#12478
See https://xcp-ng.org/forum/topic/6902
2023-08-08 15:23:57 +02:00
Pierre Donias
155be7fd95 fix(netbox): add missing trailing / in URL 2023-08-08 15:23:57 +02:00
Pierre Donias
ef960e94d3 chore(netbox): namespace all XO objects as xo* 2023-08-08 15:23:57 +02:00
Pierre Donias
bfd99a48fe chore(netbox): namespace all Netbox objects as nb* 2023-08-08 15:23:57 +02:00
Florent BEAUCHAMP
a13fda5fe9 fix(backups/_MixinXapiWriter): typo _heathCheckSr → _healthCheckSr (#6969)
Fix `TypeError: Cannot read properties of undefined (reading 'uuid') at #isAlreadyOnHealthCheckSr`
2023-08-08 09:48:53 +02:00
Florent BEAUCHAMP
66bee59774 fix(xen-api/getResource): don't fail silently when HTTP request fails without response (#6970)
Seen while investigating zammad#16309
2023-08-08 09:39:18 +02:00
Julien Fontanet
685400bbf8 fix(xo-server): fix get-stream@3 usage
Fixes #6971

Introduced by 3dca7f2a7
2023-08-08 08:05:38 +02:00
Julien Fontanet
5bef8fc411 fix(lite): disable linting because it's broken
Introduced by 3dca7f2a7
2023-08-05 17:05:02 +02:00
Julien Fontanet
aa7ff1449a fix(lite): adapt ESLint config to prettier@3
Introduced by 3dca7f2a7
2023-08-04 22:09:55 +02:00
Julien Fontanet
3dca7f2a71 chore: update deps 2023-08-03 17:56:24 +02:00
Julien Fontanet
3dc2f649f6 chore: format with Prettier 2023-08-03 17:56:24 +02:00
Julien Fontanet
9eb537c2f9 chore: update dev deps 2023-08-03 17:56:24 +02:00
Thierry Goettelmann
dfd5f6882f feat(lite): enhance typings for improved type safety (#6949) 2023-08-03 11:33:29 +02:00
Julien Fontanet
7214016338 fix(xo-server/_authenticateUser): don't use registerUser()
Introduced by 99605bf18
2023-08-03 10:25:15 +02:00
Julien Fontanet
606e3c4ce5 docs(xo-server-test-plugin): explain configurationPresets 2023-08-03 10:21:15 +02:00
Julien Fontanet
fb04d3d25d docs(xo-server-test-plugin): show title/description for settings 2023-08-03 10:20:51 +02:00
Julien Fontanet
db8c042131 fix(xo-web/plugins): merge preset with existing config
Instead of replacing it.
2023-08-03 10:14:07 +02:00
Julien Fontanet
fd9005fba8 fix(xo-web/plugins): don't disable presets when config not edited 2023-08-03 10:12:52 +02:00
Julien Fontanet
2d25413b8d fix(xo-server-auth-ldap): mark userIdAttribute as required
It can no longer be ommited since 99605bf18
2023-08-03 09:56:33 +02:00
Julien Fontanet
035679800a chore(xo-server-auth-ldap): defaults are merged automatically by xo-server
Related to 8c7d25424
2023-08-03 09:53:01 +02:00
Thierry Goettelmann
abd0a3035a feat(lite/component): created UiResources + UiResource (#6932) 2023-08-01 11:18:10 +02:00
Julien Fontanet
d307730c68 feat: release 5.85.0 2023-07-31 17:06:25 +02:00
Julien Fontanet
1b44de4958 feat(xo-server): 5.120.2 2023-07-31 16:52:03 +02:00
Julien Fontanet
ec78a1ce8b feat(xo-web): 5.122.2 2023-07-31 16:32:42 +02:00
Julien Fontanet
19c82ab30d feat(xo-server): 5.120.1 2023-07-31 16:32:41 +02:00
Julien Fontanet
9986f3fb18 fix(xo-web/removeUserAuthProvider): notify on error
Introduced by 52cf2d151
2023-07-31 16:29:09 +02:00
Julien Fontanet
d24e9c093d fix(xo-server/updateUser): fix current user auth protection
Introduced by 2d52aee95
2023-07-31 16:28:16 +02:00
Julien Fontanet
70c8b24fac feat(xo-web): 5.122.1 2023-07-31 15:58:15 +02:00
Julien Fontanet
9c9c11104b feat(xo-server-auth-google): 0.3.0 2023-07-31 15:58:05 +02:00
Julien Fontanet
cba90b27f4 feat(xo-server-auth-github): 0.3.0 2023-07-31 15:57:44 +02:00
Julien Fontanet
46cbced570 feat(xo-server): 5.120.0 2023-07-31 15:56:48 +02:00
Julien Fontanet
52cf2d1514 feat(xo-web/settings/users): auth providers can be removed 2023-07-31 15:48:49 +02:00
Julien Fontanet
e51351be8d feat(xo-server/api): user.removeAuthProvider 2023-07-31 15:48:49 +02:00
Julien Fontanet
2a42e0ff94 feat(xo-web/users): display users' auth providers
Related to zammad#16318
2023-07-31 15:48:49 +02:00
Julien Fontanet
3a824a2bfc fix(xo-server/updateUser): check password xor auth providers 2023-07-31 15:48:49 +02:00
Julien Fontanet
fc1c809a18 fix(xo-server): remove password when signing in with provider
Password can no longer be used/edited when an auth provider is registered.

For security reasons, this unused password should be removed from the database.
2023-07-31 15:48:49 +02:00
Julien Fontanet
221cd40199 fix(xo-server/updateUser): can remove password 2023-07-31 15:48:49 +02:00
Julien Fontanet
aca19d9a81 fix(xo-server): user password disabled when auth providers are associated 2023-07-31 15:48:49 +02:00
Julien Fontanet
0601bbe18d fix(xo-server/recover-account): remove all auth providers 2023-07-31 15:48:49 +02:00
Julien Fontanet
2d52aee952 fix(xo-server/updateUser): can remove all auth providers with null 2023-07-31 15:48:49 +02:00
Julien Fontanet
99605bf185 feat(xo-server/registerUser): completely disable 2023-07-31 15:48:49 +02:00
Julien Fontanet
91b19d9bc4 feat(xo-server-auth-google): use registerUser2 2023-07-31 15:48:49 +02:00
Julien Fontanet
562401ebe4 feat(xo-server-auth-github): use registerUser2 2023-07-31 15:48:49 +02:00
Julien Fontanet
6fd2f2610d fix(xo-web/new-vm): don't send device in VIFs
Introduced by 6ae19b064

Fixes #6960
2023-07-31 09:34:30 +02:00
Gabriel Gunullu
6ae19b0640 fix(xo-web/new-vm): list VIFs ordered by device (#6944)
Fixes zammad#15920
2023-07-28 18:51:48 +02:00
Pierre Donias
6b936d8a8c feat(lite): 0.1.2 (#6958) 2023-07-28 17:37:07 +02:00
Thierry Goettelmann
8f2cfaae00 feat(lite): open console in new window (#6868)
Add a link to open the console in a new window.
2023-07-28 14:04:06 +02:00
Thierry Goettelmann
5c215e1a8a feat(lite/console): rework VM console page (#6863)
Rework the VM Console page to be better aligned with Figma mockup.
- Spinner while loading the console
- Added the "monitor" image with correct message when VM is powered off
- Better screen space usage
2023-07-28 11:39:33 +02:00
Pierre Donias
e3cb98124f feat: technical release (#6956) 2023-07-28 10:05:26 +02:00
Julien Fontanet
90c3319880 feat(xo-web/backup/file-restore): add export format selection 2023-07-27 17:22:58 +02:00
Julien Fontanet
348db876d2 feat(xo-server/backupNg.fetchFiles): add format param 2023-07-27 17:22:58 +02:00
Julien Fontanet
408fd7ec03 feat(proxy/backup.fetchPartitionFiles): add format param 2023-07-27 17:22:58 +02:00
Julien Fontanet
1fd84836b1 feat(backups/fetchPartitionFiles): add tgz (tar+gzip) support
Around 6 times faster than ZIP export.
2023-07-27 17:22:58 +02:00
Julien Fontanet
522204795f fix(backups/fetchPartitionFiles): rewrite ZIP creation
It's now sequential, which leads to better performance and lower memory consumption.

Empty directories are now included and all entries have correct mode and modification time.
2023-07-27 17:22:58 +02:00
Julien Fontanet
e29c422ac9 fix(xo-server/_handleHttpRequest): use pipeline between result and response
Properly closes one stream if the other is destroyed.
2023-07-27 17:22:58 +02:00
Florent BEAUCHAMP
152cf09b7e feat(vmware-explorer): handle sesparse files (#6909) 2023-07-27 17:15:29 +02:00
Pierre Donias
ff728099dc docs(netbox): update screenshot (#6955) 2023-07-27 17:13:57 +02:00
Mathieu
706d94221d feat(xo-server/pool/rpu): avoid unnecessary VMs migration (#6943) 2023-07-27 17:12:31 +02:00
Gabriel Gunullu
340e9af7f4 fix(backups): handle incremental replication to multiple SRs (#6811)
Fix matching previous replications when multiple SRs are used.

Fixes #6582
2023-07-27 17:09:15 +02:00
Pierre Donias
40e536ba61 feat(xo-server-netbox): synchronize VM platform (#6954)
See Zammad#12478
See https://xcp-ng.org/forum/topic/6902
2023-07-27 16:59:50 +02:00
Thierry Goettelmann
fd4c56c8c2 feat(lite/pool): add tasks to Pool Dashboard (#6713)
Other updates:
- Move pending/finished tasks logic to store subscription
- Add `count` prop to `UiCardTitle`
- Add "No tasks" message on Task table if empty
- Make the `finishedTasks` prop optional
- Add ability to have full width dashboard cards
2023-07-27 16:23:52 +02:00
Thierry Goettelmann
20d04ba956 feat(lite): dynamic page title (#6853)
See #6793

ℹ️ This PR adds a `pageTitleStore` which allows defining the current page title
according to 3 parts: an object, a string, and a count. Each part is optional.

The page title is **reactive** when the function argument is a `Ref`, a `Computed`
or a getter. For example, when updating a VM name, the page title will be
updated in every tab.

🪄 Each title part is automatically unset when the component that set it is
unmounted.
2023-07-27 11:41:33 +02:00
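A rough sketch of how such a store can combine the three optional parts with Vue reactivity (names and composition are assumptions, not XO Lite's actual store):

```js
import { computed, ref, unref, watchEffect } from 'vue'

const objectPart = ref() // e.g. a VM record with a name_label
const textPart = ref() // e.g. a tab name
const countPart = ref() // e.g. a number of items

const title = computed(() =>
  [unref(objectPart)?.name_label, unref(textPart), unref(countPart)]
    .filter(part => part !== undefined)
    .join(' · ')
)

// reactive: any change to a part updates the browser tab title
watchEffect(() => {
  document.title = title.value || 'XO Lite'
})
```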
Pierre Donias
3b1bcc67ae feat(xo-server-netbox): rewrite (#6950)
Fixes #6038, Fixes #6135, Fixes #6024, Fixes #6036
See https://xcp-ng.org/forum/topic/6070
See zammad#5695
See https://xcp-ng.org/forum/topic/6149
See https://xcp-ng.org/forum/topic/6332

Complete rewrite of the plugin. Main functional changes:
- Synchronize VM description
- Fix duplicated VMs in Netbox after disconnecting one pool
- Migrating a VM from one pool to another keeps VM data added manually
- Fix largest IP prefix being picked instead of smallest
- Fix synchronization not working if some pools are unavailable
- Better error messages
2023-07-27 10:07:26 +02:00
Julien Fontanet
1add3fbf9d fix(yarn.lock): refresh
Introduced by 1c23bd5ff
2023-07-26 13:36:28 +02:00
Julien Fontanet
97f0759de0 feat(mixins/Hooks): warning every 5s if listener still running
This helps diagnose issues when a hook is stuck.
2023-07-25 16:41:29 +02:00
Julien Fontanet
005ab47d9b fix(xo-web): clear token on authentication failure (#6937)
This prevents infinite refreshes when the token is deemed valid by the server
but the authentication failed for any reason.
2023-07-25 09:49:11 +02:00
Julien Fontanet
14a0caa4c6 fix(xo-web/xoa/licenses): fix message *go TO* 2023-07-25 09:43:11 +02:00
Florent BEAUCHAMP
1c23bd5ff7 feat(read-chunk/readChunkStrict): attach read chunk to error if small text (#6940) 2023-07-20 17:01:26 +02:00
Julien Fontanet
49c161b17a fix(xo-server,xo-web): send version when probing NFS SR
Reported by @benjamreis
2023-07-20 16:46:18 +02:00
Gabriel Gunullu
18dce3fce6 test(fs): fix wrong encryption (#6945) 2023-07-20 16:32:09 +02:00
Julien Fontanet
d6fc86b6bc chore(xo-server-transport-xmpp): remove old dep node-xmpp-client
Possibly fixes #6942
2023-07-20 10:54:52 +02:00
Florent BEAUCHAMP
61d960d4b1 fix(vmware-explorer): handle snapshot of 1TB+ disks 2023-07-20 10:25:28 +02:00
Florent BEAUCHAMP
02d3465832 feat(vmware-explorer): don't transform stream for raw import in thick mode 2023-07-20 10:25:28 +02:00
Florent BEAUCHAMP
4bbadc9515 feat(vmware-explorer): improve import
- use one stream instead of per-block queries if possible
- retry block reads on failure
- handle unaligned end block
2023-07-20 10:25:28 +02:00
Florent BEAUCHAMP
78586291ca fix(vmware-explorer): better disk size computation 2023-07-20 10:25:28 +02:00
Florent BEAUCHAMP
945dec94bf feat(vmware-explorer): retry connection to ESXi 2023-07-20 10:25:28 +02:00
Julien Fontanet
003140d96b test(nbd-client): fix issues introduced by conversion to ESM
Introduced by 7c80d0c1e
2023-07-19 23:09:48 +02:00
Julien Fontanet
363d7cf0d0 fix(node-vsphere-soap): add missing files
Introduced by f0c94496b
2023-07-19 23:02:34 +02:00
Julien Fontanet
f0c94496bf chore(node-vsphere-soap): convert to ESM
BREAKING CHANGE
2023-07-19 11:03:56 +02:00
Julien Fontanet
de217eabd9 test(nbd-client): fix issues introduced by conversion to ESM
Introduced by 7c80d0c1e
2023-07-19 11:02:12 +02:00
Julien Fontanet
7c80d0c1e1 chore(nbd-client): convert to ESM
BREAKING CHANGE
2023-07-19 10:46:05 +02:00
Julien Fontanet
9fb749b1db chore(fuse-vhd): convert to ESM
BREAKING CHANGE
2023-07-19 10:13:35 +02:00
Julien Fontanet
ad9c59669a chore: update dev deps 2023-07-19 10:11:30 +02:00
Julien Fontanet
76a038e403 fix(xo-web): fix doc link to incremental/key backup interval 2023-07-19 09:48:23 +02:00
Julien Fontanet
0e12072922 fix(xo-server/pool.mergeInto): fix IPv6 handling 2023-07-18 17:28:39 +02:00
Julien Fontanet
158a8e14a2 chore(xen-api): expose bracketless IPv6 as hostnameRaw 2023-07-18 17:28:38 +02:00
Julien Fontanet
0c97910349 chore(xo-server): remove unused _mounts property
Introduced by 5c9a47b6b
2023-07-18 11:14:24 +02:00
Florent BEAUCHAMP
8347ac6ed8 fix(xo-server/xapi-stats): simplify caching (#6920)
Following #6903

- change cache system per object => per host
- update cache at the beginning of the query to handle race conditions leading to duplicate requests
- remove concurrency limit (was leading to a huge backlog of queries, and response handling is quite fast)
2023-07-18 09:47:39 +02:00
Mathieu
996abd6e7e fix(xo-web/settings/config): wording fix for XO Config Cloud Backup (#6938)
See zammad#15904
2023-07-13 10:29:36 +02:00
rbarhtaoui
de8abd5b63 feat(lite/pool/vms): ability to export selected VMs as JSON file (#6911) 2023-07-13 09:30:10 +02:00
Julien Fontanet
3de928c488 fix(xo-server-audit): ignore mirrorBackup.getAllJobs 2023-07-12 21:52:51 +02:00
Mathieu
a2a514e483 feat(lite/stats): cache stats from rrd_update (#6781) 2023-07-12 15:30:05 +02:00
rbarhtaoui
ff432e04b0 feat(lite/pool/vms): export selected VMs as CSV file (#6915) 2023-07-12 14:34:16 +02:00
Julien Fontanet
4502590bb0 fix(xapi/VM_create): work-around HVM multiplier issues (#6935)
Fixes zammad#15189
2023-07-12 10:29:14 +02:00
Thierry Goettelmann
6d440a5af5 feat(lite/components): rewrite FormInputWrapper (#6918)
Rewrite `FormInputWrapper` and update `FormInput` to match Figma design.

Slotted input will change color according to passed `warning` and/or `error` message.
2023-07-12 10:20:33 +02:00
Julien Fontanet
0840b4c359 fix(xo-server/rest-api): VDI export via NBD 2023-07-12 10:18:45 +02:00
Julien Fontanet
696ee7dbe5 fix(CHANGELOG): add 5.84.0 highlights
Introduced by f32742225
2023-07-12 10:06:15 +02:00
Thierry Goettelmann
5e23e356ce chore(lite): bundling, dynamic import, optimizations (#6910)
Added dynamic imports for views and components.

Extracted to their own bundle:
- Vue related libs (vue, vue-router, pinia etc.)
- Lodash
- Charts

Removed `vite-plugin-pages` package.

Optimize highlight/markdown loading.
2023-07-12 10:05:09 +02:00
Thierry Goettelmann
c705051a89 chore(lite): use injection keys (#6898)
Using injection keys for `provide`/`inject` to prevent errors and code
repetition.
2023-07-11 14:56:03 +02:00
Julien Fontanet
ce2b918a29 fix(xo-server): xoData ESModule import
Introduced by c3e0308ad
2023-07-10 18:15:05 +02:00
Julien Fontanet
df740b1e8e test(backups): fix issues introduced by conversion to ESM
Introduced by 1005e295b
2023-07-10 16:58:12 +02:00
Julien Fontanet
c3e0308ad0 chore(xapi): convert to ESM
BREAKING CHANGE
2023-07-10 16:45:31 +02:00
Julien Fontanet
1005e295b2 chore(backups): convert to ESM
BREAKING CHANGE
2023-07-10 16:45:13 +02:00
Julien Fontanet
b3cf58b8c0 fix(complex-matcher): import specific lodash functions (#6904)
This reduces the bundled size when the library is used in a bundled app.
2023-07-10 16:01:54 +02:00
Julien Fontanet
2652c87917 feat(xo-web/backup/restore): can open raw log (#6936) 2023-07-07 11:20:49 +02:00
Thierry Goettelmann
9e0b5575a4 feat(lite/component): new component FormSection (#6926)
Can take a `collapsible` prop in conjunction with a `collapsed` prop.
If `collapsible` is set to `true`, the style is changed and clicking the section
header will toggle the content visibility.
Collapse status updates are sent via the `update:collapsed` event.
This allows using `collapsed` as a model, e.g.:
`<FormSection collapsible v-model:collapsed="collapsed">`
2023-07-06 15:55:05 +02:00
Thierry Goettelmann
56c089dc01 feat(lite/stories): identify models with an icon and tooltip (#6927)
When a prop is a model, add an indicator (icon + tooltip) to identify it as such.
2023-07-06 15:49:15 +02:00
Julien Fontanet
3b94da1790 docs(supported_hosts): clearer symbol to indicate EOL 2023-07-06 10:41:52 +02:00
Julien Fontanet
ec39a8e9fe docs(supported_hosts): issues on XenServer 7.2 2023-07-06 10:39:16 +02:00
Julien Fontanet
6339f971ca docs(supported_hosts): Citrix Hypervisor → XenServer
The name has been reverted to XenServer.
2023-07-06 10:33:21 +02:00
Pierre Donias
2978ad1486 feat(lite): 0.1.1 (#6930) 2023-07-03 15:58:17 +02:00
Julien Fontanet
c0d6dc48de feat(xo-web/XO tasks): better display of start date and duration 2023-07-01 10:30:44 +02:00
Julien Fontanet
f327422254 feat: release 5.84.0 2023-06-30 20:09:44 +02:00
Julien Fontanet
938d15d31b feat(xo-web): 5.121.0 2023-06-30 19:22:38 +02:00
Julien Fontanet
5ab1ddb9cb feat(xo-server): 5.118.0 2023-06-30 19:20:29 +02:00
Mathieu
01302d7a60 feat(xo-web/settings/config): cloud backup (#6917) 2023-06-30 19:09:56 +02:00
Julien Fontanet
c68630e2d6 feat(xo-server/rest-api): provide a way to extend it 2023-06-30 18:19:09 +02:00
Julien Fontanet
db082bfbe9 fix(xo-server/rest-api): handle ids that are numbers instead of strings 2023-06-30 18:19:09 +02:00
Julien Fontanet
650d88db46 feat(xo-server/configurePlugin): can update instead of replace existing config 2023-06-30 18:19:09 +02:00
Julien Fontanet
7d1ecca669 feat(xo-server): consider *passphrase* a sensitive value 2023-06-30 18:19:09 +02:00
Thierry Goettelmann
5f71e629ae fix(lite/components): app-menu doesn't allow more than 1 submenu (#6897) 2023-06-30 15:47:56 +02:00
rbarhtaoui
68205d4676 feat(xo-web/export,import VDI): explicit import/export raw VDI (#6925)
See zammad#15254
2023-06-30 15:10:30 +02:00
Mathieu
cdb466225d feat(xo-web,xo-server): import ISO VDI from url (#6924)
Related to zammad#15254
2023-06-30 13:47:43 +02:00
Julien Fontanet
0e7fbd598f feat(docs/rest-api): alpha → beta 2023-06-30 12:00:14 +02:00
Mathieu
99147c893d feat(xo-web): add tooltip on BulkIcons (#6895) 2023-06-29 10:56:26 +02:00
Mathieu
c63fb6173d feat(xo-web/import/disk): UI improvement for ISO files (#6874)
See https://xcp-ng.org/forum/topic/7243
2023-06-29 10:51:16 +02:00
Pierre Donias
5932ada717 chore(node-vsphere-soap): make pkg public (#6923)
Make package public and run normalize-packages on it to add the `postversion`
script to its `package.json`.
2023-06-29 10:45:06 +02:00
Mathieu
0d579748d6 fix(lite): replace 'change-power-state' by 'change-state' (#6922) 2023-06-29 10:13:02 +02:00
Pierre Donias
8c5ee4eafe feat: technical release (#6921)
* feat(@xen-orchestra/fs): 4.0.1

* feat(xen-api): 1.3.3

* feat(@vates/nbd-client): 1.2.1

* feat(@vates/node-vsphere-soap): 1.0.0

* feat(@vates/task): 0.2.0

* feat(@xen-orchestra/backups): 0.39.0

* feat(@xen-orchestra/backups-cli): 1.0.9

* feat(@xen-orchestra/mixins): 0.10.2

* feat(@xen-orchestra/proxy): 0.26.29

* feat(@xen-orchestra/vmware-explorer): 0.2.3

* feat(xo-cli): 0.20.0

* feat(xo-server): 5.117.0

* feat(xo-server-auth-oidc): 0.3.0

* feat(xo-server-perf-alert): 0.3.6

* feat(xo-web): 5.120.0

* chore(CHANGELOG): update next
2023-06-28 17:10:22 +02:00
Florent BEAUCHAMP
b03935ad2f feat(backups): can limit parallel VDI transfers per VM per job (#6787) 2023-06-28 16:47:39 +02:00
Mathieu
38439cbc43 fix(xo-web): enhance RRD stats (#6903)
- fix infinite requests
- avoid duplicate requests
2023-06-28 15:17:00 +02:00
Florent BEAUCHAMP
161c20b534 feat(xo-server): add MBR to cloud-init drive (#6889) 2023-06-28 10:42:01 +02:00
Julien Fontanet
603696dad1 fix(xo-server/rest-api): reply with 204 when non content 2023-06-27 14:43:27 +02:00
Julien Fontanet
6b2ad5a7cc feat(xo-cli rest get): new --output parameter
It can be used to save the response in a file instead of parsing it.
2023-06-27 14:43:27 +02:00
Julien Fontanet
88063d4d87 fix(xo-cli rest): params now support the json: prefix
So that any values can be passed.
2023-06-26 16:21:01 +02:00
Julien Fontanet
8956a99745 feat(xo-cli rest): support patch method 2023-06-26 16:09:32 +02:00
Florent BEAUCHAMP
0f0c0ec0d0 fix(vmware-explorer): handle self-signed certificates during download (#6908) 2023-06-26 14:24:37 +02:00
Florent BEAUCHAMP
e5932e2c33 fix(node-vsphere-soap): don't disable TLS1.2 used by ESXi (#6913) 2023-06-26 11:31:24 +02:00
Julien Fontanet
84ec8f5f3c fix(mixins/HttpProxy): fix premature close warning 2023-06-26 10:47:34 +02:00
Julien Fontanet
661c5a269f fix(mixins/HttpProxy): fix excess event listeners warning 2023-06-26 10:47:31 +02:00
Julien Fontanet
5c6d7cae66 feat(mixins/HttpProxy): debug when proxy is enabled/disabled 2023-06-26 10:41:57 +02:00
Julien Fontanet
fcc73859b7 test(node-vsphere-soap): use test
Instead of the old `lab`, which has a lot of vulnerable dependencies.
2023-06-23 17:42:28 +02:00
Julien Fontanet
36645b0319 test(node-vsphere-soap): use native assert
Instead of the old `code`, which has a lot of vulnerable dependencies.
2023-06-23 17:35:22 +02:00
Florent BEAUCHAMP
a62575e3cf docs(backups): new terminology and mirror backups (#6837)
Co-authored-by: Mathieu <70369997+MathieuRA@users.noreply.github.com>
Co-authored-by: Jon Sands <fohdeesha@gmail.com>
2023-06-23 16:33:10 +02:00
Julien Fontanet
d7af3d3c03 fix(CHANGELOG): 5.83.4 → 5.83.3 2023-06-23 14:21:45 +02:00
Julien Fontanet
130ebb7d5f Merge remote-tracking branch 'origin/5.83' 2023-06-23 14:15:34 +02:00
Julien Fontanet
2af845ebd3 feat: release 5.83.3 2023-06-23 11:11:33 +02:00
Julien Fontanet
8e4d1701e6 feat(xo-server): 5.116.4 2023-06-23 11:09:21 +02:00
Julien Fontanet
4d16b6708f feat(@xen-orchestra/proxy): 0.26.28 2023-06-23 11:09:21 +02:00
Julien Fontanet
34ee08be25 feat(@xen-orchestra/backups): 0.38.3 2023-06-23 11:09:20 +02:00
Julien Fontanet
d66a76a09e feat(xen-api): 1.3.2 2023-06-23 11:09:02 +02:00
Florent BEAUCHAMP
0d801c9766 fix(backups): fix DR not deleting older VM (#6912)
Introduced by aa36629def
2023-06-23 10:59:51 +02:00
Julien Fontanet
b82b676fdb fix(xen-api/transports/json-rpc): fix IPv6 address support
Introduced by ab96c549a
2023-06-23 10:59:27 +02:00
Gabriel Gunullu
3494c0f64f fix(xo-server-perf-alert): add conditional statement on entry (#6900)
Test if the entry is null to handle the case where the object cannot be found,
which can happen when the user forgets to remove an element that doesn't exist anymore from
the list of monitored machines.

Co-authored-by: Florent BEAUCHAMP <florent.beauchamp@vates.fr>
2023-06-23 09:04:32 +02:00
Florent BEAUCHAMP
311098adc2 feat(backups): use the right SR for health check during replication (#6902) 2023-06-22 11:35:47 +02:00
Julien Fontanet
58182e2083 fix(xen-api/transports/json-rpc): fix IPv6 address support
Introduced by ab96c549a
2023-06-22 11:08:50 +02:00
Julien Fontanet
a62ae43274 feat(xen-api/cli): allow specifying transport 2023-06-22 11:02:15 +02:00
Julien Fontanet
f256610e08 fix(xo-web): don't test a disabled remote after editing
Fixes https://team.vates.fr/vates/pl/xxezjup7efr7idcur9qtftcgfe
2023-06-22 08:43:04 +02:00
Gabriel Gunullu
983d048219 feat(xo-web/kubernetes): add version selection (#6880)
Fixes #6842
See xoa#122
2023-06-21 14:10:47 +02:00
Julien Fontanet
3c6033f904 fix(xo-server): close connections of deleted users
Fixes #5235
2023-06-21 12:03:06 +02:00
Julien Fontanet
ef2bd2b59d fix(xo-server): better token check on HTTP request
It now checks that the user associated with the authentication token really exists.

This fixes xo-web infinite refresh when the token stored in cookies belongs to a missing user.
2023-06-21 12:03:06 +02:00
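A hedged sketch of the described check, with hypothetical store interfaces (the real xo-server code differs):

```ts
interface Token { id: string; user_id: string }
interface Stores {
  getToken(id: string): Promise<Token | undefined>
  getUser(id: string): Promise<object | undefined>
}

// Reject tokens whose user no longer exists, so a stale cookie cannot
// send the web UI into an authentication/refresh loop.
async function checkAuthenticationToken(stores: Stores, tokenId: string): Promise<Token> {
  const token = await stores.getToken(tokenId)
  if (token === undefined || (await stores.getUser(token.user_id)) === undefined) {
    throw new Error('invalid authentication token')
  }
  return token
}
```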
Julien Fontanet
04d70e9aa8 chore: update dev deps 2023-06-20 18:09:09 +02:00
Julien Fontanet
a2587ffc0a fix(CHANGELOG.unreleased): missing release type for vmware-explorer
Introduced by 4c0506429
2023-06-19 09:40:33 +02:00
Julien Fontanet
6776e7bb3d fix(CHANGELOG.unreleased): missing release type for vmware-explorer
Introduced by 4c0506429
2023-06-19 09:39:53 +02:00
Florent BEAUCHAMP
4c05064294 feat(vmware-explorer): use @vates/node-vsphere-soap 2023-06-19 09:31:07 +02:00
Florent BEAUCHAMP
c135f1394f fix(node-vsphere-soap): disable tests since they need a running vsphere/esxi 2023-06-19 09:31:07 +02:00
Florent BEAUCHAMP
d68f4215f1 fix(node-vsphere-soap): better handling of self signed cert 2023-06-19 09:31:07 +02:00
Florent BEAUCHAMP
af562f3c3a chore(node-vsphere-soap): fix lint issues 2023-06-19 09:31:07 +02:00
Julien Fontanet
7b949716bc chore(node-vsphere-soap): format with Prettier 2023-06-19 09:31:07 +02:00
Florent BEAUCHAMP
d3e256289b feat(node-vsphere-soap): fork 2023-06-19 09:31:07 +02:00
Gabriel Gunullu
3688e762b1 fix(xo-web/kubernetes): change recipe description (#6878)
Introduced by eb84d4a7ef
2023-06-16 11:37:35 +02:00
Julien Fontanet
249f1a7af4 feat(backups/XO metadata): store data filename in metadata 2023-06-16 10:40:04 +02:00
Thierry Goettelmann
2de26030ff chore(lite): add type branding to XAPI record's $ref & uuid (#6884)
Type branding enhances our type safety by preventing the incorrect usage of
`XenApiRecord`'s `$ref` and `uuid`. It ensures that these types are not
interchangeable.
2023-06-15 14:01:27 +02:00
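A minimal sketch of the branding technique itself (the exact brand names in the lite codebase may differ):

```ts
// Both types are plain strings at runtime; the phantom `__brand` property
// makes them incompatible at the type level.
type Branded<B extends string> = string & { readonly __brand: B }
type XenApiRef = Branded<'ref'>
type XenApiUuid = Branded<'uuid'>

declare function getByRef(ref: XenApiRef): void
declare const uuid: XenApiUuid

// @ts-expect-error: a uuid is not assignable where a $ref is expected
getByRef(uuid)
```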
Mathieu
fcc76fb8d0 fix(xo-web/home): fix 'isHostTimeConsistentWithXoaTime.then is not a function' (#6896)
See xoa-support#15250
Introduced by 132b1a41db
2023-06-15 10:07:56 +02:00
Julien Fontanet
88d5b7095e feat(xo-web/dashboard/health): copiable orphan VDI UUIDs (#6893)
Fixes internal request by @Fohdeesha https://team.vates.fr/vates/pl/p1nsuy8gzpgxtxwrqhdzocpiaw
2023-06-15 09:45:19 +02:00
Julien Fontanet
b0e55d88de feat(xo-web): clearer display to choose new backup job type (#6894)
Fixes https://team.vates.fr/vates/pl/xsj49jtmdfgp5god81ninumr6o

- explicit replication
- separate VM and metadata backup types
- homogenize button labels
2023-06-14 10:50:59 +02:00
Mathieu
370ad3e928 feat(lite): implement "closing-confirmation" store (#6883) 2023-06-14 10:45:44 +02:00
rbarhtaoui
07bf77d2dd feat(lite/pool/VMs): ability to delete selected VMs (#6860) 2023-06-14 10:32:15 +02:00
Thierry Goettelmann
a5ec65f3c0 fix(lite): eslint error "duplicate key" (#6891) 2023-06-13 13:58:35 +02:00
Thierry Goettelmann
522b318fd9 feat(lite/dev): add keyboard shortcut to toggle language (#6888)
To make development easier, add the ability to toggle the language between FR and EN
while in development mode by pressing the `L` key (the same way the light/dark
theme can be toggled with the `D` key).
2023-06-13 10:45:26 +02:00
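A sketch of such a dev-only shortcut in a Vite-based app like XO Lite; the locale store is a stand-in, not the app's actual code:

```ts
const i18n = { locale: 'en' as 'en' | 'fr' } // stand-in for the app's locale store

// `import.meta.env.DEV` is Vite's development-mode flag.
if (import.meta.env.DEV) {
  window.addEventListener('keydown', ({ key }) => {
    if (key.toLowerCase() === 'l') {
      i18n.locale = i18n.locale === 'en' ? 'fr' : 'en'
    }
  })
}
```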
Julien Fontanet
9eb2a4033f feat(xo-server-auth-oidc): make scopes configurable and include profile by default
Fixes https://xcp-ng.org/forum/post/62185
2023-06-12 22:22:47 +02:00
Julien Fontanet
e87b0c393a chore: update dev deps 2023-06-12 22:00:52 +02:00
Mathieu
1fb7e665fa fix(xo-web/home/pool): switch alert support from 'danger' to 'warning' (#6849)
Harmonize with the host home view.
2023-06-12 11:49:47 +02:00
Thierry Goettelmann
7ea476d787 feat(lite): add alarm store (#6814) 2023-06-12 10:39:37 +02:00
Thierry Goettelmann
8260d07d61 fix(lite/i18n): "coming soon" (#6887) 2023-06-12 10:39:11 +02:00
rbarhtaoui
ac0b4e6514 fix(lite/login): fix transparent login button (#6879) 2023-06-12 10:37:47 +02:00
Pierre Donias
27b2f8cf27 docs(netbox): troubleshooting tip for 403 Forbidden (#6882) 2023-06-12 09:42:25 +02:00
Thierry Goettelmann
27b5737f65 feat(lite/pool/VMs): ability to copy selected VMs (#6847) 2023-06-09 14:59:39 +02:00
Julien Fontanet
55b2e0292f docs(task): describe combined task log 2023-06-09 09:45:46 +02:00
Julien Fontanet
464d83e70f feat(xo-web): implement XO task abortion 2023-06-09 09:45:46 +02:00
Julien Fontanet
614255a73a chore(xo-web): remove now unused aborted task status 2023-06-09 09:45:46 +02:00
Julien Fontanet
90d15e1346 feat(task): remove aborted status and add abortionRequested event
BREAKING CHANGE.
2023-06-09 09:45:46 +02:00
Julien Fontanet
b0e2ea64e9 feat(xo-server/test.createTask): dynamic name and progress 2023-06-08 14:38:22 +02:00
Julien Fontanet
1da05e239d feat(task): merge custom data into properties
BREAKING CHANGE.

This makes these entries mutable during the life of the task.
2023-06-08 14:38:22 +02:00
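A hedged sketch of what a consumer sees after these two breaking changes; the task shape is assumed, not the actual @xen-orchestra/task API:

```ts
import { EventEmitter } from 'node:events'

// Stand-in for a task instance: custom data now lives in mutable `properties`,
// and abortion is surfaced as an event instead of a dedicated `aborted` status.
const task = Object.assign(new EventEmitter(), {
  properties: { name: 'backup', progress: 0 } as Record<string, unknown>,
})

task.on('abortionRequested', () => {
  // stop the work gracefully; the runner decides the final status
})

task.properties.progress = 50 // mutable during the life of the task
```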
Thierry Goettelmann
fe7f0db81f feat(lite): revamp XAPI subscription and add immediate option (#6877)
`subscribe()` now accepts an `{ immediate: false }` option.
In this case, the subscription is deferred and can be initialized later with `.start()`.
A `createSubscribe` helper has been added to create an overridden `subscribe` function.
Full documentation has been added to `docs/xen-api-record-stores.md`.
2023-06-08 14:33:38 +02:00
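Based on that description, usage looks roughly like this (signatures assumed from the commit message; see `docs/xen-api-record-stores.md` for the real API):

```ts
declare function subscribe(options?: { immediate?: boolean }): { start(): void }

// Deferred subscription: nothing is fetched yet...
const subscription = subscribe({ immediate: false })

// ...until the component actually needs live data.
subscription.start()
```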
rbarhtaoui
983153e620 feat(lite/pool/tasks): display an error msg if data cannot be fetched (#6777) 2023-06-08 09:21:39 +02:00
Thierry Goettelmann
6fe791dcf2 feat(lite/dashboard): revamp pool dashboard (#6815)
Reworked the pool dashboard to reorder components, simplify the code, and make
the design closer to the Figma mockups.
Added a new `PoolDashboardComingSoon` component for dashboard items that are not
ready yet.
Removed `height: fit-content` from UiCard, which should not be needed anymore and
has only recent (~1.4 years) support in Firefox.
2023-06-07 14:41:08 +02:00
Florent BEAUCHAMP
1ad406c7dd test(nbd-client): test secure connection 2023-06-07 10:24:14 +02:00
Florent BEAUCHAMP
4e032e11b1 fix(nbd-client/readBlocks): BigInt handling for default generator 2023-06-07 10:24:14 +02:00
Julien Fontanet
ea34516d73 test(vhd-lib): from Jest to test 2023-06-07 10:24:14 +02:00
Thierry Goettelmann
e1145f35ee feat(lite): introduce POWER_STATE and VM_OPERATION enums (#6846) 2023-06-07 10:13:29 +02:00
Thierry Goettelmann
6864775b8a fix(lite/AppMenu): AppMenu is not displayed correctly (#6819)
The visibility of AppMenu was previously constrained to its container boundaries.
2023-06-07 09:22:27 +02:00
rbarhtaoui
f28721b847 feat(lite/pool/VMs): ability to change the VMs power state (#6782) 2023-06-06 15:46:24 +02:00
Julien Fontanet
2dc174fd9d test(task/combineEvents): use variable to ease test maintenance 2023-06-06 10:29:47 +02:00
Julien Fontanet
07142d0410 test(task/combineEvents): test id, start and end properties 2023-06-05 15:29:12 +02:00
Julien Fontanet
41bb16ca30 feat: release 5.83.2 2023-06-01 15:36:48 +02:00
Julien Fontanet
d8f1034858 feat: technical release 2023-06-01 14:25:08 +02:00
Julien Fontanet
52b3c49cdb feat(xo-server): 5.116.3 2023-06-01 14:24:58 +02:00
Julien Fontanet
c5cb1a5e96 feat(@xen-orchestra/proxy): 0.26.27 2023-06-01 14:24:07 +02:00
Julien Fontanet
92d9d3232c feat(@xen-orchestra/backups): 0.38.2 2023-06-01 14:23:49 +02:00
Florent BEAUCHAMP
9c4e0464f0 fix(backups): fix vm is undefined error (#6873) 2023-06-01 14:21:43 +02:00
Julien Fontanet
72d25754fd feat: release 5.83.1 2023-06-01 12:00:06 +02:00
Julien Fontanet
1465a0ba59 feat(xo-server): 5.116.2 2023-06-01 11:30:47 +02:00
Julien Fontanet
ac8ce28286 fix(xo-server): don't require start for Redis collections (2)
Missing change from fba86bf65
2023-06-01 11:08:52 +02:00
Julien Fontanet
c4b06e1915 feat(xo-server): 5.116.1 2023-06-01 10:48:20 +02:00
Julien Fontanet
f77675a8a3 feat(@xen-orchestra/proxy): 0.26.26 2023-06-01 10:46:31 +02:00
Julien Fontanet
b907c1fd03 feat(@xen-orchestra/backups): 0.38.1 2023-06-01 10:46:15 +02:00
Julien Fontanet
fba86bf653 fix(xo-server): don't require start for Redis collections (#6872)
Introduced by 9f3b02036

The Redis connection is usable right after the core starts, therefore collections can be created
on the `core started` event and do not require the (much heavier) `start` hook to run.

This change fixes `xo-server-recover-account`.
2023-06-01 10:36:02 +02:00
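A hedged sketch of the lifecycle change described above; `app` and `createCollection` are stand-ins for xo-server internals:

```ts
declare const app: { on(event: 'core started', cb: () => void): void }
declare function createCollection(name: string): void

// Redis is connected once the core has started, so collections can be
// created here instead of waiting for the much heavier `start` hook.
app.on('core started', () => {
  createCollection('tokens')
})
```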
Florent BEAUCHAMP
b18ebcc38d fix(backups): fix CR not deleting older VM (#6871)
scheduleId was not passed to the writers constructor. This led to a missing scheduleId in the metadata (I think there is no consequence) and to a bad filter for detecting which VMs to delete after a successful replication.

Users may need to manually delete the VMs created that way.
2023-06-01 10:33:33 +02:00
Mathieu
4f7f18458e fix(lite/console): fix console not updating when changing VM (#6850)
Introduced by 5237fdd387

`WatchEffect` is called before `Watch` so the connection was "created" then
"cleaned"
2023-05-31 16:26:19 +02:00
Julien Fontanet
d412196052 fix(CHANGELOG): badges
Introduced by 1d140d8fd
2023-05-31 16:06:28 +02:00
Julien Fontanet
1d140d8fd2 feat: release 5.83.0 2023-05-31 16:05:18 +02:00
Thierry Goettelmann
6948a25b09 fix(lite/markdown): vue code fences are no longer detected (#6845)
The `vue-template`, `vue-script`, and `vue-style` code fences were no longer
detected, and thus were no longer highlighted.
2023-05-31 15:25:59 +02:00
Julien Fontanet
26131917e3 feat(xo-web): 5.119.1 2023-05-31 11:22:12 +02:00
Mathieu
44a0ab6d0a fix(xo-web/overview): fix isMirrorBackup is not defined (#6870) 2023-05-31 11:06:03 +02:00
Julien Fontanet
2b8b033ad7 feat: technical release 2023-05-31 09:51:53 +02:00
Julien Fontanet
3ee0b3e7df feat(xo-web): 5.119.0 2023-05-31 09:47:42 +02:00
Julien Fontanet
927a55ab30 feat(xo-server): 5.116.0 2023-05-31 09:46:41 +02:00
Julien Fontanet
b70721cb60 feat(@xen-orchestra/proxy): 0.26.25 2023-05-31 09:44:14 +02:00
Julien Fontanet
f71c820f15 feat(@xen-orchestra/backups-cli): 1.0.8 2023-05-31 09:43:59 +02:00
Julien Fontanet
74e0405a5e feat(@xen-orchestra/backups): 0.38.0 2023-05-31 09:40:48 +02:00
Julien Fontanet
79b55ba30a feat(vhd-lib): 4.5.0 2023-05-31 09:36:01 +02:00
Mathieu
ee0adaebc5 feat(xo-web/backup): UI mirror backup implementation (#6858)
See #6854
2023-05-31 09:12:46 +02:00
Julien Fontanet
83c5c976e3 feat(xo-server/rest-api): limit patches listing and RPU (#6864)
Same restriction as in the UI.
2023-05-31 08:49:32 +02:00
Julien Fontanet
18bd2c607e feat(xo-server/backupNg.checkBackup): add basic XO task 2023-05-30 16:51:43 +02:00
Julien Fontanet
e2695ce327 fix(xo-server/clearHost): explicit message on missing migration network
Fixes zammad#14882
2023-05-30 16:50:50 +02:00
Florent BEAUCHAMP
3f316fcaea fix(backups): handles task end in CR without health check (#6866) 2023-05-30 16:06:23 +02:00
Florent BEAUCHAMP
8b7b162c76 feat(backups): implement mirror backup 2023-05-30 15:21:53 +02:00
Florent BEAUCHAMP
aa36629def refactor(backup/writers): pass the vm and snapshot in transfer/run 2023-05-30 15:21:53 +02:00
Pierre Donias
ca345bd6d8 feat(xo-web/task): action to open task REST API URL (#6869) 2023-05-30 14:19:50 +02:00
Florent BEAUCHAMP
61324d10f9 fix(xo-web): VHD directory tooltip (#6865) 2023-05-30 09:27:24 +02:00
Pierre Donias
92fd92ae63 feat(xo-web): XO Tasks (#6861) 2023-05-30 09:20:51 +02:00
Julien Fontanet
e48bfa2c88 feat: technical release 2023-05-26 16:50:04 +02:00
Julien Fontanet
cd5762fa19 feat(xo-web): 5.118.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
71f7a6cd6c feat(xo-server): 5.115.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8cade8b7a feat(xo-cli): 0.19.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
696c6f13f0 feat(vhd-cli): 0.9.3 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8d923d3ba feat(xo-vmdk-to-vhd): 2.5.5 2023-05-26 16:38:38 +02:00
Julien Fontanet
1a96c1bf0f feat(@xen-orchestra/proxy): 0.26.24 2023-05-26 16:38:38 +02:00
Julien Fontanet
14a01d0141 feat(@xen-orchestra/mixins): 0.10.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
74a2a4d2e5 feat(@xen-orchestra/backups-cli): 1.0.7 2023-05-26 16:38:38 +02:00
Julien Fontanet
b13b44cfd0 feat(@xen-orchestra/backups): 0.37.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
50a164423a feat(@xen-orchestra/xapi): 2.2.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
a40d50a3bd feat(vhd-lib): 4.4.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
529e33140a feat(@xen-orchestra/fs): 4.0.0 2023-05-26 16:38:38 +02:00
Mathieu
132b1a41db fix(xo-web/host-item): display alert in host-item for host inconsistent time (#6833)
See xoa-support#14626
Introduced by aadc1bb84c
2023-05-26 16:17:04 +02:00
Julien Fontanet
75948b2977 feat(xo-server/rest-api): endpoints to list pools/hosts missing patches 2023-05-26 16:11:11 +02:00
Gabriel Gunullu
eb84d4a7ef feat(xo-web/kubernetes): add number of cp choice (#6809)
See xoa#120
2023-05-26 16:08:11 +02:00
Julien Fontanet
1816d0240e refactor(fs): separate internal and public interfaces
Public interfaces may be decorated with behaviors (e.g. concurrency limits, path rewriting), which
makes them unsuitable for being called from inside the class or its children.

Internal interfaces are now prefixed with `__`.
2023-05-26 15:32:56 +02:00
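A sketch of the described split, assuming a hypothetical `limitConcurrency` decorator:

```ts
declare function limitConcurrency<T extends (...args: any[]) => any>(fn: T, n: number): T

class Handler {
  // internal interface: raw implementation, safe to call from this class
  // and its children without re-entering the decorations
  async __readFile(path: string): Promise<Buffer> {
    return Buffer.from(`contents of ${path}`) // placeholder body
  }

  // public interface: decorated, never called internally
  readFile = limitConcurrency(this.__readFile.bind(this), 4)
}
```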
Julien Fontanet
2c6d36b63e refactor(fs): use private fields where appropriate 2023-05-26 15:32:56 +02:00
Mathieu
d9776ae8ed fix(xo-web): fix various 'an error has occurred' (#6848)
See xoa-support#14631
2023-05-26 14:45:29 +02:00
Florent BEAUCHAMP
b456394663 refactor(backups): extract method forkDeltaExport 2023-05-26 13:01:15 +02:00
Florent BEAUCHAMP
94f599bdbd refactor(backups/RemoteAdapter): extract method listAllVms 2023-05-26 13:01:08 +02:00
Florent BEAUCHAMP
d466ca143a refactor(backups/runner): Vms -> VmsXapi 2023-05-26 12:48:56 +02:00
Florent BEAUCHAMP
78ed85a49f feat(backups): add ability to read only one delta instead of the full chain 2023-05-26 12:47:42 +02:00
Florent BEAUCHAMP
c24e7f9ecd refactor(backup/remoteAdapter): readDeltaVmBackup -> readIncrementalVmBackup 2023-05-26 12:24:56 +02:00
Mathieu
98caa89625 feat(xo-web/self): add default tags for self service users (#6810)
See #6812

Add default tags for Self Service users.
2023-05-26 11:45:05 +02:00
Pierre Donias
8e176eadb1 fix(xo-web): show Suse icon when distro name is opensuse (#6852)
See #6676
See #6746
See https://xcp-ng.org/forum/topic/6965
2023-05-26 09:24:30 +02:00
Julien Fontanet
444268406f fix(mixins/Tasks): update updatedAt when marking tasks as interrupted 2023-05-25 16:06:09 +02:00
Thierry Goettelmann
7e062977d0 feat(lite/component): add new Vue component UiCardSpinner (#6806)
`UiSpinner` is often used to add a spinner inside an `UiCard`, applying similar
styles. This `UiCardSpinner` component creates a homogeneous spinner to use in
these cases.
2023-05-25 14:00:23 +02:00
Mathieu
f4bf56f159 feat(xo-web/self): ability to share VMs by default (#6838)
See xoa-support#7420
2023-05-25 11:00:04 +02:00
Julien Fontanet
9f3b020361 fix(xo-server): create collection after connected to Redis
Introduced by 36b94f745

Redis is now connected in `start core` hook and should not be used before.

Some minor initialization steps (namespace and version registration) were failing silently before
this fix.
2023-05-24 17:40:20 +02:00
Julien Fontanet
ef35021a44 chore(backups,xo-server): use extractOpaqueRef from @xen-orchestra/xapi
Instead of custom implementations.
2023-05-24 12:09:42 +02:00
Julien Fontanet
b74ebd050a feat(xapi/extractOpaqueRef): expose it publicly 2023-05-24 12:07:54 +02:00
Julien Fontanet
8a16d6aa3b feat(xapi/extractOpaqueRef): add searched string to error
Helps debugging.
2023-05-24 12:07:22 +02:00
Julien Fontanet
cf7393992c chore(xapi/extractOpaqueRef): named function for better stacktraces 2023-05-24 12:05:56 +02:00
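Putting the three extractOpaqueRef commits together, a hedged sketch of the helper (the actual regexp and error message in @xen-orchestra/xapi may differ):

```ts
const OPAQUE_REF_RE = /OpaqueRef:[0-9a-z-]+/

// Named function for better stack traces; the error includes the searched
// string to help debugging.
export function extractOpaqueRef(str: string): string {
  const matches = OPAQUE_REF_RE.exec(str)
  if (matches === null) {
    throw new Error(`no opaque ref found in ${JSON.stringify(str)}`)
  }
  return matches[0]
}
```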
Thierry Goettelmann
c576114dad feat(lite): new FormInputGroup component (#6740) 2023-05-23 16:58:39 +02:00
Julien Fontanet
deeb399046 feat(xo-server/rest-api): rolling_update pool action 2023-05-23 15:35:32 +02:00
Julien Fontanet
9cf8f8f492 chore(xo-server/rest-api): also pass xoObject to actions 2023-05-23 15:35:32 +02:00
Julien Fontanet
28b7e99ebc chore(xo-server): move RPU logic from API layer to XenServers mixin 2023-05-23 15:35:32 +02:00
rbarhtaoui
0ba729e5b9 feat(lite/pool/dashboard): display error message when data is not fetched (#6776) 2023-05-23 14:40:43 +02:00
Florent BEAUCHAMP
ac8c146cf7 refactor(backups): separate full and incremental VM runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
2ba437be31 refactor(backups): separate VMs and metadata runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
bd8bb73309 refactor(backups): move Runner, VmBackup, writers and specific method to a private folder 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
485c2f4669 refactor(backups/Backup.createRunner): factory
BREAKING CHANGE: Backup can no longer be instantiated directly.
2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
6fb562d92f refactor(backups/Backup): extract getAdaptersByRemote, RemoteTimeoutError and runTasks 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
85efdcf7b9 refactor(backups/_incrementalVm): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
fc1357d5d6 refactor(backups): _deltaVm → _incrementalVm 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
88b015bda4 refactor(backups/writers) : replication → xapi 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
b46f76cccf refactor(backups/writers): backup → remote 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
c3bb2185c2 refactor(backups/writers): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
a240853fe0 refactor(backups/_VmBackup): delta → incremental 2023-05-23 09:27:47 +02:00
Thierry Goettelmann
d7ce609940 chore(lite): upgrade dependencies (#6843) 2023-05-22 10:41:39 +02:00
Florent BEAUCHAMP
1b0ec9839e fix(xo-server): import OVA with broken VMDK size in metadata (#6824)
OVAs generated by Oracle virtualization servers seem to store the size of the VMDK
instead of the disk size in the metadata.

This causes the transfer to fail when the import tries to write data past the size
of the VMDK: for example, a 50 GB disk may produce a 10 GB VMDK, and the import then
fails when it reaches data in the 10-50 GB range.
2023-05-22 10:20:04 +02:00
725 changed files with 31472 additions and 14206 deletions

View File

@@ -1,8 +1,11 @@
'use strict'
module.exports = {
arrowParens: 'avoid',
jsxSingleQuote: true,
semi: false,
singleQuote: true,
trailingComma: 'es5',
// 2020-11-24: Requested by nraynaud and approved by the rest of the team
//

View File

@@ -33,7 +33,7 @@
"test": "node--test"
},
"devDependencies": {
"sinon": "^15.0.1",
"sinon": "^17.0.1",
"tap": "^16.3.0",
"test": "^3.2.1"
}

View File

@@ -13,12 +13,15 @@ describe('decorateWith', () => {
const expectedFn = Function.prototype
const newFn = () => {}
-  const decorator = decorateWith(function wrapper(fn, ...args) {
-    assert.deepStrictEqual(fn, expectedFn)
-    assert.deepStrictEqual(args, expectedArgs)
-    return newFn
-  }, ...expectedArgs)
+  const decorator = decorateWith(
+    function wrapper(fn, ...args) {
+      assert.deepStrictEqual(fn, expectedFn)
+      assert.deepStrictEqual(args, expectedArgs)
+      return newFn
+    },
+    ...expectedArgs
+  )
const descriptor = {
configurable: true,

View File

@@ -29,7 +29,7 @@
"ensure-array": "^1.0.0"
},
"devDependencies": {
"sinon": "^15.0.1",
"sinon": "^17.0.1",
"test": "^3.2.1"
}
}

View File

@@ -1,9 +1,7 @@
'use strict'
const LRU = require('lru-cache')
const Fuse = require('fuse-native')
const { VhdSynthetic } = require('vhd-lib')
const { Disposable, fromCallback } = require('promise-toolbox')
import LRU from 'lru-cache'
import Fuse from 'fuse-native'
import { VhdSynthetic } from 'vhd-lib'
import { Disposable, fromCallback } from 'promise-toolbox'
// build a stat object from https://github.com/fuse-friends/fuse-native/blob/master/test/fixtures/stat.js
const stat = st => ({
@@ -16,7 +14,7 @@ const stat = st => ({
gid: st.gid !== undefined ? st.gid : process.getgid(),
})
exports.mount = Disposable.factory(async function* mount(handler, diskPath, mountDir) {
export const mount = Disposable.factory(async function* mount(handler, diskPath, mountDir) {
const vhd = yield VhdSynthetic.fromVhdChain(handler, diskPath)
const cache = new LRU({

View File

@@ -1,6 +1,6 @@
{
"name": "@vates/fuse-vhd",
"version": "1.0.0",
"version": "2.0.0",
"license": "ISC",
"private": false,
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@vates/fuse-vhd",
@@ -15,13 +15,14 @@
"url": "https://vates.fr"
},
"engines": {
"node": ">=10.0"
"node": ">=14"
},
"main": "./index.mjs",
"dependencies": {
"fuse-native": "^2.2.6",
"lru-cache": "^7.14.0",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.4.0"
"vhd-lib": "^4.6.1"
},
"scripts": {
"postversion": "npm publish --access public"

View File

@@ -42,8 +42,8 @@ function get(node, i, keys) {
? node.value
: node
: node instanceof Node
? get(node.children.get(keys[i]), i + 1, keys)
: undefined
? get(node.children.get(keys[i]), i + 1, keys)
: undefined
}
function set(node, i, keys, value) {

View File

@@ -1,42 +0,0 @@
'use strict'
exports.INIT_PASSWD = Buffer.from('NBDMAGIC') // "NBDMAGIC" ensure we're connected to a nbd server
exports.OPTS_MAGIC = Buffer.from('IHAVEOPT') // "IHAVEOPT" start an option block
exports.NBD_OPT_REPLY_MAGIC = 1100100111001001n // magic received during negotiation
exports.NBD_OPT_EXPORT_NAME = 1
exports.NBD_OPT_ABORT = 2
exports.NBD_OPT_LIST = 3
exports.NBD_OPT_STARTTLS = 5
exports.NBD_OPT_INFO = 6
exports.NBD_OPT_GO = 7
exports.NBD_FLAG_HAS_FLAGS = 1 << 0
exports.NBD_FLAG_READ_ONLY = 1 << 1
exports.NBD_FLAG_SEND_FLUSH = 1 << 2
exports.NBD_FLAG_SEND_FUA = 1 << 3
exports.NBD_FLAG_ROTATIONAL = 1 << 4
exports.NBD_FLAG_SEND_TRIM = 1 << 5
exports.NBD_FLAG_FIXED_NEWSTYLE = 1 << 0
exports.NBD_CMD_FLAG_FUA = 1 << 0
exports.NBD_CMD_FLAG_NO_HOLE = 1 << 1
exports.NBD_CMD_FLAG_DF = 1 << 2
exports.NBD_CMD_FLAG_REQ_ONE = 1 << 3
exports.NBD_CMD_FLAG_FAST_ZERO = 1 << 4
exports.NBD_CMD_READ = 0
exports.NBD_CMD_WRITE = 1
exports.NBD_CMD_DISC = 2
exports.NBD_CMD_FLUSH = 3
exports.NBD_CMD_TRIM = 4
exports.NBD_CMD_CACHE = 5
exports.NBD_CMD_WRITE_ZEROES = 6
exports.NBD_CMD_BLOCK_STATUS = 7
exports.NBD_CMD_RESIZE = 8
exports.NBD_REQUEST_MAGIC = 0x25609513 // magic number to create a new NBD request to send to the server
exports.NBD_REPLY_MAGIC = 0x67446698 // magic number received from the server when reading response to a nbd request
exports.NBD_REPLY_ACK = 1
exports.NBD_DEFAULT_PORT = 10809
exports.NBD_DEFAULT_BLOCK_SIZE = 64 * 1024

View File

@@ -0,0 +1,41 @@
export const INIT_PASSWD = Buffer.from('NBDMAGIC') // "NBDMAGIC" ensure we're connected to a nbd server
export const OPTS_MAGIC = Buffer.from('IHAVEOPT') // "IHAVEOPT" start an option block
export const NBD_OPT_REPLY_MAGIC = 1100100111001001n // magic received during negotiation
export const NBD_OPT_EXPORT_NAME = 1
export const NBD_OPT_ABORT = 2
export const NBD_OPT_LIST = 3
export const NBD_OPT_STARTTLS = 5
export const NBD_OPT_INFO = 6
export const NBD_OPT_GO = 7
export const NBD_FLAG_HAS_FLAGS = 1 << 0
export const NBD_FLAG_READ_ONLY = 1 << 1
export const NBD_FLAG_SEND_FLUSH = 1 << 2
export const NBD_FLAG_SEND_FUA = 1 << 3
export const NBD_FLAG_ROTATIONAL = 1 << 4
export const NBD_FLAG_SEND_TRIM = 1 << 5
export const NBD_FLAG_FIXED_NEWSTYLE = 1 << 0
export const NBD_CMD_FLAG_FUA = 1 << 0
export const NBD_CMD_FLAG_NO_HOLE = 1 << 1
export const NBD_CMD_FLAG_DF = 1 << 2
export const NBD_CMD_FLAG_REQ_ONE = 1 << 3
export const NBD_CMD_FLAG_FAST_ZERO = 1 << 4
export const NBD_CMD_READ = 0
export const NBD_CMD_WRITE = 1
export const NBD_CMD_DISC = 2
export const NBD_CMD_FLUSH = 3
export const NBD_CMD_TRIM = 4
export const NBD_CMD_CACHE = 5
export const NBD_CMD_WRITE_ZEROES = 6
export const NBD_CMD_BLOCK_STATUS = 7
export const NBD_CMD_RESIZE = 8
export const NBD_REQUEST_MAGIC = 0x25609513 // magic number to create a new NBD request to send to the server
export const NBD_REPLY_MAGIC = 0x67446698 // magic number received from the server when reading response to a nbd request
export const NBD_REPLY_ACK = 1
export const NBD_DEFAULT_PORT = 10809
export const NBD_DEFAULT_BLOCK_SIZE = 64 * 1024

View File

@@ -1,8 +1,11 @@
'use strict'
const assert = require('node:assert')
const { Socket } = require('node:net')
const { connect } = require('node:tls')
const {
import assert from 'node:assert'
import { Socket } from 'node:net'
import { connect } from 'node:tls'
import { fromCallback, pRetry, pDelay, pTimeout, pFromCallback } from 'promise-toolbox'
import { readChunkStrict } from '@vates/read-chunk'
import { createLogger } from '@xen-orchestra/log'
import {
INIT_PASSWD,
NBD_CMD_READ,
NBD_DEFAULT_BLOCK_SIZE,
@@ -17,16 +20,14 @@ const {
NBD_REQUEST_MAGIC,
OPTS_MAGIC,
NBD_CMD_DISC,
} = require('./constants.js')
const { fromCallback, pRetry, pDelay, pTimeout } = require('promise-toolbox')
const { readChunkStrict } = require('@vates/read-chunk')
const { createLogger } = require('@xen-orchestra/log')
} from './constants.mjs'
import { Readable } from 'node:stream'
const { warn } = createLogger('vates:nbd-client')
// documentation is here : https://github.com/NetworkBlockDevice/nbd/blob/master/doc/proto.md
module.exports = class NbdClient {
export default class NbdClient {
#serverAddress
#serverCert
#serverPort
@@ -40,6 +41,7 @@ module.exports = class NbdClient {
#readBlockRetries
#reconnectRetry
#connectTimeout
#messageTimeout
// AFAIK, there is no guarantee that the server answers in the same order as the queries
// so we handle a backlog of commands waiting for a response and handle concurrency manually
@@ -52,7 +54,14 @@ module.exports = class NbdClient {
#reconnectingPromise
constructor(
{ address, port = NBD_DEFAULT_PORT, exportname, cert },
{ connectTimeout = 6e4, waitBeforeReconnect = 1e3, readAhead = 10, readBlockRetries = 5, reconnectRetry = 5 } = {}
{
connectTimeout = 6e4,
messageTimeout = 6e4,
waitBeforeReconnect = 1e3,
readAhead = 10,
readBlockRetries = 5,
reconnectRetry = 5,
} = {}
) {
this.#serverAddress = address
this.#serverPort = port
@@ -63,6 +72,7 @@ module.exports = class NbdClient {
this.#readBlockRetries = readBlockRetries
this.#reconnectRetry = reconnectRetry
this.#connectTimeout = connectTimeout
this.#messageTimeout = messageTimeout
}
get exportSize() {
@@ -116,12 +126,24 @@ module.exports = class NbdClient {
return
}
const queryId = this.#nextCommandQueryId
this.#nextCommandQueryId++
const buffer = Buffer.alloc(28)
buffer.writeInt32BE(NBD_REQUEST_MAGIC, 0) // it is a nbd request
buffer.writeInt16BE(0, 4) // no command flags for a disconnect
buffer.writeInt16BE(NBD_CMD_DISC, 6) // we want to disconnect from nbd server
await this.#write(buffer)
await this.#serverSocket.destroy()
buffer.writeBigUInt64BE(queryId, 8)
buffer.writeBigUInt64BE(0n, 16)
buffer.writeInt32BE(0, 24)
const promise = pFromCallback(cb => {
this.#serverSocket.end(buffer, 'utf8', cb)
})
try {
await pTimeout.call(promise, this.#messageTimeout)
} catch (error) {
this.#serverSocket.destroy()
}
this.#serverSocket = undefined
this.#connected = false
}
@@ -195,11 +217,13 @@ module.exports = class NbdClient {
}
#read(length) {
return readChunkStrict(this.#serverSocket, length)
const promise = readChunkStrict(this.#serverSocket, length)
return pTimeout.call(promise, this.#messageTimeout)
}
#write(buffer) {
return fromCallback.call(this.#serverSocket, 'write', buffer)
const promise = fromCallback.call(this.#serverSocket, 'write', buffer)
return pTimeout.call(promise, this.#messageTimeout)
}
async #readInt32() {
@@ -232,19 +256,20 @@ module.exports = class NbdClient {
}
try {
this.#waitingForResponse = true
const magic = await this.#readInt32()
const buffer = await this.#read(16)
const magic = buffer.readInt32BE(0)
if (magic !== NBD_REPLY_MAGIC) {
throw new Error(`magic number for block answer is wrong : ${magic} ${NBD_REPLY_MAGIC}`)
}
const error = await this.#readInt32()
const error = buffer.readInt32BE(4)
if (error !== 0) {
// @todo use error code from constants.mjs
throw new Error(`GOT ERROR CODE : ${error}`)
}
const blockQueryId = await this.#readInt64()
const blockQueryId = buffer.readBigUInt64BE(8)
const query = this.#commandQueryBacklog.get(blockQueryId)
if (!query) {
throw new Error(` no query associated with id ${blockQueryId}`)
@@ -281,7 +306,13 @@ module.exports = class NbdClient {
buffer.writeInt16BE(NBD_CMD_READ, 6) // we want to read a data block
buffer.writeBigUInt64BE(queryId, 8)
// byte offset in the raw disk
buffer.writeBigUInt64BE(BigInt(index) * BigInt(size), 16)
const offset = BigInt(index) * BigInt(size)
const remaining = this.#exportSize - offset
if (remaining < BigInt(size)) {
size = Number(remaining)
}
buffer.writeBigUInt64BE(offset, 16)
buffer.writeInt32BE(size, 24)
return new Promise((resolve, reject) => {
@@ -307,11 +338,12 @@ module.exports = class NbdClient {
})
}
async *readBlocks(indexGenerator) {
async *readBlocks(indexGenerator = 2 * 1024 * 1024) {
// default : read all blocks
if (indexGenerator === undefined) {
const exportSize = this.#exportSize
const chunkSize = 2 * 1024 * 1024
if (typeof indexGenerator === 'number') {
const exportSize = Number(this.#exportSize)
const chunkSize = indexGenerator
indexGenerator = function* () {
const nbBlocks = Math.ceil(exportSize / chunkSize)
for (let index = 0; index < nbBlocks; index++) {
@@ -348,4 +380,15 @@ module.exports = class NbdClient {
yield readAhead.shift()
}
}
stream(chunkSize) {
async function* iterator() {
for await (const chunk of this.readBlocks(chunkSize)) {
yield chunk
}
}
// create a readable stream instead of returning the iterator
// since iterators don't like unshift and partial reading
return Readable.from(iterator())
}
}
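Reading this diff, the client's read API now accepts a plain chunk size and can expose a Node stream. A usage sketch under those assumptions (the import path and option shapes are not guaranteed):

```ts
import NbdClient from '@vates/nbd-client' // assumed import path

const client = new NbdClient({ address: '127.0.0.1', exportname: 'MY_EXPORT', cert: '...' })
await client.connect()

// readBlocks() now also accepts a numeric chunk size (default 2 MiB);
// the last block is clamped to the end of the export.
for await (const block of client.readBlocks(1024 * 1024)) {
  // each `block` is a Buffer of up to 1 MiB
}

// stream() wraps the same iterator in a Readable, which tolerates
// unshift and partial reads better than a bare iterator.
client.stream(2 * 1024 * 1024).pipe(process.stdout)

await client.disconnect()
```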

View File

@@ -1,76 +0,0 @@
'use strict'
const NbdClient = require('./index.js')
const { spawn } = require('node:child_process')
const fs = require('node:fs/promises')
const { test } = require('tap')
const tmp = require('tmp')
const { pFromCallback } = require('promise-toolbox')
const { asyncEach } = require('@vates/async-each')
const FILE_SIZE = 2 * 1024 * 1024
async function createTempFile(size) {
const tmpPath = await pFromCallback(cb => tmp.file(cb))
const data = Buffer.alloc(size, 0)
for (let i = 0; i < size; i += 4) {
data.writeUInt32BE(i, i)
}
await fs.writeFile(tmpPath, data)
return tmpPath
}
test('it works with unsecured network', async tap => {
const path = await createTempFile(FILE_SIZE)
const nbdServer = spawn(
'nbdkit',
[
'file',
path,
'--newstyle', //
'--exit-with-parent',
'--read-only',
'--export-name=MY_SECRET_EXPORT',
],
{
stdio: ['inherit', 'inherit', 'inherit'],
}
)
const client = new NbdClient({
address: 'localhost',
exportname: 'MY_SECRET_EXPORT',
secure: false,
})
await client.connect()
tap.equal(client.exportSize, BigInt(FILE_SIZE))
const CHUNK_SIZE = 128 * 1024 // non default size
const indexes = []
for (let i = 0; i < FILE_SIZE / CHUNK_SIZE; i++) {
indexes.push(i)
}
// read multiple blocks in parallel
await asyncEach(
indexes,
async i => {
const block = await client.readBlock(i, CHUNK_SIZE)
let blockOk = true
let firstFail
for (let j = 0; j < CHUNK_SIZE; j += 4) {
const wanted = i * CHUNK_SIZE + j
const found = block.readUInt32BE(j)
blockOk = blockOk && found === wanted
if (!blockOk && firstFail === undefined) {
firstFail = j
}
}
tap.ok(blockOk, `check block ${i} content`)
},
{ concurrency: 8 }
)
await client.disconnect()
nbdServer.kill()
await fs.unlink(path)
})

View File

@@ -13,17 +13,18 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "1.2.0",
"version": "2.0.0",
"engines": {
"node": ">=14.0"
},
"main": "./index.mjs",
"dependencies": {
"@vates/async-each": "^1.0.0",
"@vates/read-chunk": "^1.1.1",
"@vates/read-chunk": "^1.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"promise-toolbox": "^0.21.0",
"xen-api": "^1.3.1"
"xen-api": "^1.3.6"
},
"devDependencies": {
"tap": "^16.3.0",
@@ -31,6 +32,6 @@
},
"scripts": {
"postversion": "npm publish --access public",
"test-integration": "tap --lines 70 --functions 36 --branches 54 --statements 69 *.integ.js"
"test-integration": "tap --lines 97 --functions 95 --branches 74 --statements 97 tests/*.integ.mjs"
}
}

View File

@@ -0,0 +1,182 @@
Public Key Info:
Public Key Algorithm: RSA
Key Security Level: High (3072 bits)
modulus:
00:be:92:be:df:de:0a:ab:38:fc:1a:c0:1a:58:4d:86
b8:1f:25:10:7d:19:05:17:bf:02:3d:e9:ef:f8:c0:04
5d:6f:98:de:5c:dd:c3:0f:e2:61:61:e4:b5:9c:42:ac
3e:af:fd:30:10:e1:54:32:66:75:f6:80:90:85:05:a0
6a:14:a2:6f:a7:2e:f0:f3:52:94:2a:f2:34:fc:0d:b4
fb:28:5d:1c:11:5c:59:6e:63:34:ba:b3:fd:73:b1:48
35:00:84:53:da:6a:9b:84:ab:64:b1:a1:2b:3a:d1:5a
d7:13:7c:12:2a:4e:72:e9:96:d6:30:74:c5:71:05:14
4b:2d:01:94:23:67:4e:37:3c:1e:c1:a0:bc:34:04:25
21:11:fb:4b:6b:53:74:8f:90:93:57:af:7f:3b:78:d6
a4:87:fe:7d:ed:20:11:8b:70:54:67:b8:c9:f5:c0:6b
de:4e:e7:a5:79:ff:f7:ad:cf:10:57:f5:51:70:7b:54
68:28:9e:b9:c2:10:7b:ab:aa:11:47:9f:ec:e6:2f:09
44:4a:88:5b:dd:8c:10:b4:c4:03:25:06:d9:e0:9f:a0
0d:cf:94:4b:3b:fa:a5:17:2c:e4:67:c4:17:6a:ab:d8
c8:7a:16:41:b9:91:b7:9c:ae:8c:94:be:26:61:51:71
c1:a6:39:39:97:75:28:a9:0e:21:ea:f0:bd:71:4a:8c
e1:f8:1d:a9:22:2f:10:a8:1b:e5:a4:9a:fd:0f:fa:c6
20:bc:96:99:79:c6:ba:a4:1f:3e:d4:91:c5:af:bb:71
0a:5a:ef:69:9c:64:69:ce:5a:fe:3f:c2:24:f4:26:d4
3d:ab:ab:9a:f0:f6:f1:b1:64:a9:f4:e2:34:6a:ab:2e
95:47:b9:07:5a:39:c6:95:9c:a9:e8:ed:71:dd:c1:21
16:c8:2d:4c:2c:af:06:9d:c6:fa:fe:c5:2a:6c:b4:c3
d5:96:fc:5e:fd:ec:1c:30:b4:9d:cb:29:ef:a8:50:1c
21:
public exponent:
01:00:01:
private exponent:
25:37:c5:7d:35:01:02:65:73:9e:c9:cb:9b:59:30:a9
3e:b3:df:5f:7f:06:66:97:d0:19:45:59:af:4b:d8:ce
62:a0:09:35:3b:bd:ff:99:27:89:95:bf:fe:0f:6b:52
26:ce:9c:97:7f:5a:11:29:bf:79:ef:ab:c9:be:ca:90
4d:0d:58:1e:df:65:01:30:2c:6d:a2:b5:c4:4f:ec:fb
6b:eb:9b:32:ac:c5:6e:70:83:78:be:f4:0d:a7:1e:c1
f3:22:e4:b9:70:3e:85:0f:6f:ef:dc:d8:f3:78:b5:73
f1:83:36:8c:fa:9b:28:91:63:ad:3c:f0:de:5c:ae:94
eb:ea:36:03:20:06:bf:74:c7:50:eb:52:36:1a:65:21
eb:40:17:7f:93:61:dd:33:d0:02:bc:ec:6d:31:f1:41
5a:a9:d1:f0:00:66:4c:c4:18:47:d5:67:e3:cd:bb:83
44:07:ab:62:83:21:dc:d8:e6:89:37:08:bb:9d:ea:62
c2:5d:ce:85:c2:dc:48:27:0c:a4:23:61:b7:30:e7:26
44:dc:1e:5c:2e:16:35:2b:2e:a6:e6:a4:ce:1f:9b:e9
fe:96:fa:49:1d:fb:2a:df:bc:bf:46:da:52:f8:37:8a
84:ab:e4:73:e6:46:56:b5:b4:3d:e1:63:eb:02:8e:d7
67:96:c4:dc:28:6d:6b:b6:0c:a3:0b:db:87:29:ad:f9
ec:73:b6:55:a3:40:32:13:84:c7:2f:33:74:04:dc:42
00:11:9c:fb:fc:62:35:b3:82:c3:3c:28:80:e8:09:a8
97:c7:c1:2e:3d:27:fa:4f:9b:fc:c2:34:58:41:5c:a1
e2:70:2e:2f:82:ad:bd:bd:8e:dd:23:12:25:de:89:70
60:75:48:90:80:ac:55:74:51:6f:49:9e:7f:63:41:8b
3c:b1:f5:c3:6b:4b:5a:50:a6:4d:38:e8:82:c2:04:c8
30:fd:06:9b:c1:04:27:b6:63:3a:5e:f5:4d:00:c3:d1
prime1:
00:f6:00:2e:7d:89:61:24:16:5e:87:ca:18:6c:03:b8
b4:33:df:4a:a7:7f:db:ed:39:15:41:12:61:4f:4e:b4
de:ab:29:d9:0c:6c:01:7e:53:2e:ee:e7:5f:a2:e4:6d
c6:4b:07:4e:d8:a3:ae:45:06:97:bd:18:a3:e9:dd:29
54:64:6d:f0:af:08:95:ae:ae:3e:71:63:76:2a:a1:18
c4:b1:fc:bc:3d:42:15:74:b3:c5:38:1f:5d:92:f1:b2
c6:3f:10:fe:35:1a:c6:b1:ce:70:38:ff:08:5c:de:61
79:c7:50:91:22:4d:e9:c8:18:49:e2:5c:91:84:86:e2
4d:0f:6e:9b:0d:81:df:aa:f3:59:75:56:e9:33:18:dd
ab:39:da:e2:25:01:05:a1:6e:23:59:15:2c:89:35:c7
ae:9c:c7:ea:88:9a:1a:f3:48:07:11:82:59:79:8c:62
53:06:37:30:14:b3:82:b1:50:fc:ae:b8:f7:1c:57:44
7d:
prime2:
00:c6:51:cc:dc:88:2e:cf:98:90:10:19:e0:d3:a4:d1
3f:dc:b0:29:d3:bb:26:ee:eb:00:17:17:d1:d1:bb:9b
34:b1:4e:af:b5:6c:1c:54:53:b4:bb:55:da:f7:78:cd
38:b4:2e:3a:8c:63:80:3b:64:9c:b4:2b:cd:dd:50:0b
05:d2:00:7a:df:8e:c3:e6:29:e0:9c:d8:40:b7:11:09
f4:38:df:f6:ed:93:1e:18:d4:93:fa:8d:ee:82:9c:0f
c1:88:26:84:9d:4f:ae:8a:17:d5:55:54:4c:c6:0a:ac
4d:ec:33:51:68:0f:4b:92:2e:04:57:fe:15:f5:00:46
5c:8e:ad:09:2c:e7:df:d5:36:7a:4e:bd:da:21:22:d7
58:b4:72:93:94:af:34:cc:e2:b8:d0:4f:0b:5d:97:08
12:19:17:34:c5:15:49:00:48:56:13:b8:45:4e:3b:f8
bc:d5:ab:d9:6d:c2:4a:cc:01:1a:53:4d:46:50:49:3b
75:
coefficient:
63:67:50:29:10:6a:85:a3:dc:51:90:20:76:86:8c:83
8e:d5:ff:aa:75:fd:b5:f8:31:b0:96:6c:18:1d:5b:ed
a4:2e:47:8d:9c:c2:1e:2c:a8:6d:4b:10:a5:c2:53:46
8a:9a:84:91:d7:fc:f5:cc:03:ce:b9:3d:5c:01:d2:27
99:7b:79:89:4f:a1:12:e3:05:5d:ee:10:f6:8c:e6:ce
5e:da:32:56:6d:6f:eb:32:b4:75:7b:94:49:d8:2d:9e
4d:19:59:2e:e4:0b:bc:95:df:df:65:67:a1:dd:c6:2b
99:f4:76:e8:9f:fa:57:1d:ca:f9:58:a9:ce:9b:30:5c
42:8a:ba:05:e7:e2:15:45:25:bc:e9:68:c1:8b:1a:37
cc:e1:aa:45:2e:94:f5:81:47:1e:64:7f:c0:c1:b7:a8
21:58:18:a9:a0:ed:e0:27:75:bf:65:81:6b:e4:1d:5a
b7:7e:df:d8:28:c6:36:21:19:c8:6e:da:ca:9e:da:84
exp1:
00:ba:d7:fe:77:a9:0d:98:2c:49:56:57:c0:5e:e2:20
ba:f6:1f:26:03:bc:d0:5d:08:9b:45:16:61:c4:ab:e2
22:b1:dc:92:17:a6:3d:28:26:a4:22:1e:a8:7b:ff:86
05:33:5d:74:9c:85:0d:cb:2d:ab:b8:9b:6b:7c:28:57
c8:da:92:ca:59:17:6b:21:07:05:34:78:37:fb:3e:ea
a2:13:12:04:23:7e:fa:ee:ed:cf:e0:c5:a9:fb:ff:0a
2b:1b:21:9c:02:d7:b8:8c:ba:60:70:59:fc:8f:14:f4
f2:5a:d9:ad:b2:61:7d:2c:56:8e:5f:98:b1:89:f8:2d
10:1c:a5:84:ad:28:b4:aa:92:34:a3:34:04:e1:a3:84
52:16:1a:52:e3:8a:38:2d:99:8a:cd:91:90:87:12:ca
fc:ab:e6:08:14:03:00:6f:41:88:e4:da:9d:7c:fd:8c
7c:c4:de:cb:ed:1d:3f:29:d0:7a:6b:76:df:71:ae:32
bd:
exp2:
4a:e9:d3:6c:ea:b4:64:0e:c9:3c:8b:c9:f5:a8:a8:b2
6a:f6:d0:95:fe:78:32:7f:ea:c4:ce:66:9f:c7:32:55
b1:34:7c:03:18:17:8b:73:23:2e:30:bc:4a:07:03:de
8b:91:7a:e4:55:21:b7:4d:c6:33:f8:e8:06:d5:99:94
55:43:81:26:b9:93:1e:7a:6b:32:54:2d:fd:f9:1d:bd
77:4e:82:c4:33:72:87:06:a5:ef:5b:75:e1:38:7a:6b
2c:b7:00:19:3c:64:3e:1d:ca:a4:34:f7:db:47:64:d6
fa:86:58:15:ea:d1:2d:22:dc:d9:30:4d:b3:02:ab:91
83:03:b2:17:98:6f:60:e6:f7:44:8f:4a:ba:81:a2:bf
0b:4a:cc:9c:b9:a2:44:52:d0:65:3f:b6:97:5f:d9:d8
9c:49:bb:d1:46:bd:10:b2:42:71:a8:85:e5:8b:99:e6
1b:00:93:5d:76:ab:32:6c:a8:39:17:53:9c:38:4d:91
Public Key PIN:
pin-sha256:ISh/UeFjUG5Gwrpx6hMUGQPvg9wOKjOkHmRbs4YjZqs=
Public Key ID:
sha256:21287f51e163506e46c2ba71ea13141903ef83dc0e2a33a41e645bb3862366ab
sha1:1a48455111ac45fb5807c5cdb7b20b896c52f0b6
-----BEGIN RSA PRIVATE KEY-----
MIIG4wIBAAKCAYEAvpK+394Kqzj8GsAaWE2GuB8lEH0ZBRe/Aj3p7/jABF1vmN5c
3cMP4mFh5LWcQqw+r/0wEOFUMmZ19oCQhQWgahSib6cu8PNSlCryNPwNtPsoXRwR
XFluYzS6s/1zsUg1AIRT2mqbhKtksaErOtFa1xN8EipOcumW1jB0xXEFFEstAZQj
Z043PB7BoLw0BCUhEftLa1N0j5CTV69/O3jWpIf+fe0gEYtwVGe4yfXAa95O56V5
//etzxBX9VFwe1RoKJ65whB7q6oRR5/s5i8JREqIW92MELTEAyUG2eCfoA3PlEs7
+qUXLORnxBdqq9jIehZBuZG3nK6MlL4mYVFxwaY5OZd1KKkOIerwvXFKjOH4Haki
LxCoG+Wkmv0P+sYgvJaZeca6pB8+1JHFr7txClrvaZxkac5a/j/CJPQm1D2rq5rw
9vGxZKn04jRqqy6VR7kHWjnGlZyp6O1x3cEhFsgtTCyvBp3G+v7FKmy0w9WW/F79
7BwwtJ3LKe+oUBwhAgMBAAECggGAJTfFfTUBAmVznsnLm1kwqT6z319/BmaX0BlF
Wa9L2M5ioAk1O73/mSeJlb/+D2tSJs6cl39aESm/ee+ryb7KkE0NWB7fZQEwLG2i
tcRP7Ptr65syrMVucIN4vvQNpx7B8yLkuXA+hQ9v79zY83i1c/GDNoz6myiRY608
8N5crpTr6jYDIAa/dMdQ61I2GmUh60AXf5Nh3TPQArzsbTHxQVqp0fAAZkzEGEfV
Z+PNu4NEB6tigyHc2OaJNwi7nepiwl3OhcLcSCcMpCNhtzDnJkTcHlwuFjUrLqbm
pM4fm+n+lvpJHfsq37y/RtpS+DeKhKvkc+ZGVrW0PeFj6wKO12eWxNwobWu2DKML
24cprfnsc7ZVo0AyE4THLzN0BNxCABGc+/xiNbOCwzwogOgJqJfHwS49J/pPm/zC
NFhBXKHicC4vgq29vY7dIxIl3olwYHVIkICsVXRRb0mef2NBizyx9cNrS1pQpk04
6ILCBMgw/QabwQQntmM6XvVNAMPRAoHBAPYALn2JYSQWXofKGGwDuLQz30qnf9vt
ORVBEmFPTrTeqynZDGwBflMu7udfouRtxksHTtijrkUGl70Yo+ndKVRkbfCvCJWu
rj5xY3YqoRjEsfy8PUIVdLPFOB9dkvGyxj8Q/jUaxrHOcDj/CFzeYXnHUJEiTenI
GEniXJGEhuJND26bDYHfqvNZdVbpMxjdqzna4iUBBaFuI1kVLIk1x66cx+qImhrz
SAcRgll5jGJTBjcwFLOCsVD8rrj3HFdEfQKBwQDGUczciC7PmJAQGeDTpNE/3LAp
07sm7usAFxfR0bubNLFOr7VsHFRTtLtV2vd4zTi0LjqMY4A7ZJy0K83dUAsF0gB6
347D5ingnNhAtxEJ9Djf9u2THhjUk/qN7oKcD8GIJoSdT66KF9VVVEzGCqxN7DNR
aA9Lki4EV/4V9QBGXI6tCSzn39U2ek692iEi11i0cpOUrzTM4rjQTwtdlwgSGRc0
xRVJAEhWE7hFTjv4vNWr2W3CSswBGlNNRlBJO3UCgcEAutf+d6kNmCxJVlfAXuIg
uvYfJgO80F0Im0UWYcSr4iKx3JIXpj0oJqQiHqh7/4YFM110nIUNyy2ruJtrfChX
yNqSylkXayEHBTR4N/s+6qITEgQjfvru7c/gxan7/worGyGcAte4jLpgcFn8jxT0
8lrZrbJhfSxWjl+YsYn4LRAcpYStKLSqkjSjNATho4RSFhpS44o4LZmKzZGQhxLK
/KvmCBQDAG9BiOTanXz9jHzE3svtHT8p0Hprdt9xrjK9AoHASunTbOq0ZA7JPIvJ
9aiosmr20JX+eDJ/6sTOZp/HMlWxNHwDGBeLcyMuMLxKBwPei5F65FUht03GM/jo
BtWZlFVDgSa5kx56azJULf35Hb13ToLEM3KHBqXvW3XhOHprLLcAGTxkPh3KpDT3
20dk1vqGWBXq0S0i3NkwTbMCq5GDA7IXmG9g5vdEj0q6gaK/C0rMnLmiRFLQZT+2
l1/Z2JxJu9FGvRCyQnGoheWLmeYbAJNddqsybKg5F1OcOE2RAoHAY2dQKRBqhaPc
UZAgdoaMg47V/6p1/bX4MbCWbBgdW+2kLkeNnMIeLKhtSxClwlNGipqEkdf89cwD
zrk9XAHSJ5l7eYlPoRLjBV3uEPaM5s5e2jJWbW/rMrR1e5RJ2C2eTRlZLuQLvJXf
32Vnod3GK5n0duif+lcdyvlYqc6bMFxCiroF5+IVRSW86WjBixo3zOGqRS6U9YFH
HmR/wMG3qCFYGKmg7eAndb9lgWvkHVq3ft/YKMY2IRnIbtrKntqE
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1,168 @@
import NbdClient from '../index.mjs'
import { spawn, exec } from 'node:child_process'
import fs from 'node:fs/promises'
import { test } from 'tap'
import tmp from 'tmp'
import { pFromCallback } from 'promise-toolbox'
import { Socket } from 'node:net'
import { NBD_DEFAULT_PORT } from '../constants.mjs'
import assert from 'node:assert'
const FILE_SIZE = 10 * 1024 * 1024
async function createTempFile(size) {
const tmpPath = await pFromCallback(cb => tmp.file(cb))
const data = Buffer.alloc(size, 0)
for (let i = 0; i < size; i += 4) {
data.writeUInt32BE(i, i)
}
await fs.writeFile(tmpPath, data)
return tmpPath
}
async function spawnNbdKit(path) {
let tries = 5
// wait for server to be ready
const nbdServer = spawn(
'nbdkit',
[
'file',
path,
'--newstyle', //
'--exit-with-parent',
'--read-only',
'--export-name=MY_SECRET_EXPORT',
'--tls=on',
'--tls-certificates=./tests/',
// '--tls-verify-peer',
// '--verbose',
'--exit-with-parent',
],
{
stdio: ['inherit', 'inherit', 'inherit'],
}
)
nbdServer.on('error', err => {
console.error(err)
})
do {
try {
const socket = new Socket()
await new Promise((resolve, reject) => {
socket.connect(NBD_DEFAULT_PORT, 'localhost')
socket.once('error', reject)
socket.once('connect', resolve)
})
socket.destroy()
break
} catch (err) {
tries--
if (tries <= 0) {
throw err
} else {
await new Promise(resolve => setTimeout(resolve, 1000))
}
}
} while (true)
return nbdServer
}
async function killNbdKit() {
return new Promise((resolve, reject) =>
exec('pkill -9 -f -o nbdkit', err => {
err ? reject(err) : resolve()
})
)
}
test('it works with unsecured network', async tap => {
const path = await createTempFile(FILE_SIZE)
let nbdServer = await spawnNbdKit(path)
const client = new NbdClient(
{
address: '127.0.0.1',
exportname: 'MY_SECRET_EXPORT',
cert: `-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUeHpQ0IeD6BmP2zgsv3LV3J4BI/EwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMzA1MTcxMzU1MzBaFw0yNDA1
MTYxMzU1MzBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC/8wLopj/iZY6ijmpvgCJsl+zY0hQZQcIoaCs0H75u
8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZolevaSJLNT2Iolscvc2W9NCF4N1V6y
zs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh67u+uI40732AfQqD01BNCTD/uHRB
lKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y2SJVTeT4a1sSJixl6I1YPmt80FJh
gq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULwdJOGgmqGRDzgZKJS5UUpxe/ViEO4
59I18vIkgibaRYhENgmnP3lIzTOLlUe07tbSML5RGBbBAgMBAAGjUzBRMB0GA1Ud
DgQWBBR/8+zYoL0H0LdWfULHg1LynFdSbzAfBgNVHSMEGDAWgBR/8+zYoL0H0LdW
fULHg1LynFdSbzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBD
OF5bTmbDEGoZ6OuQaI0vyya/T4FeaoWmh22gLeL6dEEmUVGJ1NyMTOvG9GiGJ8OM
QhD1uHJei45/bXOYIDGey2+LwLWye7T4vtRFhf8amYh0ReyP/NV4/JoR/U3pTSH6
tns7GZ4YWdwUhvOOlm17EQKVO/hP3t9mp74gcjdL4bCe5MYSheKuNACAakC1OR0U
ZakJMP9ijvQuq8spfCzrK+NbHKNHR9tEgQw+ax/t1Au4dGVtFbcoxqCrx2kTl0RP
CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
57ABhfYpMlZ4aHjuN1bL
-----END CERTIFICATE-----
`,
},
{
readAhead: 2,
}
)
await client.connect()
tap.equal(client.exportSize, BigInt(FILE_SIZE))
const CHUNK_SIZE = 1024 * 1024 // non default size
const indexes = []
for (let i = 0; i < FILE_SIZE / CHUNK_SIZE; i++) {
indexes.push(i)
}
const nbdIterator = client.readBlocks(function* () {
for (const index of indexes) {
yield { index, size: CHUNK_SIZE }
}
})
let i = 0
for await (const block of nbdIterator) {
let blockOk = true
let firstFail
for (let j = 0; j < CHUNK_SIZE; j += 4) {
const wanted = i * CHUNK_SIZE + j
const found = block.readUInt32BE(j)
blockOk = blockOk && found === wanted
if (!blockOk && firstFail === undefined) {
firstFail = j
}
}
tap.ok(blockOk, `check block ${i} content`)
i++
// flaky server is flaky
if (i % 7 === 0) {
// kill the older nbdkit process
await killNbdKit()
nbdServer = await spawnNbdKit(path)
}
}
// we can reuse the connection to read other blocks
// default iterator
const nbdIteratorWithDefaultBlockIterator = client.readBlocks()
let nb = 0
for await (const block of nbdIteratorWithDefaultBlockIterator) {
nb++
tap.equal(block.length, 2 * 1024 * 1024)
}
tap.equal(nb, 5)
assert.rejects(() => client.readBlock(100, CHUNK_SIZE))
await client.disconnect()
// double disconnection shouldn't pose any problem
await client.disconnect()
nbdServer.kill()
await fs.unlink(path)
})

View File

@@ -0,0 +1,21 @@
-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUeHpQ0IeD6BmP2zgsv3LV3J4BI/EwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMzA1MTcxMzU1MzBaFw0yNDA1
MTYxMzU1MzBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC/8wLopj/iZY6ijmpvgCJsl+zY0hQZQcIoaCs0H75u
8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZolevaSJLNT2Iolscvc2W9NCF4N1V6y
zs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh67u+uI40732AfQqD01BNCTD/uHRB
lKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y2SJVTeT4a1sSJixl6I1YPmt80FJh
gq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULwdJOGgmqGRDzgZKJS5UUpxe/ViEO4
59I18vIkgibaRYhENgmnP3lIzTOLlUe07tbSML5RGBbBAgMBAAGjUzBRMB0GA1Ud
DgQWBBR/8+zYoL0H0LdWfULHg1LynFdSbzAfBgNVHSMEGDAWgBR/8+zYoL0H0LdW
fULHg1LynFdSbzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBD
OF5bTmbDEGoZ6OuQaI0vyya/T4FeaoWmh22gLeL6dEEmUVGJ1NyMTOvG9GiGJ8OM
QhD1uHJei45/bXOYIDGey2+LwLWye7T4vtRFhf8amYh0ReyP/NV4/JoR/U3pTSH6
tns7GZ4YWdwUhvOOlm17EQKVO/hP3t9mp74gcjdL4bCe5MYSheKuNACAakC1OR0U
ZakJMP9ijvQuq8spfCzrK+NbHKNHR9tEgQw+ax/t1Au4dGVtFbcoxqCrx2kTl0RP
CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
57ABhfYpMlZ4aHjuN1bL
-----END CERTIFICATE-----

View File

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC/8wLopj/iZY6i
jmpvgCJsl+zY0hQZQcIoaCs0H75u8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZol
evaSJLNT2Iolscvc2W9NCF4N1V6yzs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh
67u+uI40732AfQqD01BNCTD/uHRBlKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y
2SJVTeT4a1sSJixl6I1YPmt80FJhgq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULw
dJOGgmqGRDzgZKJS5UUpxe/ViEO459I18vIkgibaRYhENgmnP3lIzTOLlUe07tbS
ML5RGBbBAgMBAAECggEATLYiafcTHfgnZmjTOad0WoDnC4n9tVBV948WARlUooLS
duL3RQRHCLz9/ZaTuFA1XDpNcYyc/B/IZoU7aJGZR3+JSmJBjowpUphu+klVNNG4
i6lDRrzYlUI0hfdLjHsDTDBIKi91KcB0lix/VkvsrVQvDHwsiR2ZAIiVWAWQFKrR
5O3DhSTHbqyq47uR58rWr4Zf3zvZaUl841AS1yELzCiZqz7AenvyWphim0c0XA5d
I63CEShntHnEAA9OMcP8+BNf/3AmqB4welY+m8elB3aJNH+j7DKq/AWqaM5nl2PC
cS6qgpxwOyTxEOyj1xhwK5ZMRR3heW3NfutIxSOPlwKBgQDB9ZkrBeeGVtCISO7C
eCANzSLpeVrahTvaCSQLdPHsLRLDUc+5mxdpi3CaRlzYs3S1OWdAtyWX9mBryltF
qDPhCNjFDyHok4D3wLEWdS9oUVwEKUM8fOPW3tXLLiMM7p4862Qo7LqnqHzPqsnz
22iZo5yjcc7aLJ+VmFrbAowwOwKBgQD9WNCvczTd7Ymn7zEvdiAyNoS0OZ0orwEJ
zGaxtjqVguGklNfrb/UB+eKNGE80+YnMiSaFc9IQPetLntZdV0L7kWYdCI8kGDNA
DbVRCOp+z8DwAojlrb/zsYu23anQozT3WeHxVU66lNuyEQvSW2tJa8gN1htrD7uY
5KLibYrBMwKBgEM0iiHyJcrSgeb2/mO7o7+keJhVSDm3OInP6QFfQAQJihrLWiKB
rpcPjbCm+LzNUX8JqNEvpIMHB1nR/9Ye9frfSdzd5W3kzicKSVHywL5wkmWOtpFa
5Mcq5wFDtzlf5MxO86GKhRJauwRptRgdyhySKFApuva1x4XaCIEiXNjJAoGBAN82
t3c+HCBEv3o05rMYcrmLC1T3Rh6oQlPtwbVmByvfywsFEVCgrc/16MPD3VWhXuXV
GRmPuE8THxLbead30M5xhvShq+xzXgRbj5s8Lc9ZIHbW5OLoOS1vCtgtaQcoJOyi
Rs4pCVqe+QpktnO6lEZ2Libys+maTQEiwNibBxu9AoGAUG1V5aKMoXa7pmGeuFR6
ES+1NDiCt6yDq9BsLZ+e2uqvWTkvTGLLwvH6xf9a0pnnILd0AUTKAAaoUdZS6++E
cGob7fxMwEE+UETp0QBgLtfjtExMOFwr2avw8PV4CYEUkPUAm2OFB2Twh+d/PNfr
FAxF1rN47SBPNbFI8N4TFsg=
-----END PRIVATE KEY-----

View File

@@ -0,0 +1 @@
../../scripts/npmignore

View File

@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 reedog117
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,127 @@
forked from https://github.com/reedog117/node-vsphere-soap
# node-vsphere-soap
[![Join the chat at https://gitter.im/reedog117/node-vsphere-soap](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/reedog117/node-vsphere-soap?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
This is a Node.js module to connect to VMware vCenter servers and/or ESXi hosts and perform operations using the [vSphere Web Services API]. If you're feeling really adventurous, you can use this module to port vSphere operations from other languages (such as the Perl, Python, and Go libraries that exist) and have fully native Node.js code controlling your VMware virtual infrastructure!
This is very much in alpha.
## Authors
- Patrick C - [@reedog117]
## Version
0.0.2-5
## Installation
```sh
$ npm install node-vsphere-soap --save
```
## Sample Code
### To connect to a vCenter server:
var nvs = require('node-vsphere-soap');
var vc = new nvs.Client(host, user, password, sslVerify);
vc.once('ready', function() {
// perform work here
});
vc.once('error', function(err) {
// handle error here
});
#### Arguments
- host = hostname or IP of vCenter/ESX/ESXi server
- user = username
- password = password
- sslVerify = true|false - set to false if you have self-signed/unverified certificates
#### Events
- ready = emits when session authenticated with server
- error = emits when there's an error
- _err_ contains the error
#### Client instance variables
- serviceContent - ServiceContent object retrieved by RetrieveServiceContent API call
- userName - username of authenticated user
- fullName - full name of authenticated user
### To run a command:
var vcCmd = vc.runCommand( commandToRun, arguments );
vcCmd.once('result', function( result, raw, soapHeader) {
// handle results
});
vcCmd.once('error', function( err) {
// handle errors
});
#### Arguments
- commandToRun = Method from the vSphere API
- arguments = JSON document containing arguments to send
#### Events
- result = emits when session authenticated with server
- _result_ contains the JSON-formatted result from the server
- _raw_ contains the raw SOAP XML response from the server
- _soapHeader_ contains any soapHeaders from the server
- error = emits when there's an error
- _err_ contains the error
Make sure you check out tests/vsphere-soap.test.js for examples on how to create commands to run
## Development
node-vsphere-soap uses a number of open source projects to work properly:
- [node.js] - evented I/O for the backend
- [node-soap] - SOAP client for Node.js
- [soap-cookie] - cookie authentication for the node-soap module
- [lodash] - for quickly manipulating JSON
- [lab] - testing engine
- [code] - assertion engine used with lab
Want to contribute? Great!
### Todo's
- Write More Tests
- Create Travis CI test harness with a fake vCenter Instance
- Add Code Comments
### Testing
I have been testing on a Mac with node v0.10.36 and both ESXi and vCenter 5.5.
To edit tests, edit the file **test/vsphere-soap.test.js**
To point the module at your own vCenter/ESXi host, edit **config-test.stub.js** and save it as **config-test.js**
To run test scripts:
```sh
$ npm test
```
## License
MIT
[vSphere Web Services API]: http://pubs.vmware.com/vsphere-55/topic/com.vmware.wssdk.apiref.doc/right-pane.html
[node-soap]: https://github.com/vpulim/node-soap
[node.js]: http://nodejs.org/
[soap-cookie]: https://github.com/shanestillwell/soap-cookie
[code]: https://github.com/hapijs/code
[lab]: https://github.com/hapijs/lab
[lodash]: https://lodash.com/
[@reedog117]: http://www.twitter.com/reedog117

View File

@@ -0,0 +1,230 @@
/*
node-vsphere-soap
client.js
This file creates the Client class
- when the class is instantiated, a connection will be made to the ESXi/vCenter server to verify that the creds are good
- upon a bad login, the connection will be terminated
*/
import { EventEmitter } from 'events'
import axios from 'axios'
import https from 'node:https'
import util from 'util'
import soap from 'soap'
import Cookie from 'soap-cookie' // required for session persistence
// Client class
// inherits from EventEmitter
// possible events: connect, error, ready
export function Client(vCenterHostname, username, password, sslVerify) {
this.status = 'disconnected'
this.reconnectCount = 0
sslVerify = typeof sslVerify !== 'undefined' ? sslVerify : false
EventEmitter.call(this)
// sslVerify argument handling
if (sslVerify) {
this.clientopts = {}
} else {
this.clientopts = {
request: axios.create({
httpsAgent: new https.Agent({
rejectUnauthorized: false,
}),
}),
}
}
this.connectionInfo = {
host: vCenterHostname,
user: username,
password,
sslVerify,
}
this._loginArgs = {
userName: this.connectionInfo.user,
password: this.connectionInfo.password,
}
this._vcUrl = 'https://' + this.connectionInfo.host + '/sdk/vimService.wsdl'
// connect to the vCenter / ESXi host
this.on('connect', this._connect)
this.emit('connect')
// close session
this.on('close', this._close)
return this
}
util.inherits(Client, EventEmitter)
Client.prototype.runCommand = function (command, args) {
const self = this
let cmdargs
if (!args || args === null) {
cmdargs = {}
} else {
cmdargs = args
}
const emitter = new EventEmitter()
// check if client has successfully connected
if (self.status === 'ready' || self.status === 'connecting') {
self.client.VimService.VimPort[command](cmdargs, function (err, result, raw, soapHeader) {
if (err) {
_soapErrorHandler(self, emitter, command, cmdargs, err)
}
if (command === 'Logout') {
self.status = 'disconnected'
process.removeAllListeners('beforeExit')
}
emitter.emit('result', result, raw, soapHeader)
})
} else {
// if connection not ready or connecting, reconnect to instance
if (self.status === 'disconnected') {
self.emit('connect')
}
self.once('ready', function () {
self.client.VimService.VimPort[command](cmdargs, function (err, result, raw, soapHeader) {
if (err) {
_soapErrorHandler(self, emitter, command, cmdargs, err)
}
if (command === 'Logout') {
self.status = 'disconnected'
process.removeAllListeners('beforeExit')
}
emitter.emit('result', result, raw, soapHeader)
})
})
}
return emitter
}
Client.prototype.close = function () {
const self = this
self.emit('close')
}
Client.prototype._connect = function () {
const self = this
if (self.status !== 'disconnected') {
return
}
self.status = 'connecting'
soap.createClient(
self._vcUrl,
self.clientopts,
function (err, client) {
if (err) {
self.emit('error', err)
throw err
}
self.client = client // save client for later use
self
.runCommand('RetrieveServiceContent', { _this: 'ServiceInstance' })
.once('result', function (result, raw, soapHeader) {
if (!result.returnval) {
self.status = 'disconnected'
self.emit('error', raw)
return
}
self.serviceContent = result.returnval
self.sessionManager = result.returnval.sessionManager
const loginArgs = { _this: self.sessionManager, ...self._loginArgs }
self
.runCommand('Login', loginArgs)
.once('result', function (result, raw, soapHeader) {
self.authCookie = new Cookie(client.lastResponseHeaders)
self.client.setSecurity(self.authCookie) // needed since vSphere SOAP WS uses cookies
self.userName = result.returnval.userName
self.fullName = result.returnval.fullName
self.reconnectCount = 0
self.status = 'ready'
self.emit('ready')
process.once('beforeExit', self._close)
})
.once('error', function (err) {
self.status = 'disconnected'
self.emit('error', err)
})
})
.once('error', function (err) {
self.status = 'disconnected'
self.emit('error', err)
})
},
self._vcUrl
)
}
Client.prototype._close = function () {
const self = this
if (self.status === 'ready') {
self
.runCommand('Logout', { _this: self.sessionManager })
.once('result', function () {
self.status = 'disconnected'
})
.once('error', function () {
/* ignore errors during disconnection */
self.status = 'disconnected'
})
} else {
self.status = 'disconnected'
}
}
function _soapErrorHandler(self, emitter, command, args, err) {
err = err || { body: 'general error' }
if (typeof err.body === 'string' && err.body.match(/session is not authenticated/)) {
self.status = 'disconnected'
process.removeAllListeners('beforeExit')
if (self.reconnectCount < 10) {
self.reconnectCount += 1
self
.runCommand(command, args)
.once('result', function (result, raw, soapHeader) {
emitter.emit('result', result, raw, soapHeader)
})
.once('error', function (err) {
emitter.emit('error', err.body)
throw err
})
} else {
emitter.emit('error', err.body)
throw err
}
} else {
emitter.emit('error', err.body)
throw err
}
}
// end
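A minimal usage sketch of this client, assuming the entry point declared in the package.json below and a reachable vCenter or ESXi host; the constructor emits `connect` on itself, `runCommand` returns an emitter, and commands issued before `ready` are queued until the session is up:

import { Client } from '@vates/node-vsphere-soap'

const client = new Client('vcsa.example.org', 'user', 'password', false)
client.once('ready', function () {
  // the session is authenticated and serviceContent is populated
  client
    .runCommand('CurrentTime', { _this: 'ServiceInstance' })
    .once('result', function (result) {
      console.log('host time:', result.returnval)
      client.close()
    })
    .once('error', console.error)
})
client.once('error', function (err) {
  console.error('connection failed:', err)
})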

View File

@@ -0,0 +1,38 @@
{
"name": "@vates/node-vsphere-soap",
"version": "2.0.0",
"description": "interface to vSphere SOAP/WSDL from node for interfacing with vCenter or ESXi, forked from node-vsphere-soap",
"main": "lib/client.mjs",
"author": "reedog117",
"repository": {
"directory": "@vates/node-vsphere-soap",
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"axios": "^1.4.0",
"soap": "^1.0.0",
"soap-cookie": "^0.10.1"
},
"devDependencies": {
"test": "^3.3.0"
},
"keywords": [
"vsphere",
"vcenter",
"api",
"soap",
"wsdl"
],
"preferGlobal": false,
"license": "MIT",
"private": false,
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@vates/node-vsphere-soap",
"engines": {
"node": ">=14"
},
"scripts": {
"postversion": "npm publish --access public"
}
}

View File

@@ -0,0 +1,11 @@
// place your own credentials here for a vCenter or ESXi server
// this information will be used for connecting to a vCenter instance
// for module testing
// name the file config-test.mjs (it is imported as ../config-test.mjs by the tests)
export const vCenterTestCreds = {
vCenterIP: 'vcsa',
vCenterUser: 'vcuser',
vCenterPassword: 'vcpw',
vCenter: true,
}

View File

@@ -0,0 +1,138 @@
/*
vsphere-soap.test.js
tests for the Client class
*/
import assert from 'assert'
import { describe, it } from 'test'
import * as vc from '../lib/client.mjs'
// eslint-disable-next-line n/no-missing-import
import { vCenterTestCreds as TestCreds } from '../config-test.mjs'
const VItest = new vc.Client(TestCreds.vCenterIP, TestCreds.vCenterUser, TestCreds.vCenterPassword, false)
describe('Client object initialization:', function () {
it('provides a successful login', { timeout: 5000 }, function (t, done) {
VItest.once('ready', function () {
assert.notEqual(VItest.userName, null)
assert.notEqual(VItest.fullName, null)
assert.notEqual(VItest.serviceContent, null)
done()
}).once('error', function (err) {
console.error(err)
// this should fail if there's a problem
assert.notEqual(VItest.userName, null)
assert.notEqual(VItest.fullName, null)
assert.notEqual(VItest.serviceContent, null)
done()
})
})
})
describe('Client reconnection test:', function () {
it('can successfully reconnect', { timeout: 5000 }, function (t, done) {
VItest.runCommand('Logout', { _this: VItest.serviceContent.sessionManager })
.once('result', function (result) {
// now we're logged out, so let's try running a command to test automatic re-login
VItest.runCommand('CurrentTime', { _this: 'ServiceInstance' })
.once('result', function (result) {
assert(result.returnval instanceof Date)
done()
})
.once('error', function (err) {
console.error(err)
})
})
.once('error', function (err) {
console.error(err)
})
})
})
// these tests don't work yet
describe('Client tests - query commands:', function () {
it('retrieves current time', { timeout: 5000 }, function (t, done) {
VItest.runCommand('CurrentTime', { _this: 'ServiceInstance' }).once('result', function (result) {
assert(result.returnval instanceof Date)
done()
})
})
it('retrieves current time 2 (check for event clobbering)', { timeout: 5000 }, function (t, done) {
VItest.runCommand('CurrentTime', { _this: 'ServiceInstance' }).once('result', function (result) {
assert(result.returnval instanceof Date)
done()
})
})
it('can obtain the names of all Virtual Machines in the inventory', { timeout: 20000 }, function (t, done) {
// get property collector
const propertyCollector = VItest.serviceContent.propertyCollector
// get view manager
const viewManager = VItest.serviceContent.viewManager
// get root folder
const rootFolder = VItest.serviceContent.rootFolder
let containerView, objectSpec, traversalSpec, propertySpec, propertyFilterSpec
// build a container view over all VirtualMachines, then walk it with the property collector to get each VM's name
VItest.runCommand('CreateContainerView', {
_this: viewManager,
container: rootFolder,
type: ['VirtualMachine'],
recursive: true,
}).once('result', function (result) {
// build all the data structures needed to query all the vm names
containerView = result.returnval
objectSpec = {
attributes: { 'xsi:type': 'ObjectSpec' }, // setting attributes xsi:type is important or else the server may mis-recognize types!
obj: containerView,
skip: true,
}
traversalSpec = {
attributes: { 'xsi:type': 'TraversalSpec' },
name: 'traverseEntities',
type: 'ContainerView',
path: 'view',
skip: false,
}
objectSpec = { ...objectSpec, selectSet: [traversalSpec] }
propertySpec = {
attributes: { 'xsi:type': 'PropertySpec' },
type: 'VirtualMachine',
pathSet: ['name'],
}
propertyFilterSpec = {
attributes: { 'xsi:type': 'PropertyFilterSpec' },
propSet: [propertySpec],
objectSet: [objectSpec],
}
// TODO: research why it fails if propSet is declared after objectSet
VItest.runCommand('RetrievePropertiesEx', {
_this: propertyCollector,
specSet: [propertyFilterSpec],
options: { attributes: { type: 'RetrieveOptions' } },
})
.once('result', function (result, raw) {
assert.notEqual(result.returnval.objects, null)
if (Array.isArray(result.returnval.objects)) {
assert.strictEqual(result.returnval.objects[0].obj.attributes.type, 'VirtualMachine')
} else {
assert.strictEqual(result.returnval.objects.obj.attributes.type, 'VirtualMachine')
}
done()
})
.once('error', function (err) {
console.error('\n\nlast request : ' + VItest.client.lastRequest, err)
})
})
})
})

View File

@@ -1,6 +1,7 @@
'use strict'
const assert = require('assert')
const isUtf8 = require('isutf8')
/**
* Read a chunk of data from a stream.
@@ -21,41 +22,41 @@ const readChunk = (stream, size) =>
stream.errored != null
? Promise.reject(stream.errored)
: stream.closed || stream.readableEnded
? Promise.resolve(null)
: new Promise((resolve, reject) => {
if (size !== undefined) {
assert(size > 0)
// per Node documentation:
// > The size argument must be less than or equal to 1 GiB.
assert(size < 1073741824)
}
function onEnd() {
resolve(null)
removeListeners()
}
function onError(error) {
reject(error)
removeListeners()
}
function onReadable() {
const data = stream.read(size)
if (data !== null) {
resolve(data)
removeListeners()
}
}
function removeListeners() {
stream.removeListener('end', onEnd)
stream.removeListener('error', onError)
stream.removeListener('readable', onReadable)
}
stream.on('end', onEnd)
stream.on('error', onError)
stream.on('readable', onReadable)
onReadable()
})
exports.readChunk = readChunk
/**
@@ -81,6 +82,13 @@ exports.readChunkStrict = async function readChunkStrict(stream, size) {
if (size !== undefined && chunk.length !== size) {
const error = new Error(`stream has ended with not enough data (actual: ${chunk.length}, expected: ${size})`)
// Buffer.isUtf8 is too recent for now
// TODO: replace the external package with Buffer.isUtf8 once the minimum supported Node version reaches 18
if (chunk.length < 1024 && isUtf8(chunk)) {
error.text = chunk.toString('utf8')
}
Object.defineProperties(error, {
chunk: {
value: chunk,
@@ -103,42 +111,42 @@ async function skip(stream, size) {
return stream.errored != null
? Promise.reject(stream.errored)
: size === 0 || stream.closed || stream.readableEnded
? Promise.resolve(0)
: new Promise((resolve, reject) => {
let left = size
function onEnd() {
resolve(size - left)
removeListeners()
}
function onError(error) {
reject(error)
removeListeners()
}
function onReadable() {
const data = stream.read()
left -= data === null ? 0 : data.length
if (left > 0) {
// continue to read
} else {
// if more than wanted has been read, push back the rest
if (left < 0) {
stream.unshift(data.slice(left))
}
resolve(size)
removeListeners()
}
}
function removeListeners() {
stream.removeListener('end', onEnd)
stream.removeListener('error', onError)
stream.removeListener('readable', onReadable)
}
stream.on('end', onEnd)
stream.on('error', onError)
stream.on('readable', onReadable)
onReadable()
})
}
exports.skip = skip
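A short usage sketch of the three exports, assuming the published package name and a byte-mode stream; it mirrors the "put back if it read too much" test below:

const { Readable } = require('stream')
const { readChunk, readChunkStrict, skip } = require('@vates/read-chunk')

async function main() {
  const stream = Readable.from([Buffer.from('foo'), Buffer.from('bar')], { objectMode: false })
  await skip(stream, 1) // discards 'f'; data read past the requested size is unshifted back
  const chunk = await readChunkStrict(stream, 2) // 'oo'; throws if the stream ends short
  const rest = await readChunk(stream) // next available data, or null once the stream has ended
  console.log(chunk.toString(), rest && rest.toString())
}

main().catch(console.error)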

View File

@@ -102,12 +102,37 @@ describe('readChunkStrict', function () {
assert.strictEqual(error.chunk, undefined)
})
it('throws if stream ends with not enough data', async () => {
it('throws if stream ends with not enough data, utf8', async () => {
const error = await rejectionOf(readChunkStrict(makeStream(['foo', 'bar']), 10))
assert(error instanceof Error)
assert.strictEqual(error.message, 'stream has ended with not enough data (actual: 6, expected: 10)')
assert.strictEqual(error.text, 'foobar')
assert.deepEqual(error.chunk, Buffer.from('foobar'))
})
it('throws if stream ends with not enough data, non utf8 ', async () => {
const source = [Buffer.alloc(10, 128), Buffer.alloc(10, 128)]
const error = await rejectionOf(readChunkStrict(makeStream(source), 30))
assert(error instanceof Error)
assert.strictEqual(error.message, 'stream has ended with not enough data (actual: 20, expected: 30)')
assert.strictEqual(error.text, undefined)
assert.deepEqual(error.chunk, Buffer.concat(source))
})
it('throws if stream ends with not enough data, utf8 , long data', async () => {
const source = Buffer.from('a'.repeat(1500))
const error = await rejectionOf(readChunkStrict(makeStream([source]), 2000))
assert(error instanceof Error)
assert.strictEqual(error.message, `stream has ended with not enough data (actual: 1500, expected: 2000)`)
assert.strictEqual(error.text, undefined)
assert.deepEqual(error.chunk, source)
})
it('succeed', async () => {
const source = Buffer.from('a'.repeat(20))
const chunk = await readChunkStrict(makeStream([source]), 10)
assert.deepEqual(source.subarray(10), chunk)
})
})
describe('skip', function () {
@@ -134,6 +159,16 @@ describe('skip', function () {
it('returns less size if stream ends', async () => {
assert.deepEqual(await skip(makeStream('foo bar'), 10), 7)
})
it('put back if it read too much', async () => {
let source = makeStream(['foo', 'bar'])
await skip(source, 1) // read part of data chunk
const chunk = (await readChunkStrict(source, 2)).toString('utf-8')
assert.strictEqual(chunk, 'oo')
source = makeStream(['foo', 'bar'])
assert.strictEqual(await skip(source, 3), 3) // read aligned with data chunk
})
})
describe('skipStrict', function () {
@@ -144,4 +179,9 @@ describe('skipStrict', function () {
assert.strictEqual(error.message, 'stream has ended with not enough data (actual: 7, expected: 10)')
assert.deepEqual(error.bytesSkipped, 7)
})
it('succeed', async () => {
const source = makeStream(['foo', 'bar', 'baz'])
const res = await skipStrict(source, 4)
assert.strictEqual(res, undefined)
})
})

View File

@@ -19,7 +19,7 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "1.1.1",
"version": "1.2.0",
"engines": {
"node": ">=8.10"
},
@@ -33,5 +33,8 @@
},
"devDependencies": {
"test": "^3.2.1"
},
"dependencies": {
"isutf8": "^4.0.0"
}
}

View File

@@ -27,7 +27,7 @@
"license": "ISC",
"version": "0.1.0",
"engines": {
"node": ">=10"
"node": ">=12.3"
},
"scripts": {
"postversion": "npm publish --access public",

View File

@@ -2,10 +2,8 @@
import { Task } from '@vates/task'
const task = new Task({
// data in this object will be sent along the *start* event
//
// property names should be chosen as not to clash with properties used by `Task` or `combineEvents`
data: {
// this object will be sent in the *start* event
properties: {
name: 'my task',
},
@@ -16,13 +14,15 @@ const task = new Task({
// this function is called each time this task or one of its subtasks changes state
const { id, timestamp, type } = event
if (type === 'start') {
const { name, parentId } = event
const { name, parentId, properties } = event
} else if (type === 'end') {
const { result, status } = event
} else if (type === 'info' || type === 'warning') {
const { data, message } = event
} else if (type === 'property') {
const { name, value } = event
} else if (type === 'abortionRequested') {
const { reason } = event
}
},
})
@@ -36,7 +36,6 @@ task.id
// - pending
// - success
// - failure
// - aborted
task.status
// Triggers the abort signal associated to the task.
@@ -89,6 +88,30 @@ const onProgress = makeOnProgress({
onRootTaskStart(taskLog) {
// `taskLog` is an object reflecting the state of this task and all its subtasks,
// and will be mutated in real-time to reflect the changes of the task.
// timestamp at which the task started
taskLog.start
// current status of the task as described in the previous section
taskLog.status
// undefined or a dictionary of properties attached to the task
taskLog.properties
// timestamp at which the abortion was requested, undefined otherwise
taskLog.abortionRequestedAt
// undefined or an array of infos emitted on the task
taskLog.infos
// undefined or an array of warnings emitted on the task
taskLog.warnings
// timestamp at which the task ended, undefined otherwise
taskLog.end
// undefined or the result value of the task
taskLog.result
},
// This function is called each time a root task ends.

View File

@@ -18,10 +18,8 @@ npm install --save @vates/task
import { Task } from '@vates/task'
const task = new Task({
// data in this object will be sent along the *start* event
//
// property names should be chosen as not to clash with properties used by `Task` or `combineEvents`
data: {
// this object will be sent in the *start* event
properties: {
name: 'my task',
},
@@ -32,13 +30,15 @@ const task = new Task({
// this function is called each time this task or one of its subtasks changes state
const { id, timestamp, type } = event
if (type === 'start') {
const { name, parentId } = event
const { name, parentId, properties } = event
} else if (type === 'end') {
const { result, status } = event
} else if (type === 'info' || type === 'warning') {
const { data, message } = event
} else if (type === 'property') {
const { name, value } = event
} else if (type === 'abortionRequested') {
const { reason } = event
}
},
})
@@ -52,7 +52,6 @@ task.id
// - pending
// - success
// - failure
// - aborted
task.status
// Triggers the abort signal associated to the task.
@@ -105,6 +104,30 @@ const onProgress = makeOnProgress({
onRootTaskStart(taskLog) {
// `taskLog` is an object reflecting the state of this task and all its subtasks,
// and will be mutated in real-time to reflect the changes of the task.
// timestamp at which the task started
taskLog.start
// current status of the task as described in the previous section
taskLog.status
// undefined or a dictionary of properties attached to the task
taskLog.properties
// timestamp at which the abortion was requested, undefined otherwise
taskLog.abortionRequestedAt
// undefined or an array of infos emitted on the task
taskLog.infos
// undefined or an array of warnings emitted on the task
taskLog.warnings
// timestamp at which the task ended, undefined otherwise
taskLog.end
// undefined or the result value of the task
taskLog.result
},
// This function is called each time a root task ends.
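A sketch of the event sequence under the new abort semantics, mirroring the tests further down: the `aborted` status is gone, and an abort now surfaces as an `abortionRequested` event followed by a `failure` end when the task is not running.

import { Task } from '@vates/task'

const events = []
const task = new Task({
  properties: { name: 'demo' },
  onProgress: event => events.push(event.type),
})

task.abort(new Error('canceled before start'))

console.log(task.status) // 'failure'
console.log(events) // ['start', 'abortionRequested', 'end']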

View File

@@ -4,36 +4,18 @@ const assert = require('node:assert').strict
const noop = Function.prototype
function omit(source, keys, target = { __proto__: null }) {
for (const key of Object.keys(source)) {
if (!keys.has(key)) {
target[key] = source[key]
}
}
return target
}
const IGNORED_START_PROPS = new Set([
'end',
'infos',
'properties',
'result',
'status',
'tasks',
'timestamp',
'type',
'warnings',
])
exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noop, onTaskUpdate = noop }) {
const taskLogs = new Map()
return function onProgress(event) {
const { id, type } = event
let taskLog
if (type === 'start') {
taskLog = omit(event, IGNORED_START_PROPS)
taskLog.start = event.timestamp
taskLog.status = 'pending'
taskLog = {
id,
properties: { __proto__: null, ...event.properties },
start: event.timestamp,
status: 'pending',
}
taskLogs.set(id, taskLog)
const { parentId } = event
@@ -65,6 +47,8 @@ exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noo
taskLog.end = event.timestamp
taskLog.result = event.result
taskLog.status = event.status
} else if (type === 'abortionRequested') {
taskLog.abortionRequestedAt = event.timestamp
}
if (type === 'end' && taskLog.$root === taskLog) {

View File

@@ -11,7 +11,7 @@ describe('makeOnProgress()', function () {
const events = []
let log
const task = new Task({
data: { name: 'task' },
properties: { name: 'task' },
onProgress: makeOnProgress({
onRootTaskStart(log_) {
assert.equal(log, undefined)
@@ -32,36 +32,50 @@ describe('makeOnProgress()', function () {
assert.equal(events.length, 0)
let i = 0
await task.run(async () => {
assert.equal(events[0], 'onRootTaskStart')
assert.equal(events[1], 'onTaskUpdate')
assert.equal(log.name, 'task')
assert.equal(events[i++], 'onRootTaskStart')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.id, task.id)
assert.equal(log.properties.name, 'task')
assert(Math.abs(log.start - Date.now()) < 10)
Task.set('name', 'new name')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.name, 'new name')
Task.set('progress', 0)
assert.equal(events[2], 'onTaskUpdate')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.progress, 0)
Task.info('foo', {})
assert.equal(events[3], 'onTaskUpdate')
assert.equal(events[i++], 'onTaskUpdate')
assert.deepEqual(log.infos, [{ data: {}, message: 'foo' }])
await Task.run({ data: { name: 'subtask' } }, () => {
assert.equal(events[4], 'onTaskUpdate')
assert.equal(log.tasks[0].name, 'subtask')
const subtask = new Task({ properties: { name: 'subtask' } })
await subtask.run(() => {
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.tasks[0].properties.name, 'subtask')
Task.warning('bar', {})
assert.equal(events[5], 'onTaskUpdate')
assert.equal(events[i++], 'onTaskUpdate')
assert.deepEqual(log.tasks[0].warnings, [{ data: {}, message: 'bar' }])
subtask.abort()
assert.equal(events[i++], 'onTaskUpdate')
assert(Math.abs(log.tasks[0].abortionRequestedAt - Date.now()) < 10)
})
assert.equal(events[6], 'onTaskUpdate')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.tasks[0].status, 'success')
Task.set('progress', 100)
assert.equal(events[7], 'onTaskUpdate')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.progress, 100)
})
assert.equal(events[8], 'onRootTaskEnd')
assert.equal(events[9], 'onTaskUpdate')
assert.equal(events[i++], 'onRootTaskEnd')
assert.equal(events[i++], 'onTaskUpdate')
assert(Math.abs(log.end - Date.now()) < 10)
assert.equal(log.status, 'success')
})
})

View File

@@ -10,11 +10,10 @@ function define(object, property, value) {
const noop = Function.prototype
const ABORTED = 'aborted'
const FAILURE = 'failure'
const PENDING = 'pending'
const SUCCESS = 'success'
exports.STATUS = { ABORTED, FAILURE, PENDING, SUCCESS }
exports.STATUS = { FAILURE, PENDING, SUCCESS }
// stored in the global context so that various versions of the library can interact.
const asyncStorageKey = '@vates/task@0'
@@ -83,8 +82,8 @@ exports.Task = class Task {
return this.#status
}
constructor({ data = {}, onProgress } = {}) {
this.#startData = data
constructor({ properties, onProgress } = {}) {
this.#startData = { properties }
if (onProgress !== undefined) {
this.#onProgress = onProgress
@@ -105,12 +104,16 @@ exports.Task = class Task {
const { signal } = this.#abortController
signal.addEventListener('abort', () => {
if (this.status === PENDING && !this.#running) {
if (this.status === PENDING) {
this.#maybeStart()
const status = ABORTED
this.#status = status
this.#emit('end', { result: signal.reason, status })
this.#emit('abortionRequested', { reason: signal.reason })
if (!this.#running) {
const status = FAILURE
this.#status = status
this.#emit('end', { result: signal.reason, status })
}
}
})
}
@@ -156,9 +159,7 @@ exports.Task = class Task {
this.#running = false
return result
} catch (result) {
const { signal } = this.#abortController
const aborted = signal.aborted && result === signal.reason
const status = aborted ? ABORTED : FAILURE
const status = FAILURE
this.#status = status
this.#emit('end', { status, result })

View File

@@ -15,7 +15,7 @@ function assertEvent(task, expected, eventIndex = -1) {
assert.equal(typeof actual.id, 'string')
assert.equal(typeof actual.timestamp, 'number')
for (const keys of Object.keys(expected)) {
assert.equal(actual[keys], expected[keys])
assert.deepEqual(actual[keys], expected[keys])
}
}
@@ -30,10 +30,10 @@ function createTask(opts) {
describe('Task', function () {
describe('constructor', function () {
it('data properties are passed to the start event', async function () {
const data = { foo: 0, bar: 1 }
const task = createTask({ data })
const properties = { foo: 0, bar: 1 }
const task = createTask({ properties })
await task.run(noop)
assertEvent(task, { ...data, type: 'start' }, 0)
assertEvent(task, { type: 'start', properties }, 0)
})
})
@@ -79,20 +79,22 @@ describe('Task', function () {
})
.catch(noop)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result: reason }, 2)
})
it('does not abort if the task fails without the abort reason', async function () {
const task = createTask()
const reason = {}
const result = new Error()
await task
.run(() => {
task.abort({})
task.abort(reason)
throw result
})
@@ -100,18 +102,20 @@ describe('Task', function () {
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'failure', result }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result }, 2)
})
it('does not abort if the task succeed', async function () {
const task = createTask()
const reason = {}
const result = {}
await task
.run(() => {
task.abort({})
task.abort(reason)
return result
})
@@ -119,9 +123,10 @@ describe('Task', function () {
assert.equal(task.status, 'success')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'success', result }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'success', result }, 2)
})
it('aborts before task is running', function () {
@@ -130,11 +135,12 @@ describe('Task', function () {
task.abort(reason)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result: reason }, 2)
})
})
@@ -243,7 +249,7 @@ describe('Task', function () {
assert.equal(task.status, 'failure')
})
it('changes to aborted after run is complete', async function () {
it('changes to failure if aborted after run is complete', async function () {
const task = createTask()
await task
.run(() => {
@@ -252,13 +258,13 @@ describe('Task', function () {
Task.abortSignal.throwIfAborted()
})
.catch(noop)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
})
it('changes to aborted if aborted when not running', async function () {
it('changes to failure if aborted when not running', function () {
const task = createTask()
task.abort()
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
})
})

View File

@@ -13,7 +13,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.2",
"version": "0.2.0",
"engines": {
"node": ">=14"
},

View File

@@ -35,7 +35,7 @@
"test": "node--test"
},
"devDependencies": {
"sinon": "^15.0.1",
"sinon": "^17.0.1",
"test": "^3.2.1"
}
}

View File

@@ -1,5 +1,5 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { RemoteAdapter } from '@xen-orchestra/backups/RemoteAdapter.js'
import { RemoteAdapter } from '@xen-orchestra/backups/RemoteAdapter.mjs'
import { getSyncedHandler } from '@xen-orchestra/fs'
import getopts from 'getopts'
import { basename, dirname } from 'path'

View File

@@ -7,9 +7,9 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.36.1",
"@xen-orchestra/fs": "^3.3.4",
"filenamify": "^4.1.0",
"@xen-orchestra/backups": "^0.43.2",
"@xen-orchestra/fs": "^4.1.2",
"filenamify": "^6.0.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
"promise-toolbox": "^0.21.0"
@@ -27,7 +27,7 @@
"scripts": {
"postversion": "npm publish --access public"
},
"version": "1.0.6",
"version": "1.0.13",
"license": "AGPL-3.0-or-later",
"author": {
"name": "Vates SAS",

View File

@@ -1,307 +0,0 @@
'use strict'
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const pTimeout = require('promise-toolbox/timeout')
const { compileTemplate } = require('@xen-orchestra/template')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { extractIdsFromSimplePattern } = require('./extractIdsFromSimplePattern.js')
const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
const { Task } = require('./Task.js')
const { VmBackup } = require('./_VmBackup.js')
const { XoMetadataBackup } = require('./_XoMetadataBackup.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const noop = Function.prototype
const getAdaptersByRemote = adapters => {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}
const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
const DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
const DEFAULT_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}
exports.Backup = class Backup {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const { type } = job
const baseSettings = { ...DEFAULT_SETTINGS }
if (type === 'backup') {
Object.assign(baseSettings, DEFAULT_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
this.run = this._runVmBackup
} else if (type === 'metadataBackup') {
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
this.run = this._runMetadataBackup
} else {
throw new Error(`No runner for the backup type ${type}`)
}
Object.assign(baseSettings, job.settings[''])
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
async _runMetadataBackup() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
async _runVmBackup() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () =>
new VmBackup({
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}).run()
)
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}

View File

@@ -0,0 +1,17 @@
import { Metadata } from './_runners/Metadata.mjs'
import { VmsRemote } from './_runners/VmsRemote.mjs'
import { VmsXapi } from './_runners/VmsXapi.mjs'
export function createRunner(opts) {
const { type } = opts.job
switch (type) {
case 'backup':
return new VmsXapi(opts)
case 'mirrorBackup':
return new VmsRemote(opts)
case 'metadataBackup':
return new Metadata(opts)
default:
throw new Error(`No runner for the backup type ${type}`)
}
}
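A hedged sketch of selecting and running a runner; the import path and the option names besides `job` are assumptions carried over from the removed `Backup` constructor above:

import { createRunner } from '@xen-orchestra/backups/Backup.mjs'

// opts carries at least { config, getAdapter, getConnectedRecord, job, schedule };
// job.type picks the runner: 'backup' → VmsXapi, 'mirrorBackup' → VmsRemote,
// 'metadataBackup' → Metadata, anything else throws
export async function runBackupJob(opts) {
  const runner = createRunner(opts)
  return runner.run()
}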

View File

@@ -1,8 +1,6 @@
'use strict'
import { asyncMap } from '@xen-orchestra/async-map'
const { asyncMap } = require('@xen-orchestra/async-map')
exports.DurablePartition = class DurablePartition {
export class DurablePartition {
// private resource API is used exceptionally to be able to separate resource creation and release
#partitionDisposers = {}

View File

@@ -1,8 +1,6 @@
'use strict'
import { Task } from './Task.mjs'
const { Task } = require('./Task')
exports.HealthCheckVmBackup = class HealthCheckVmBackup {
export class HealthCheckVmBackup {
#restoredVm
#timeout
#xapi

View File

@@ -1,73 +0,0 @@
'use strict'
const assert = require('assert')
const { formatFilenameDate } = require('./_filenameDate.js')
const { importDeltaVm } = require('./_deltaVm.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
exports.ImportVmBackup = class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
this._adapter = adapter
this._importDeltaVmSettings = { newMacAddresses, mapVdisSrs }
this._metadata = metadata
this._srUuid = srUuid
this._xapi = xapi
}
async run() {
const adapter = this._adapter
const metadata = this._metadata
const isFull = metadata.mode === 'full'
const sizeContainer = { size: 0 }
let backup
if (isFull) {
backup = await adapter.readFullVmBackup(metadata)
watchStreamSize(backup, sizeContainer)
} else {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(this._importDeltaVmSettings.mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await adapter.readDeltaVmBackup(metadata, ignoredVdis)
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
return Task.run(
{
name: 'transfer',
},
async () => {
const xapi = this._xapi
const srRef = await xapi.call('SR.get_by_uuid', this._srUuid)
const vmRef = isFull
? await xapi.VM_import(backup, srRef)
: await importDeltaVm(backup, await xapi.getRecord('SR', srRef), {
...this._importDeltaVmSettings,
detectBase: false,
})
await Promise.all([
xapi.call('VM.add_tags', vmRef, 'restored from backup'),
xapi.call(
'VM.set_name_label',
vmRef,
`${metadata.vm.name_label} (${formatFilenameDate(metadata.timestamp)})`
),
])
return {
size: sizeContainer.size,
id: await xapi.getField('VM', vmRef, 'uuid'),
}
}
)
}
}

View File

@@ -0,0 +1,103 @@
import assert from 'node:assert'
import { formatFilenameDate } from './_filenameDate.mjs'
import { importIncrementalVm } from './_incrementalVm.mjs'
import { Task } from './Task.mjs'
import { watchStreamSize } from './_watchStreamSize.mjs'
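// note: the cache stores the *promise* of the opaque ref, so concurrent
// lookups of the same UUID share a single `${type}.get_by_uuid` call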
async function resolveUuid(xapi, cache, uuid, type) {
if (uuid == null) {
return uuid
}
const ref = cache.get(uuid)
if (ref === undefined) {
cache.set(uuid, xapi.call(`${type}.get_by_uuid`, uuid))
}
return cache.get(uuid)
}
export class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
this._adapter = adapter
this._importIncrementalVmSettings = { newMacAddresses, mapVdisSrs }
this._metadata = metadata
this._srUuid = srUuid
this._xapi = xapi
}
async #decorateIncrementalVmMetadata(backup) {
const { mapVdisSrs } = this._importIncrementalVmSettings
const xapi = this._xapi
const cache = new Map()
const mapVdisSrRefs = {}
for (const [vdiUuid, srUuid] of Object.entries(mapVdisSrs)) {
mapVdisSrRefs[vdiUuid] = await resolveUuid(xapi, cache, srUuid, 'SR')
}
const sr = await resolveUuid(xapi, cache, this._srUuid, 'SR')
Object.values(backup.vdis).forEach(vdi => {
vdi.SR = mapVdisSrRefs[vdi.uuid] ?? sr.$ref
})
return backup
}
async run() {
const adapter = this._adapter
const metadata = this._metadata
const isFull = metadata.mode === 'full'
const sizeContainer = { size: 0 }
const { mapVdisSrs, newMacAddresses } = this._importIncrementalVmSettings
let backup
if (isFull) {
backup = await adapter.readFullVmBackup(metadata)
watchStreamSize(backup, sizeContainer)
} else {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await this.#decorateIncrementalVmMetadata(await adapter.readIncrementalVmBackup(metadata, ignoredVdis))
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
return Task.run(
{
name: 'transfer',
},
async () => {
const xapi = this._xapi
const srRef = await xapi.call('SR.get_by_uuid', this._srUuid)
const vmRef = isFull
? await xapi.VM_import(backup, srRef)
: await importIncrementalVm(backup, await xapi.getRecord('SR', srRef), {
newMacAddresses,
})
await Promise.all([
xapi.call('VM.add_tags', vmRef, 'restored from backup'),
xapi.call(
'VM.set_name_label',
vmRef,
`${metadata.vm.name_label} (${formatFilenameDate(metadata.timestamp)})`
),
xapi.call(
'VM.set_name_description',
vmRef,
`Restored on ${formatFilenameDate(+new Date())} from ${adapter._handler._remote.name} -
${metadata.vm.name_description}
`
),
])
return {
size: sizeContainer.size,
id: await xapi.getField('VM', vmRef, 'uuid'),
}
}
)
}
}
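A restore sketch built from this constructor and `run()`; the adapter, metadata and xapi handles are assumed to come from the surrounding backup infrastructure, and the SR UUID is a hypothetical placeholder:

async function restoreVmBackup({ adapter, metadata, xapi }) {
  const importer = new ImportVmBackup({
    adapter,
    metadata, // mode 'full' or 'delta'
    srUuid: 'sr-uuid-placeholder', // destination SR
    xapi,
    settings: {
      mapVdisSrs: {}, // VDIs mapped to null are skipped, others go to the mapped SR
      newMacAddresses: false,
    },
  })
  // resolves with the transferred size and the UUID of the restored VM
  return importer.run() // { size, id }
}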

View File

@@ -1,43 +1,39 @@
'use strict'
import { asyncEach } from '@vates/async-each'
import { asyncMap, asyncMapSettled } from '@xen-orchestra/async-map'
import { compose } from '@vates/compose'
import { createLogger } from '@xen-orchestra/log'
import { createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } from 'vhd-lib'
import { decorateMethodsWith } from '@vates/decorate-with'
import { deduped } from '@vates/disposable/deduped.js'
import { dirname, join, resolve } from 'node:path'
import { execFile } from 'child_process'
import { mount } from '@vates/fuse-vhd'
import { readdir, lstat } from 'node:fs/promises'
import { synchronized } from 'decorator-synchronized'
import { v4 as uuidv4 } from 'uuid'
import { ZipFile } from 'yazl'
import Disposable from 'promise-toolbox/Disposable'
import fromCallback from 'promise-toolbox/fromCallback'
import fromEvent from 'promise-toolbox/fromEvent'
import groupBy from 'lodash/groupBy.js'
import pDefer from 'promise-toolbox/defer'
import pickBy from 'lodash/pickBy.js'
import tar from 'tar'
import zlib from 'zlib'
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { synchronized } = require('decorator-synchronized')
const Disposable = require('promise-toolbox/Disposable')
const fromCallback = require('promise-toolbox/fromCallback')
const fromEvent = require('promise-toolbox/fromEvent')
const pDefer = require('promise-toolbox/defer')
const groupBy = require('lodash/groupBy.js')
const pickBy = require('lodash/pickBy.js')
const { dirname, join, normalize, resolve } = require('path')
const { createLogger } = require('@xen-orchestra/log')
const { createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } = require('vhd-lib')
const { deduped } = require('@vates/disposable/deduped.js')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { compose } = require('@vates/compose')
const { execFile } = require('child_process')
const { readdir, lstat } = require('fs-extra')
const { v4: uuidv4 } = require('uuid')
const { ZipFile } = require('yazl')
const zlib = require('zlib')
import { BACKUP_DIR } from './_getVmBackupDir.mjs'
import { cleanVm } from './_cleanVm.mjs'
import { formatFilenameDate } from './_filenameDate.mjs'
import { getTmpDir } from './_getTmpDir.mjs'
import { isMetadataFile } from './_backupType.mjs'
import { isValidXva } from './_isValidXva.mjs'
import { listPartitions, LVM_PARTITION_TYPE } from './_listPartitions.mjs'
import { lvs, pvs } from './_lvm.mjs'
import { watchStreamSize } from './_watchStreamSize.mjs'
const { BACKUP_DIR } = require('./_getVmBackupDir.js')
const { cleanVm } = require('./_cleanVm.js')
const { formatFilenameDate } = require('./_filenameDate.js')
const { getTmpDir } = require('./_getTmpDir.js')
const { isMetadataFile } = require('./_backupType.js')
const { isValidXva } = require('./_isValidXva.js')
const { listPartitions, LVM_PARTITION_TYPE } = require('./_listPartitions.js')
const { lvs, pvs } = require('./_lvm.js')
const { watchStreamSize } = require('./_watchStreamSize')
// TODO: this import is marked extraneous, should be fixed when the lib is published
const { mount } = require('@vates/fuse-vhd')
const { asyncEach } = require('@vates/async-each')
export const DIR_XO_CONFIG_BACKUPS = 'xo-config-backups'
const DIR_XO_CONFIG_BACKUPS = 'xo-config-backups'
exports.DIR_XO_CONFIG_BACKUPS = DIR_XO_CONFIG_BACKUPS
const DIR_XO_POOL_METADATA_BACKUPS = 'xo-pool-metadata-backups'
exports.DIR_XO_POOL_METADATA_BACKUPS = DIR_XO_POOL_METADATA_BACKUPS
export const DIR_XO_POOL_METADATA_BACKUPS = 'xo-pool-metadata-backups'
const { debug, warn } = createLogger('xo:backups:RemoteAdapter')
@@ -46,20 +42,23 @@ const compareTimestamp = (a, b) => a.timestamp - b.timestamp
const noop = Function.prototype
const resolveRelativeFromFile = (file, path) => resolve('/', dirname(file), path).slice(1)
const makeRelative = path => resolve('/', path).slice(1)
const resolveSubpath = (root, path) => resolve(root, makeRelative(path))
const resolveSubpath = (root, path) => resolve(root, `.${resolve('/', path)}`)
async function addZipEntries(zip, realBasePath, virtualBasePath, relativePaths) {
for (const relativePath of relativePaths) {
const realPath = join(realBasePath, relativePath)
const virtualPath = join(virtualBasePath, relativePath)
async function addDirectory(files, realPath, metadataPath) {
const stats = await lstat(realPath)
if (stats.isDirectory()) {
await asyncMap(await readdir(realPath), file =>
addDirectory(files, realPath + '/' + file, metadataPath + '/' + file)
)
} else if (stats.isFile()) {
files.push({
realPath,
metadataPath,
})
const stats = await lstat(realPath)
const { mode, mtime } = stats
const opts = { mode, mtime }
if (stats.isDirectory()) {
zip.addEmptyDirectory(virtualPath, opts)
await addZipEntries(zip, realPath, virtualPath, await readdir(realPath))
} else if (stats.isFile()) {
zip.addFile(realPath, virtualPath, opts)
}
}
}
@@ -76,7 +75,7 @@ const debounceResourceFactory = factory =>
return this._debounceResource(factory.apply(this, arguments))
}
class RemoteAdapter {
export class RemoteAdapter {
constructor(
handler,
{ debounceResource = res => res, dirMode, vhdDirectoryCompression, useGetDiskLegacy = false } = {}
@@ -187,17 +186,6 @@ class RemoteAdapter {
})
}
async *_usePartitionFiles(diskId, partitionId, paths) {
const path = yield this.getPartition(diskId, partitionId)
const files = []
await asyncMap(paths, file =>
addDirectory(files, resolveSubpath(path, file), normalize('./' + file).replace(/\/+$/, ''))
)
return files
}
// check if we will be allowed to merge a vhd created in this adapter
// with the vhd at path `path`
async isMergeableParent(packedParentUid, path) {
@@ -214,15 +202,24 @@ class RemoteAdapter {
})
}
fetchPartitionFiles(diskId, partitionId, paths) {
fetchPartitionFiles(diskId, partitionId, paths, format) {
const { promise, reject, resolve } = pDefer()
Disposable.use(
async function* () {
const files = yield this._usePartitionFiles(diskId, partitionId, paths)
const zip = new ZipFile()
files.forEach(({ realPath, metadataPath }) => zip.addFile(realPath, metadataPath))
zip.end()
const { outputStream } = zip
const path = yield this.getPartition(diskId, partitionId)
let outputStream
if (format === 'tgz') {
outputStream = tar.c({ cwd: path, gzip: true }, paths.map(makeRelative))
} else if (format === 'zip') {
const zip = new ZipFile()
await addZipEntries(zip, path, '', paths.map(makeRelative))
zip.end()
;({ outputStream } = zip)
} else {
throw new Error('unsupported format ' + format)
}
resolve(outputStream)
await fromEvent(outputStream, 'end')
}.bind(this)
@@ -333,7 +330,7 @@ class RemoteAdapter {
const RE_VHDI = /^vhdi(\d+)$/
const handler = this._handler
const diskPath = handler._getFilePath('/' + diskId)
const diskPath = handler.getFilePath('/' + diskId)
const mountDir = yield getTmpDir()
await fromCallback(execFile, 'vhdimount', [diskPath, mountDir])
try {
@@ -404,20 +401,27 @@ class RemoteAdapter {
return `${baseName}.vhd`
}
async listAllVmBackups() {
async listAllVms() {
const handler = this._handler
const backups = { __proto__: null }
await asyncMap(await handler.list(BACKUP_DIR), async entry => {
const vmsUuids = []
await asyncEach(await handler.list(BACKUP_DIR), async entry => {
// ignore hidden and lock files
if (entry[0] !== '.' && !entry.endsWith('.lock')) {
const vmBackups = await this.listVmBackups(entry)
if (vmBackups.length !== 0) {
backups[entry] = vmBackups
}
vmsUuids.push(entry)
}
})
return vmsUuids
}
async listAllVmBackups() {
const vmsUuids = await this.listAllVms()
const backups = { __proto__: null }
await asyncEach(vmsUuids, async vmUuid => {
const vmBackups = await this.listVmBackups(vmUuid)
if (vmBackups.length !== 0) {
backups[vmUuid] = vmBackups
}
})
return backups
}
@@ -677,11 +681,13 @@ class RemoteAdapter {
}
}
async outputStream(path, input, { checksum = true, validator = noop } = {}) {
async outputStream(path, input, { checksum = true, maxStreamLength, streamLength, validator = noop } = {}) {
const container = watchStreamSize(input)
await this._handler.outputStream(path, input, {
checksum,
dirMode: this._dirMode,
maxStreamLength,
streamLength,
async validator() {
await input.task
return validator.apply(this, arguments)
@@ -691,8 +697,8 @@ class RemoteAdapter {
}
// open the hierarchy of ancestors until we find a full one
async _createSyntheticStream(handler, path) {
const disposableSynthetic = await VhdSynthetic.fromVhdChain(handler, path)
async _createVhdStream(handler, path, { useChain }) {
const disposableSynthetic = useChain ? await VhdSynthetic.fromVhdChain(handler, path) : await openVhd(handler, path)
// I don't want the vhds to be disposed on return
// but only when the stream is done ( or failed )
@@ -717,7 +723,7 @@ class RemoteAdapter {
return stream
}
async readDeltaVmBackup(metadata, ignoredVdis) {
async readIncrementalVmBackup(metadata, ignoredVdis, { useChain = true } = {}) {
const handler = this._handler
const { vbds, vhds, vifs, vm, vmSnapshot } = metadata
const dir = dirname(metadata._filename)
@@ -725,7 +731,7 @@ class RemoteAdapter {
const streams = {}
await asyncMapSettled(Object.keys(vdis), async ref => {
streams[`${ref}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[ref]))
streams[`${ref}.vhd`] = await this._createVhdStream(handler, join(dir, vhds[ref]), { useChain })
})
return {
@@ -822,11 +828,7 @@ decorateMethodsWith(RemoteAdapter, {
debounceResourceFactory,
]),
_usePartitionFiles: Disposable.factory,
getDisk: compose([Disposable.factory, [deduped, diskId => [diskId]], debounceResourceFactory]),
getPartition: Disposable.factory,
})
exports.RemoteAdapter = RemoteAdapter
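The reworked `fetchPartitionFiles` now takes an output format; a hedged caller sketch, where `adapter`, the ids and the HTTP response are assumed to be provided by the surrounding API layer:

async function downloadPartitionFiles(adapter, diskId, partitionId, res) {
  // format must be 'tgz' or 'zip'; anything else rejects with 'unsupported format'
  const archive = await adapter.fetchPartitionFiles(diskId, partitionId, ['/etc'], 'tgz')
  archive.pipe(res)
}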

View File

@@ -1,26 +0,0 @@
'use strict'
const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
const { PATH_DB_DUMP } = require('./_PoolMetadataBackup.js')
exports.RestoreMetadataBackup = class RestoreMetadataBackup {
constructor({ backupId, handler, xapi }) {
this._backupId = backupId
this._handler = handler
this._xapi = xapi
}
async run() {
const backupId = this._backupId
const handler = this._handler
const xapi = this._xapi
if (backupId.split('/')[0] === DIR_XO_POOL_METADATA_BACKUPS) {
return xapi.putResource(await handler.createReadStream(`${backupId}/data`), PATH_DB_DUMP, {
task: xapi.task_create('Import pool metadata'),
})
} else {
return String(await handler.readFile(`${backupId}/data.json`))
}
}
}

View File

@@ -0,0 +1,32 @@
import { join, resolve } from 'node:path/posix'
import { DIR_XO_POOL_METADATA_BACKUPS } from './RemoteAdapter.mjs'
import { PATH_DB_DUMP } from './_runners/_PoolMetadataBackup.mjs'
export class RestoreMetadataBackup {
constructor({ backupId, handler, xapi }) {
this._backupId = backupId
this._handler = handler
this._xapi = xapi
}
async run() {
const backupId = this._backupId
const handler = this._handler
const xapi = this._xapi
if (backupId.split('/')[0] === DIR_XO_POOL_METADATA_BACKUPS) {
return xapi.putResource(await handler.createReadStream(`${backupId}/data`), PATH_DB_DUMP, {
task: xapi.task_create('Import pool metadata'),
})
} else {
const metadata = JSON.parse(await handler.readFile(join(backupId, 'metadata.json')))
const dataFileName = resolve(backupId, metadata.data ?? 'data.json')
const data = await handler.readFile(dataFileName)
// if data is JSON, send it as a plain string; otherwise consider the data binary and encode it
const isJson = dataFileName.endsWith('.json')
return isJson ? data.toString() : { encoding: 'base64', data: data.toString('base64') }
}
}
}
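A sketch of consuming the new return convention for XO metadata restores (JSON comes back as a plain string, any other payload is base64-wrapped); `restore` is a `RestoreMetadataBackup` instance:

async function readRestoredXoMetadata(restore) {
  const result = await restore.run()
  if (typeof result === 'string') {
    return JSON.parse(result) // *.json data passed through as a string
  }
  // binary payload: { encoding: 'base64', data: '…' }
  return Buffer.from(result.data, result.encoding)
}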

View File

@@ -1,7 +1,5 @@
'use strict'
const CancelToken = require('promise-toolbox/CancelToken')
const Zone = require('node-zone')
import CancelToken from 'promise-toolbox/CancelToken'
import Zone from 'node-zone'
const logAfterEnd = log => {
const error = new Error('task has already ended')
@@ -30,7 +28,7 @@ const serializeError = error =>
const $$task = Symbol('@xen-orchestra/backups/Task')
class Task {
export class Task {
static get cancelToken() {
const task = Zone.current.data[$$task]
return task !== undefined ? task.#cancelToken : CancelToken.none
@@ -151,7 +149,6 @@ class Task {
})
}
}
exports.Task = Task
for (const method of ['info', 'warning']) {
Task[method] = (...args) => Zone.current.data[$$task]?.[method](...args)

View File

@@ -1,515 +0,0 @@
'use strict'
const assert = require('assert')
const findLast = require('lodash/findLast.js')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const vhdStreamValidator = require('vhd-lib/vhdStreamValidator.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { pipeline } = require('node:stream')
const { DeltaBackupWriter } = require('./writers/DeltaBackupWriter.js')
const { DeltaReplicationWriter } = require('./writers/DeltaReplicationWriter.js')
const { exportDeltaVm } = require('./_deltaVm.js')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
const { FullBackupWriter } = require('./writers/FullBackupWriter.js')
const { FullReplicationWriter } = require('./writers/FullReplicationWriter.js')
const { getOldEntries } = require('./_getOldEntries.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
const { debug, warn } = createLogger('xo:backups:VmBackup')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
const forkDeltaExport = deltaExport =>
Object.create(deltaExport, {
streams: {
value: mapValues(deltaExport.streams, forkStreamUnpipe),
},
})
const noop = Function.prototype
class VmBackup {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job otherwise they
// will be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
this.vm = vm
const { tags } = this.vm
// VM (snapshot) that is really exported
this.exportedVm = undefined
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isDelta = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._isDelta
? [DeltaBackupWriter, DeltaReplicationWriter]
: [FullBackupWriter, FullReplicationWriter]
const allSettings = job.settings
Object.keys(remoteAdapters).forEach(remoteId => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(new BackupWriter({ backup: this, remoteId, settings: targetSettings }))
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(new ReplicationWriter({ backup: this, sr, settings: targetSettings }))
}
})
}
}
// calls fn for each writer, warns of any errors, and throws only if there are no writers left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { duration: step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own subtasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied on manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const { vm } = this
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const { vm } = this
const xapi = this._xapi
const settings = this._settings
const doSnapshot =
settings.unconditionalSnapshot ||
this._isDelta ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
if (doSnapshot) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this.exportedVm = await xapi.getRecord('VM', snapshotRef)
return this.exportedVm.uuid
})
} else {
this.exportedVm = vm
this.timestamp = Date.now()
}
}
async _copyDelta() {
const { exportedVm } = this
const baseVm = this._baseVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportDeltaVm(exportedVm, baseVm, {
fullVdisRequired,
})
// since NBD is network-based, if one disk uses NBD, they all do,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
sizeContainers,
timestamp,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _copyFull() {
const { compression } = this.job
const stream = this._throttleStream(
await this._xapi.VM_export(this.exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this.vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this.vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this.vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// check whether the current VM carries one of the health check tags
const { tags } = this.vm
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
return
}
await this._callWriters(writer => writer.healthCheck(this._healthCheckSr), 'writer.healthCheck()')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
if (this._isDelta) {
await this._selectBaseVm()
}
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const { vm } = this
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
await (this._isDelta ? this._copyDelta() : this._copyFull())
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
exports.VmBackup = VmBackup
decorateMethodsWith(VmBackup, {
run: defer,
})

View File

@@ -1,6 +0,0 @@
'use strict'
exports.isMetadataFile = filename => filename.endsWith('.json')
exports.isVhdFile = filename => filename.endsWith('.vhd')
exports.isXvaFile = filename => filename.endsWith('.xva')
exports.isXvaSumFile = filename => filename.endsWith('.xva.checksum')

View File

@@ -0,0 +1,4 @@
export const isMetadataFile = filename => filename.endsWith('.json')
export const isVhdFile = filename => filename.endsWith('.vhd')
export const isXvaFile = filename => filename.endsWith('.xva')
export const isXvaSumFile = filename => filename.endsWith('.xva.checksum')

View File

@@ -1,25 +1,25 @@
'use strict'
import { createLogger } from '@xen-orchestra/log'
import { catchGlobalErrors } from '@xen-orchestra/log/configure'
const logger = require('@xen-orchestra/log').createLogger('xo:backups:worker')
import Disposable from 'promise-toolbox/Disposable'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { compose } from '@vates/compose'
import { createCachedLookup } from '@vates/cached-dns.lookup'
import { createDebounceResource } from '@vates/disposable/debounceResource.js'
import { createRunner } from './Backup.mjs'
import { decorateMethodsWith } from '@vates/decorate-with'
import { deduped } from '@vates/disposable/deduped.js'
import { getHandler } from '@xen-orchestra/fs'
import { parseDuration } from '@vates/parse-duration'
import { Xapi } from '@xen-orchestra/xapi'
require('@xen-orchestra/log/configure').catchGlobalErrors(logger)
import { RemoteAdapter } from './RemoteAdapter.mjs'
import { Task } from './Task.mjs'
require('@vates/cached-dns.lookup').createCachedLookup().patchGlobal()
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { compose } = require('@vates/compose')
const { createDebounceResource } = require('@vates/disposable/debounceResource.js')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { deduped } = require('@vates/disposable/deduped.js')
const { getHandler } = require('@xen-orchestra/fs')
const { parseDuration } = require('@vates/parse-duration')
const { Xapi } = require('@xen-orchestra/xapi')
const { Backup } = require('./Backup.js')
const { RemoteAdapter } = require('./RemoteAdapter.js')
const { Task } = require('./Task.js')
createCachedLookup().patchGlobal()
const logger = createLogger('xo:backups:worker')
catchGlobalErrors(logger)
const { debug } = logger
class BackupWorker {
@@ -48,7 +48,7 @@ class BackupWorker {
}
run() {
return new Backup({
return createRunner({
config: this.#config,
getAdapter: remoteId => this.getAdapter(this.#remotes[remoteId]),
getConnectedRecord: Disposable.factory(async function* getConnectedRecord(type, uuid) {

View File

@@ -1,13 +1,11 @@
'use strict'
const cancelable = require('promise-toolbox/cancelable')
const CancelToken = require('promise-toolbox/CancelToken')
import cancelable from 'promise-toolbox/cancelable'
import CancelToken from 'promise-toolbox/CancelToken'
// Similar to `Promise.all` + `map` but passes a cancel token to the callback
//
// If any of the executions fails, the cancel token will be triggered and the
// first reason will be rejected.
exports.cancelableMap = cancelable(async function cancelableMap($cancelToken, iterable, callback) {
export const cancelableMap = cancelable(async function cancelableMap($cancelToken, iterable, callback) {
const { cancel, token } = CancelToken.source([$cancelToken])
try {
return await Promise.all(

View File

@@ -1,19 +1,19 @@
'use strict'
import test from 'test'
import { strict as assert } from 'node:assert'
const { beforeEach, afterEach, test, describe } = require('test')
const assert = require('assert').strict
import tmp from 'tmp'
import fs from 'fs-extra'
import * as uuid from 'uuid'
import { getHandler } from '@xen-orchestra/fs'
import { pFromCallback } from 'promise-toolbox'
import { RemoteAdapter } from './RemoteAdapter.mjs'
import { VHDFOOTER, VHDHEADER } from './tests.fixtures.mjs'
import { VhdFile, Constants, VhdDirectory, VhdAbstract } from 'vhd-lib'
import { checkAliases } from './_cleanVm.mjs'
import { dirname, basename } from 'node:path'
import { rimraf } from 'rimraf'
const tmp = require('tmp')
const fs = require('fs-extra')
const uuid = require('uuid')
const { getHandler } = require('@xen-orchestra/fs')
const { pFromCallback } = require('promise-toolbox')
const { RemoteAdapter } = require('./RemoteAdapter')
const { VHDFOOTER, VHDHEADER } = require('./tests.fixtures.js')
const { VhdFile, Constants, VhdDirectory, VhdAbstract } = require('vhd-lib')
const { checkAliases } = require('./_cleanVm')
const { dirname, basename } = require('path')
const { rimraf } = require('rimraf')
const { beforeEach, afterEach, describe } = test
let tempDir, adapter, handler, jobId, vdiId, basePath, relativePath
const rootPath = 'xo-vm-backups/VMUUID/'

View File

@@ -1,19 +1,18 @@
'use strict'
import * as UUID from 'uuid'
import sum from 'lodash/sum.js'
import { asyncMap } from '@xen-orchestra/async-map'
import { Constants, openVhd, VhdAbstract, VhdFile } from 'vhd-lib'
import { isVhdAlias, resolveVhdAlias } from 'vhd-lib/aliases.js'
import { dirname, resolve } from 'node:path'
import { isMetadataFile, isVhdFile, isXvaFile, isXvaSumFile } from './_backupType.mjs'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { mergeVhdChain } from 'vhd-lib/merge.js'
import { Task } from './Task.mjs'
import { Disposable } from 'promise-toolbox'
import handlerPath from '@xen-orchestra/fs/path'
const sum = require('lodash/sum')
const UUID = require('uuid')
const { asyncMap } = require('@xen-orchestra/async-map')
const { Constants, openVhd, VhdAbstract, VhdFile } = require('vhd-lib')
const { isVhdAlias, resolveVhdAlias } = require('vhd-lib/aliases')
const { dirname, resolve } = require('path')
const { DISK_TYPES } = Constants
const { isMetadataFile, isVhdFile, isXvaFile, isXvaSumFile } = require('./_backupType.js')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { mergeVhdChain } = require('vhd-lib/merge')
const { Task } = require('./Task.js')
const { Disposable } = require('promise-toolbox')
const handlerPath = require('@xen-orchestra/fs/path')
// checking the size of a vhd directory is costly
// 1 HTTP query per 1000 blocks
@@ -117,7 +116,7 @@ const listVhds = async (handler, vmDir, logWarn) => {
return { vhds, interruptedVhds, aliases }
}
async function checkAliases(
export async function checkAliases(
aliasPaths,
targetDataRepository,
{ handler, logInfo = noop, logWarn = console.warn, remove = false }
@@ -176,11 +175,9 @@ async function checkAliases(
})
}
exports.checkAliases = checkAliases
const defaultMergeLimiter = limitConcurrency(1)
exports.cleanVm = async function cleanVm(
export async function cleanVm(
vmDir,
{
fixMetadata,

View File

@@ -1,8 +0,0 @@
'use strict'
const { utcFormat, utcParse } = require('d3-time-format')
// Format a date in ISO 8601 in a safe way to be used in filenames
// (even on Windows).
exports.formatFilenameDate = utcFormat('%Y%m%dT%H%M%SZ')
exports.parseFilenameDate = utcParse('%Y%m%dT%H%M%SZ')

View File

@@ -0,0 +1,6 @@
import { utcFormat, utcParse } from 'd3-time-format'
// Format a date in ISO 8601 in a safe way to be used in filenames
// (even on Windows).
export const formatFilenameDate = utcFormat('%Y%m%dT%H%M%SZ')
export const parseFilenameDate = utcParse('%Y%m%dT%H%M%SZ')
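A quick illustration of the round-trip — the pattern contains no colons, which is what makes it safe in Windows filenames:

formatFilenameDate(new Date(Date.UTC(2023, 10, 23, 16, 28, 16)))
// → '20231123T162816Z'
parseFilenameDate('20231123T162816Z')
// → Date for 2023-11-23T16:28:16Z (or null if the string does not match)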

View File

@@ -1,6 +1,4 @@
'use strict'
// returns all entries except the most recent `retention` ones
exports.getOldEntries = function getOldEntries(retention, entries) {
export function getOldEntries(retention, entries) {
return entries === undefined ? [] : retention > 0 ? entries.slice(0, -retention) : entries
}
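In other words, `retention` is the number of most recent entries to keep; everything older is returned for deletion. A worked example:

getOldEntries(2, ['mon', 'tue', 'wed', 'thu']) // → ['mon', 'tue'] — keeps the 2 newest
getOldEntries(0, ['mon', 'tue']) // → ['mon', 'tue'] — retention 0 keeps nothing
getOldEntries(2, undefined) // → []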

View File

@@ -1,13 +1,11 @@
'use strict'
const Disposable = require('promise-toolbox/Disposable')
const { join } = require('path')
const { mkdir, rmdir } = require('fs-extra')
const { tmpdir } = require('os')
import Disposable from 'promise-toolbox/Disposable'
import { join } from 'node:path'
import { mkdir, rmdir } from 'node:fs/promises'
import { tmpdir } from 'os'
const MAX_ATTEMPTS = 3
exports.getTmpDir = async function getTmpDir() {
export async function getTmpDir() {
for (let i = 0; true; ++i) {
const path = join(tmpdir(), Math.random().toString(36).slice(2))
try {

View File

@@ -1,8 +0,0 @@
'use strict'
const BACKUP_DIR = 'xo-vm-backups'
exports.BACKUP_DIR = BACKUP_DIR
exports.getVmBackupDir = function getVmBackupDir(uuid) {
return `${BACKUP_DIR}/${uuid}`
}

View File

@@ -0,0 +1,5 @@
export const BACKUP_DIR = 'xo-vm-backups'
export function getVmBackupDir(uuid) {
return `${BACKUP_DIR}/${uuid}`
}

View File

@@ -1,39 +1,30 @@
'use strict'
import groupBy from 'lodash/groupBy.js'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import omit from 'lodash/omit.js'
import { asyncMap } from '@xen-orchestra/async-map'
import { CancelToken } from 'promise-toolbox'
import { compareVersions } from 'compare-versions'
import { createVhdStreamWithLength } from 'vhd-lib'
import { defer } from 'golike-defer'
const find = require('lodash/find.js')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const omit = require('lodash/omit.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { CancelToken } = require('promise-toolbox')
const { compareVersions } = require('compare-versions')
const { createVhdStreamWithLength } = require('vhd-lib')
const { defer } = require('golike-defer')
import { cancelableMap } from './_cancelableMap.mjs'
import { Task } from './Task.mjs'
import pick from 'lodash/pick.js'
const { cancelableMap } = require('./_cancelableMap.js')
const { Task } = require('./Task.js')
const pick = require('lodash/pick.js')
// in `other_config` of an incrementally replicated VM, contains the UUID of the source VM
export const TAG_BASE_DELTA = 'xo:base_delta'
const TAG_BASE_DELTA = 'xo:base_delta'
exports.TAG_BASE_DELTA = TAG_BASE_DELTA
// in `other_config` of an incrementally replicated VM, contains the UUID of the target SR used for replication
//
// added after the complete replication
export const TAG_BACKUP_SR = 'xo:backup:sr'
const TAG_COPY_SRC = 'xo:copy_of'
exports.TAG_COPY_SRC = TAG_COPY_SRC
// in other_config of VDIs of an incrementally replicated VM, contains the UUID of the source VDI
export const TAG_COPY_SRC = 'xo:copy_of'
const ensureArray = value => (value === undefined ? [] : Array.isArray(value) ? value : [value])
const resolveUuid = async (xapi, cache, uuid, type) => {
if (uuid == null) {
return uuid
}
let ref = cache.get(uuid)
if (ref === undefined) {
ref = await xapi.call(`${type}.get_by_uuid`, uuid)
cache.set(uuid, ref)
}
return ref
}
exports.exportDeltaVm = async function exportDeltaVm(
export async function exportIncrementalVm(
vm,
baseVm,
{
@@ -43,6 +34,7 @@ exports.exportDeltaVm = async function exportDeltaVm(
fullVdisRequired = new Set(),
disableBaseTags = false,
preferNbd,
} = {}
) {
// refs of VM's VDIs → base's VDIs.
@@ -90,6 +82,7 @@ exports.exportDeltaVm = async function exportDeltaVm(
baseRef: baseVdi?.$ref,
cancelToken,
format: 'vhd',
preferNbd,
})
})
@@ -143,47 +136,21 @@ exports.exportDeltaVm = async function exportDeltaVm(
)
}
exports.importDeltaVm = defer(async function importDeltaVm(
export const importIncrementalVm = defer(async function importIncrementalVm(
$defer,
deltaVm,
incrementalVm,
sr,
{ cancelToken = CancelToken.none, detectBase = true, mapVdisSrs = {}, newMacAddresses = false } = {}
{ cancelToken = CancelToken.none, newMacAddresses = false } = {}
) {
const { version } = deltaVm
const { version } = incrementalVm
if (compareVersions(version, '1.0.0') < 0) {
throw new Error(`Unsupported delta backup version: ${version}`)
}
const vmRecord = deltaVm.vm
const vmRecord = incrementalVm.vm
const xapi = sr.$xapi
let baseVm
if (detectBase) {
const remoteBaseVmUuid = vmRecord.other_config[TAG_BASE_DELTA]
if (remoteBaseVmUuid) {
baseVm = find(xapi.objects.all, obj => (obj = obj.other_config) && obj[TAG_COPY_SRC] === remoteBaseVmUuid)
if (!baseVm) {
throw new Error(`could not find the base VM (copy of ${remoteBaseVmUuid})`)
}
}
}
const cache = new Map()
const mapVdisSrRefs = {}
for (const [vdiUuid, srUuid] of Object.entries(mapVdisSrs)) {
mapVdisSrRefs[vdiUuid] = await resolveUuid(xapi, cache, srUuid, 'SR')
}
const baseVdis = {}
baseVm &&
baseVm.$VBDs.forEach(vbd => {
const vdi = vbd.$VDI
if (vdi !== undefined) {
baseVdis[vbd.VDI] = vbd.$VDI
}
})
const vdiRecords = deltaVm.vdis
const vdiRecords = incrementalVm.vdis
// 0. Create suspend_VDI
let suspendVdi
@@ -194,18 +161,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
vm: pick(vmRecord, 'uuid', 'name_label', 'suspend_VDI'),
})
} else {
suspendVdi = await xapi.getRecord(
'VDI',
await xapi.VDI_create({
...vdi,
other_config: {
...vdi.other_config,
[TAG_BASE_DELTA]: undefined,
[TAG_COPY_SRC]: vdi.uuid,
},
sr: mapVdisSrRefs[vdi.uuid] ?? sr.$ref,
})
)
suspendVdi = await xapi.getRecord('VDI', await xapi.VDI_create(vdi))
$defer.onFailure(() => suspendVdi.$destroy())
}
}
@@ -223,10 +179,6 @@ exports.importDeltaVm = defer(async function importDeltaVm(
ha_always_run: false,
is_a_template: false,
name_label: '[Importing…] ' + vmRecord.name_label,
other_config: {
...vmRecord.other_config,
[TAG_COPY_SRC]: vmRecord.uuid,
},
},
{
bios_strings: vmRecord.bios_strings,
@@ -240,21 +192,15 @@ exports.importDeltaVm = defer(async function importDeltaVm(
await asyncMap(await xapi.getField('VM', vmRef, 'VBDs'), ref => ignoreErrors.call(xapi.call('VBD.destroy', ref)))
// 3. Create VDIs & VBDs.
const vbdRecords = deltaVm.vbds
const vbdRecords = incrementalVm.vbds
const vbds = groupBy(vbdRecords, 'VDI')
const newVdis = {}
await asyncMap(Object.keys(vdiRecords), async vdiRef => {
const vdi = vdiRecords[vdiRef]
let newVdi
const remoteBaseVdiUuid = detectBase && vdi.other_config[TAG_BASE_DELTA]
if (remoteBaseVdiUuid) {
const baseVdi = find(baseVdis, vdi => vdi.other_config[TAG_COPY_SRC] === remoteBaseVdiUuid)
if (!baseVdi) {
throw new Error(`missing base VDI (copy of ${remoteBaseVdiUuid})`)
}
newVdi = await xapi.getRecord('VDI', await baseVdi.$clone())
if (vdi.baseVdi !== undefined) {
newVdi = await xapi.getRecord('VDI', await vdi.baseVdi.$clone())
$defer.onFailure(() => newVdi.$destroy())
await newVdi.update_other_config(TAG_COPY_SRC, vdi.uuid)
@@ -265,18 +211,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
// suspendVdi has already been created
newVdi = suspendVdi
} else {
newVdi = await xapi.getRecord(
'VDI',
await xapi.VDI_create({
...vdi,
other_config: {
...vdi.other_config,
[TAG_BASE_DELTA]: undefined,
[TAG_COPY_SRC]: vdi.uuid,
},
SR: mapVdisSrRefs[vdi.uuid] ?? sr.$ref,
})
)
newVdi = await xapi.getRecord('VDI', await xapi.VDI_create(vdi))
$defer.onFailure(() => newVdi.$destroy())
}
@@ -309,7 +244,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}
})
const { streams } = deltaVm
const { streams } = incrementalVm
await Promise.all([
// Import VDI contents.
@@ -321,12 +256,14 @@ exports.importDeltaVm = defer(async function importDeltaVm(
if (stream.length === undefined) {
stream = await createVhdStreamWithLength(stream)
}
await xapi.setField('VDI', vdi.$ref, 'name_label', `[Importing] ${vdiRecords[id].name_label}`)
await vdi.$importContent(stream, { cancelToken, format: 'vhd' })
await xapi.setField('VDI', vdi.$ref, 'name_label', vdiRecords[id].name_label)
}
}),
// Create VIFs.
asyncMap(Object.values(deltaVm.vifs), vif => {
asyncMap(Object.values(incrementalVm.vifs), vif => {
let network = vif.$network$uuid && xapi.getObjectByUuid(vif.$network$uuid, undefined)
if (network === undefined) {
@@ -358,8 +295,8 @@ exports.importDeltaVm = defer(async function importDeltaVm(
])
await Promise.all([
deltaVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', deltaVm.vm.name_label),
incrementalVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', incrementalVm.vm.name_label),
])
return vmRef

View File

@@ -1,6 +1,4 @@
'use strict'
const assert = require('assert')
import assert from 'node:assert'
const COMPRESSED_MAGIC_NUMBERS = [
// https://tools.ietf.org/html/rfc1952.html#page-5
@@ -47,7 +45,7 @@ const isValidTar = async (handler, size, fd) => {
}
// TODO: find a heuristic for compressed files
async function isValidXva(path) {
export async function isValidXva(path) {
const handler = this._handler
// the size is longer when encrypted, and reading part of an encrypted file is not implemented
@@ -74,6 +72,5 @@ async function isValidXva(path) {
return true
}
}
exports.isValidXva = isValidXva
const noop = Function.prototype

View File

@@ -1,9 +1,7 @@
'use strict'
const fromCallback = require('promise-toolbox/fromCallback')
const { createLogger } = require('@xen-orchestra/log')
const { createParser } = require('parse-pairs')
const { execFile } = require('child_process')
import fromCallback from 'promise-toolbox/fromCallback'
import { createLogger } from '@xen-orchestra/log'
import { createParser } from 'parse-pairs'
import { execFile } from 'child_process'
const { debug } = createLogger('xo:backups:listPartitions')
@@ -24,8 +22,7 @@ const IGNORED_PARTITION_TYPES = {
0x82: true, // swap
}
const LVM_PARTITION_TYPE = 0x8e
exports.LVM_PARTITION_TYPE = LVM_PARTITION_TYPE
export const LVM_PARTITION_TYPE = 0x8e
const parsePartxLine = createParser({
keyTransform: key => (key === 'UUID' ? 'id' : key.toLowerCase()),
@@ -33,7 +30,7 @@ const parsePartxLine = createParser({
})
// returns an empty array in case of a non-partitioned disk
exports.listPartitions = async function listPartitions(devicePath) {
export async function listPartitions(devicePath) {
const parts = await fromCallback(execFile, 'partx', [
'--bytes',
'--output=NR,START,SIZE,NAME,UUID,TYPE',

View File

@@ -1,8 +1,6 @@
'use strict'
const fromCallback = require('promise-toolbox/fromCallback')
const { createParser } = require('parse-pairs')
const { execFile } = require('child_process')
import fromCallback from 'promise-toolbox/fromCallback'
import { createParser } from 'parse-pairs'
import { execFile } from 'child_process'
// ===================================================================
@@ -29,5 +27,5 @@ const makeFunction =
.map(Array.isArray(fields) ? parse : line => parse(line)[fields])
}
exports.lvs = makeFunction('lvs')
exports.pvs = makeFunction('pvs')
export const lvs = makeFunction('lvs')
export const pvs = makeFunction('pvs')

View File

@@ -0,0 +1,132 @@
import { asyncMap } from '@xen-orchestra/async-map'
import Disposable from 'promise-toolbox/Disposable'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { PoolMetadataBackup } from './_PoolMetadataBackup.mjs'
import { XoMetadataBackup } from './_XoMetadataBackup.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
export const Metadata = class MetadataBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
}

View File

@@ -0,0 +1,96 @@
import { asyncMapSettled } from '@xen-orchestra/async-map'
import Disposable from 'promise-toolbox/Disposable'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { Task } from '../Task.mjs'
import createStreamThrottle from './_createStreamThrottle.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
import { FullRemote } from './_vmRunners/FullRemote.mjs'
import { IncrementalRemote } from './_vmRunners/IncrementalRemote.mjs'
const DEFAULT_REMOTE_VM_SETTINGS = {
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
timeout: 0,
validateVhdStreams: false,
vmTimeout: 0,
}
export const VmsRemote = class RemoteVmsBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_REMOTE_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
() => this._getAdapter(job.sourceRemote),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
Disposable.all(
extractIdsFromSimplePattern(job.remotes).map(id => id !== job.sourceRemote && this._getAdapter(id))
),
async ({ adapter: sourceRemoteAdapter }, healthCheckSr, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => !!_)
if (remoteAdapters.length === 0) {
return
}
const vmsUuids = await sourceRemoteAdapter.listAllVms()
Task.info('vms', { vms: vmsUuids })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
const opts = {
baseSettings,
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vmUuid] },
sourceRemoteAdapter,
throttleStream,
vmUuid,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalRemote(opts)
} else if (job.mode === 'full') {
vmBackup = new FullRemote(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented for mirror backup`)
}
return runTask(taskStart, () => vmBackup.run())
}
const { concurrency } = settings
await asyncMapSettled(vmsUuids, !concurrency ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}

View File

@@ -0,0 +1,137 @@
import { asyncMapSettled } from '@xen-orchestra/async-map'
import Disposable from 'promise-toolbox/Disposable'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { Task } from '../Task.mjs'
import createStreamThrottle from './_createStreamThrottle.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
import { IncrementalXapi } from './_vmRunners/IncrementalXapi.mjs'
import { FullXapi } from './_vmRunners/FullXapi.mjs'
const DEFAULT_XAPI_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
diskPerVmConcurrency: 0, // not limited by default
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
export const VmsXapi = class VmsXapiBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_XAPI_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () => {
const opts = {
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalXapi(opts)
} else {
if (job.mode === 'full') {
vmBackup = new FullXapi(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented`)
}
}
return vmBackup.run()
})
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}

View File

@@ -0,0 +1,49 @@
import Disposable from 'promise-toolbox/Disposable'
import pTimeout from 'promise-toolbox/timeout'
import { compileTemplate } from '@xen-orchestra/template'
import { runTask } from './_runTask.mjs'
import { RemoteTimeoutError } from './_RemoteTimeoutError.mjs'
export const DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
export const Abstract = class AbstractRunner {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const baseSettings = this._computeBaseSettings(config, job)
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
}

View File

@@ -1,16 +1,13 @@
'use strict'
import { asyncMap } from '@xen-orchestra/async-map'
const { asyncMap } = require('@xen-orchestra/async-map')
import { DIR_XO_POOL_METADATA_BACKUPS } from '../RemoteAdapter.mjs'
import { forkStreamUnpipe } from './_forkStreamUnpipe.mjs'
import { formatFilenameDate } from '../_filenameDate.mjs'
import { Task } from '../Task.mjs'
const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
const { formatFilenameDate } = require('./_filenameDate.js')
const { Task } = require('./Task.js')
export const PATH_DB_DUMP = '/pool/xmldbdump'
const PATH_DB_DUMP = '/pool/xmldbdump'
exports.PATH_DB_DUMP = PATH_DB_DUMP
exports.PoolMetadataBackup = class PoolMetadataBackup {
export class PoolMetadataBackup {
constructor({ config, job, pool, remoteAdapters, schedule, settings }) {
this._config = config
this._job = job

View File

@@ -0,0 +1,6 @@
export class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}

View File

@@ -1,12 +1,11 @@
'use strict'
import { asyncMap } from '@xen-orchestra/async-map'
import { join } from '@xen-orchestra/fs/path'
const { asyncMap } = require('@xen-orchestra/async-map')
import { DIR_XO_CONFIG_BACKUPS } from '../RemoteAdapter.mjs'
import { formatFilenameDate } from '../_filenameDate.mjs'
import { Task } from '../Task.mjs'
const { DIR_XO_CONFIG_BACKUPS } = require('./RemoteAdapter.js')
const { formatFilenameDate } = require('./_filenameDate.js')
const { Task } = require('./Task.js')
exports.XoMetadataBackup = class XoMetadataBackup {
export class XoMetadataBackup {
constructor({ config, job, remoteAdapters, schedule, settings }) {
this._config = config
this._job = job
@@ -23,10 +22,17 @@ exports.XoMetadataBackup = class XoMetadataBackup {
const dir = `${scheduleDir}/${formatFilenameDate(timestamp)}`
const data = job.xoMetadata
const fileName = `${dir}/data.json`
let dataBaseName = './data'
// JSON data is sent as a plain string, binary data is sent as an object with `data` and `encoding` properties
const isJson = typeof data === 'string'
if (isJson) {
dataBaseName += '.json'
}
const metadata = JSON.stringify(
{
data: dataBaseName,
jobId: job.id,
jobName: job.name,
scheduleId: schedule.id,
@@ -36,6 +42,8 @@ exports.XoMetadataBackup = class XoMetadataBackup {
null,
2
)
const dataFileName = join(dir, dataBaseName)
const metaDataFileName = `${dir}/metadata.json`
await asyncMap(
@@ -52,7 +60,7 @@ exports.XoMetadataBackup = class XoMetadataBackup {
async () => {
const handler = adapter.handler
const dirMode = this._config.dirMode
await handler.outputFile(fileName, data, { dirMode })
await handler.outputFile(dataFileName, isJson ? data : Buffer.from(data.data, data.encoding), { dirMode })
await handler.outputFile(metaDataFileName, metadata, {
dirMode,
})

View File

@@ -1,12 +1,10 @@
'use strict'
const { pipeline } = require('node:stream')
const { ThrottleGroup } = require('@kldzj/stream-throttle')
const identity = require('lodash/identity.js')
import { pipeline } from 'node:stream'
import { ThrottleGroup } from '@kldzj/stream-throttle'
import identity from 'lodash/identity.js'
const noop = Function.prototype
module.exports = function createStreamThrottle(rate) {
export default function createStreamThrottle(rate) {
if (rate === 0) {
return identity
}
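As used throughout the runners (`this._throttleStream(stream)`), the returned function wraps a stream so that all streams of a run share one rate limit. A hedged usage sketch:

import createStreamThrottle from './_createStreamThrottle.mjs'

// cap all export streams of this run at ~10 MiB/s (0 disables throttling)
const throttleStream = createStreamThrottle(10 * 1024 * 1024)
const limited = throttleStream(someReadableStream) // `someReadableStream` is hypothetical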

View File

@@ -1,14 +1,13 @@
'use strict'
import { createLogger } from '@xen-orchestra/log'
import { finished, PassThrough } from 'node:stream'
const { finished, PassThrough } = require('node:stream')
const { debug } = require('@xen-orchestra/log').createLogger('xo:backups:forkStreamUnpipe')
const { debug } = createLogger('xo:backups:forkStreamUnpipe')
// create a new readable stream from an existing one which may be piped later
//
// in case of error in the new readable stream, it will simply be unpiped
// from the original one
exports.forkStreamUnpipe = function forkStreamUnpipe(source) {
export function forkStreamUnpipe(source) {
const { forks = 0 } = source
source.forks = forks + 1

View File

@@ -0,0 +1,7 @@
export function getAdaptersByRemote(adapters) {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}

View File

@@ -0,0 +1,5 @@
import { Task } from '../Task.mjs'
const noop = Function.prototype
export const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs

View File

@@ -0,0 +1,52 @@
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { AbstractRemote } from './_AbstractRemote.mjs'
import { FullRemoteWriter } from '../_writers/FullRemoteWriter.mjs'
import { forkStreamUnpipe } from '../_forkStreamUnpipe.mjs'
import { watchStreamSize } from '../../_watchStreamSize.mjs'
import { Task } from '../../Task.mjs'
export const FullRemote = class FullRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return FullRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'full')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
const stream = await this._sourceRemoteAdapter.readFullVmBackup(metadata)
const sizeContainer = watchStreamSize(stream)
// @todo shouldn't transfer backup if it will be deleted by retention policy (higher retention on source than destination)
await this._callWriters(
writer =>
writer.run({
stream: forkStreamUnpipe(stream),
// stream will be forked and transformed, it's not safe to attach additional properties to it
streamLength: stream.length,
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
sizeContainer,
}),
'writer.run()'
)
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
decorateMethodsWith(FullRemote, {
_run: defer,
})

View File

@@ -0,0 +1,75 @@
import { createLogger } from '@xen-orchestra/log'
import { forkStreamUnpipe } from '../_forkStreamUnpipe.mjs'
import { FullRemoteWriter } from '../_writers/FullRemoteWriter.mjs'
import { FullXapiWriter } from '../_writers/FullXapiWriter.mjs'
import { watchStreamSize } from '../../_watchStreamSize.mjs'
import { AbstractXapi } from './_AbstractXapi.mjs'
const { debug } = createLogger('xo:backups:FullXapiVmBackup')
export const FullXapi = class FullXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [FullRemoteWriter, FullXapiWriter]
}
_mustDoSnapshot() {
const vm = this._vm
const settings = this._settings
return (
settings.unconditionalSnapshot ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
)
}
_selectBaseVm() {}
async _copy() {
const { compression } = this.job
const vm = this._vm
const exportedVm = this._exportedVm
const stream = this._throttleStream(
await this._xapi.VM_export(exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const vdis = await exportedVm.$getDisks()
let maxStreamLength = 1024 * 1024 // OVF file and tar headers are a few KB, let's stay safe
for (const vdiRef of vdis) {
const vdi = await this._xapi.getRecord('VDI', vdiRef)
// the size of a fully allocated VDI will be exactly virtual_size; in general this grossly
// overestimates the real stream size, since a disk is never completely full
// vdi.physical_size seems to greatly underestimate the real disk usage of a VDI, as of 2023-10-30
maxStreamLength += vdi.virtual_size
}
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
maxStreamLength,
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
}

View File

@@ -0,0 +1,66 @@
import { asyncEach } from '@vates/async-each'
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import assert from 'node:assert'
import isVhdDifferencingDisk from 'vhd-lib/isVhdDifferencingDisk.js'
import mapValues from 'lodash/mapValues.js'
import { AbstractRemote } from './_AbstractRemote.mjs'
import { forkDeltaExport } from './_forkDeltaExport.mjs'
import { IncrementalRemoteWriter } from '../_writers/IncrementalRemoteWriter.mjs'
import { Task } from '../../Task.mjs'
class IncrementalRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return IncrementalRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'delta')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
assert.strictEqual(metadata.mode, 'delta')
await this._callWriters(writer => writer.prepare({ isBase: metadata.isBase }), 'writer.prepare()')
const incrementalExport = await this._sourceRemoteAdapter.readIncrementalVmBackup(metadata, undefined, {
useChain: false,
})
const differentialVhds = {}
await asyncEach(Object.entries(incrementalExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
incrementalExport.streams = mapValues(incrementalExport.streams, this._throttleStream)
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(incrementalExport),
differentialVhds,
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
}),
'writer.transfer()'
)
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
export const IncrementalRemote = IncrementalRemoteVmBackupRunner
decorateMethodsWith(IncrementalRemoteVmBackupRunner, {
_run: defer,
})

View File

@@ -0,0 +1,174 @@
import { asyncEach } from '@vates/async-each'
import { asyncMap } from '@xen-orchestra/async-map'
import { createLogger } from '@xen-orchestra/log'
import { pipeline } from 'node:stream'
import findLast from 'lodash/findLast.js'
import isVhdDifferencingDisk from 'vhd-lib/isVhdDifferencingDisk.js'
import keyBy from 'lodash/keyBy.js'
import mapValues from 'lodash/mapValues.js'
import vhdStreamValidator from 'vhd-lib/vhdStreamValidator.js'
import { AbstractXapi } from './_AbstractXapi.mjs'
import { exportIncrementalVm } from '../../_incrementalVm.mjs'
import { forkDeltaExport } from './_forkDeltaExport.mjs'
import { IncrementalRemoteWriter } from '../_writers/IncrementalRemoteWriter.mjs'
import { IncrementalXapiWriter } from '../_writers/IncrementalXapiWriter.mjs'
import { Task } from '../../Task.mjs'
import { watchStreamSize } from '../../_watchStreamSize.mjs'
const { debug } = createLogger('xo:backups:IncrementalXapiVmBackup')
const noop = Function.prototype
export const IncrementalXapi = class IncrementalXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [IncrementalRemoteWriter, IncrementalXapiWriter]
}
_mustDoSnapshot() {
return true
}
async _copy() {
const baseVm = this._baseVm
const vm = this._vm
const exportedVm = this._exportedVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportIncrementalVm(exportedVm, baseVm, {
fullVdisRequired,
preferNbd: this._settings.preferNbd,
})
// since NBD is network-based, if one disk uses NBD, they all do,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const differentialVhds = {}
// since isVhdDifferencingDisk reads and unshifts data in the stream,
// it must be done BEFORE any other stream transform
await asyncEach(Object.entries(deltaExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
differentialVhds,
sizeContainers,
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this._vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
}

View File

@@ -0,0 +1,93 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { createLogger } from '@xen-orchestra/log'
import { Task } from '../../Task.mjs'
const { debug, warn } = createLogger('xo:backups:AbstractVmRunner')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
export const Abstract = class AbstractVmBackupRunner {
// calls fn for each writer, warns of any errors, and throws only if no writers are left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own sub-tasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// check whether the current VM carries one of the health check tags
const tags = this._tags
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
// create a task so that an info entry appears in the logs and reports
return Task.run(
{
name: 'health check',
},
() => {
Task.info(`This VM doesn't match the health check's tags for this schedule`)
}
)
}
await this._callWriters(writer => writer.healthCheck(), 'writer.healthCheck()')
}
}

View File

@@ -0,0 +1,99 @@
import { asyncEach } from '@vates/async-each'
import { Disposable } from 'promise-toolbox'
import { getVmBackupDir } from '../../_getVmBackupDir.mjs'
import { Abstract } from './_Abstract.mjs'
import { extractIdsFromSimplePattern } from '../../extractIdsFromSimplePattern.mjs'
export const AbstractRemote = class AbstractRemoteVmBackupRunner extends Abstract {
constructor({
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings,
sourceRemoteAdapter,
throttleStream,
vmUuid,
}) {
super()
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
this._healthCheckSr = healthCheckSr
this._sourceRemoteAdapter = sourceRemoteAdapter
this._throttleStream = throttleStream
this._vmUuid = vmUuid
const allSettings = job.settings
const writers = new Set()
this._writers = writers
const RemoteWriter = this._getRemoteWriter()
extractIdsFromSimplePattern(job.remotes).forEach(remoteId => {
const adapter = remoteAdapters[remoteId]
const targetSettings = {
...settings,
...allSettings[remoteId],
}
writers.add(
new RemoteWriter({
adapter,
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid,
remoteId,
settings: targetSettings,
})
)
})
}
async _computeTransferList(predicate) {
const vmBackups = await this._sourceRemoteAdapter.listVmBackups(this._vmUuid, predicate)
const localMetadata = new Map()
Object.values(vmBackups).forEach(metadata => {
const timestamp = metadata.timestamp
localMetadata.set(timestamp, metadata)
})
const nbRemotes = Object.keys(this.remoteAdapters).length
const remoteMetadatas = {}
await asyncEach(Object.values(this.remoteAdapters), async remoteAdapter => {
const remoteMetadata = await remoteAdapter.listVmBackups(this._vmUuid, predicate)
remoteMetadata.forEach(metadata => {
const timestamp = metadata.timestamp
remoteMetadatas[timestamp] = (remoteMetadatas[timestamp] ?? 0) + 1
})
})
let chain = []
const timestamps = [...localMetadata.keys()]
timestamps.sort()
for (const timestamp of timestamps) {
if (remoteMetadatas[timestamp] !== nbRemotes) {
// this backup is missing on at least one remote
// it must be retransferred if no more recent fully mirrored backup is found
chain.push(localMetadata.get(timestamp))
} else {
// backup is present locally and on every remote: the chain up to here has already been transferred
chain = []
}
}
return chain
}
async run() {
const handler = this._sourceRemoteAdapter._handler
await Disposable.use(await handler.lock(getVmBackupDir(this._vmUuid)), async () => {
await this._run()
await this._healthCheck()
})
}
}
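
Note: a sketch, on made-up data, of the rule implemented by _computeTransferList above — walking local backups oldest to newest, a backup already present on every remote clears the pending chain, so what remains is every local backup newer than the most recent fully mirrored one:

const local = [100, 200, 300] // timestamps of local backups, sorted ascending
const presentOn = { 100: 2, 200: 1, 300: 0 } // number of remotes holding each backup
const nbRemotes = 2

let chain = []
for (const timestamp of local) {
  if (presentOn[timestamp] !== nbRemotes) {
    chain.push(timestamp) // missing on at least one remote: must be (re)transferred
  } else {
    chain = [] // fully mirrored: everything up to here can be skipped
  }
}
console.log(chain) // → [ 200, 300 ]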

View File

@@ -0,0 +1,288 @@
import assert from 'node:assert'
import groupBy from 'lodash/groupBy.js'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { asyncMap } from '@xen-orchestra/async-map'
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { formatDateTime } from '@xen-orchestra/xapi'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
import { Abstract } from './_Abstract.mjs'
export const AbstractXapi = class AbstractXapiVmBackupRunner extends Abstract {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
super()
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job, otherwise they
// would be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
const currentOperations = Object.values(vm.current_operations)
if (currentOperations.some(_ => _ === 'migrate_send' || _ === 'pool_migrate')) {
throw new Error('cannot backup a VM currently being migrated')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
const tags = (this._tags = vm.tags)
// VM (snapshot) that is really exported
this._exportedVm = undefined
this._vm = vm
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isIncremental = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._getWriters()
const allSettings = job.settings
Object.entries(remoteAdapters).forEach(([remoteId, adapter]) => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(
new BackupWriter({
adapter,
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid: vm.uuid,
remoteId,
settings: targetSettings,
})
)
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(
new ReplicationWriter({
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid: vm.uuid,
sr,
settings: targetSettings,
})
)
}
})
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied on manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const vm = this._vm
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const vm = this._vm
const xapi = this._xapi
const settings = this._settings
if (this._mustDoSnapshot()) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this._exportedVm = await xapi.getRecord('VM', snapshotRef)
return this._exportedVm.uuid
})
} else {
this._exportedVm = vm
this.timestamp = Date.now()
}
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this._vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this._vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async _copy() {
throw new Error('Not implemented')
}
_getWriters() {
throw new Error('Not implemented')
}
_mustDoSnapshot() {
throw new Error('Not implemented')
}
async _selectBaseVm() {
throw new Error('Not implemented')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
await this._selectBaseVm()
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const vm = this._vm
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
const { pool_migrate = null, migrate_send = null } = this._exportedVm.blocked_operations
const reason = 'VM migration is blocked during backup'
await this._exportedVm.update_blocked_operations({ pool_migrate: reason, migrate_send: reason })
try {
await this._copy()
} finally {
await this._exportedVm.update_blocked_operations({ pool_migrate, migrate_send })
}
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
decorateMethodsWith(AbstractXapi, {
run: defer,
})
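
Note: the migration-blocking dance in run() above deserves a closer look: the current pool_migrate/migrate_send values are captured (defaulting to null), overwritten for the duration of the transfer, then restored in a finally block. A standalone sketch, using a hypothetical in-memory stand-in for the XAPI record in which a null value deletes the entry:

const vm = {
  blocked_operations: {},
  async update_blocked_operations(patch) {
    for (const [op, reason] of Object.entries(patch)) {
      if (reason === null) delete this.blocked_operations[op]
      else this.blocked_operations[op] = reason
    }
  },
}

const { pool_migrate = null, migrate_send = null } = vm.blocked_operations
const reason = 'VM migration is blocked during backup'
await vm.update_blocked_operations({ pool_migrate: reason, migrate_send: reason })
try {
  // ... export the snapshot here ...
} finally {
  // restore whatever was there before (removes the entries if there was nothing)
  await vm.update_blocked_operations({ pool_migrate, migrate_send })
}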

View File

@@ -0,0 +1,11 @@
import cloneDeep from 'lodash/cloneDeep.js'
import mapValues from 'lodash/mapValues.js'
import { forkStreamUnpipe } from '../_forkStreamUnpipe.mjs'
export function forkDeltaExport(deltaExport) {
const { streams, ...rest } = deltaExport
const newMetadata = cloneDeep(rest)
newMetadata.streams = mapValues(streams, forkStreamUnpipe)
return newMetadata
}
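
Note: forkDeltaExport above deep-clones the metadata but must handle the streams separately, since a Node.js stream can only be consumed once. A naive tee-based fork, assuming consumers of comparable speed (the real forkStreamUnpipe is more careful about backpressure and error propagation):

import { PassThrough, Readable } from 'node:stream'

// pipe the source into two PassThroughs and hand one copy to each consumer
function naiveFork(source) {
  const a = new PassThrough()
  const b = new PassThrough()
  source.pipe(a)
  source.pipe(b)
  return [a, b]
}

const [forWriter1, forWriter2] = naiveFork(Readable.from(['vhd bytes']))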

View File

@@ -1,13 +1,11 @@
'use strict'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
import { MixinRemoteWriter } from './_MixinRemoteWriter.mjs'
import { AbstractFullWriter } from './_AbstractFullWriter.mjs'
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(AbstractFullWriter) {
export class FullRemoteWriter extends MixinRemoteWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -26,15 +24,17 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
)
}
async _run({ timestamp, sizeContainer, stream }) {
const backup = this._backup
async _run({ maxStreamLength, timestamp, sizeContainer, stream, streamLength, vm, vmSnapshot }) {
const settings = this._settings
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const adapter = this._adapter
// TODO: clean VM backup directory
let metadata = await this._isAlreadyTransferred(timestamp)
if (metadata !== undefined) {
// @todo: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const oldBackups = getOldEntries(
settings.exportRetention - 1,
@@ -47,14 +47,14 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
const dataBasename = basename + '.xva'
const dataFilename = this._vmBackupDir + '/' + dataBasename
const metadata = {
metadata = {
jobId: job.id,
mode: job.mode,
scheduleId,
timestamp,
version: '2.0.0',
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
xva: './' + dataBasename,
}
@@ -65,6 +65,8 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
await Task.run({ name: 'transfer' }, async () => {
await adapter.outputStream(dataFilename, stream, {
maxStreamLength,
streamLength,
validator: tmpPath => adapter.isValidXva(tmpPath),
})
return { size: sizeContainer.size }

View File

@@ -1,18 +1,16 @@
'use strict'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { asyncMap, asyncMapSettled } from '@xen-orchestra/async-map'
import { formatDateTime } from '@xen-orchestra/xapi'
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { formatDateTime } = require('@xen-orchestra/xapi')
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
import { AbstractFullWriter } from './_AbstractFullWriter.mjs'
import { MixinXapiWriter } from './_MixinXapiWriter.mjs'
import { listReplicatedVms } from './_listReplicatedVms.mjs'
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
const { MixinReplicationWriter } = require('./_MixinReplicationWriter.js')
const { listReplicatedVms } = require('./_listReplicatedVms.js')
exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplicationWriter(AbstractFullWriter) {
export class FullXapiWriter extends MixinXapiWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -32,10 +30,11 @@ exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplica
)
}
async _run({ timestamp, sizeContainer, stream }) {
async _run({ timestamp, sizeContainer, stream, vm }) {
const sr = this._sr
const settings = this._settings
const { job, scheduleId, vm } = this._backup
const job = this._job
const scheduleId = this._scheduleId
const { uuid: srUuid, $xapi: xapi } = sr

View File

@@ -1,35 +1,33 @@
'use strict'
import assert from 'node:assert'
import mapValues from 'lodash/mapValues.js'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { asyncEach } from '@vates/async-each'
import { asyncMap } from '@xen-orchestra/async-map'
import { chainVhd, checkVhdChain, openVhd, VhdAbstract } from 'vhd-lib'
import { createLogger } from '@xen-orchestra/log'
import { decorateClass } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { dirname } from 'node:path'
const assert = require('assert')
const map = require('lodash/map.js')
const mapValues = require('lodash/mapValues.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap } = require('@xen-orchestra/async-map')
const { chainVhd, checkVhdChain, openVhd, VhdAbstract } = require('vhd-lib')
const { createLogger } = require('@xen-orchestra/log')
const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { dirname } = require('path')
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { TAG_BASE_DELTA } from '../../_incrementalVm.mjs'
import { Task } from '../../Task.mjs'
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { checkVhd } = require('./_checkVhd.js')
const { packUuid } = require('./_packUuid.js')
const { Disposable } = require('promise-toolbox')
import { MixinRemoteWriter } from './_MixinRemoteWriter.mjs'
import { AbstractIncrementalWriter } from './_AbstractIncrementalWriter.mjs'
import { checkVhd } from './_checkVhd.mjs'
import { packUuid } from './_packUuid.mjs'
import { Disposable } from 'promise-toolbox'
const { warn } = createLogger('xo:backups:DeltaBackupWriter')
class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi) {
const { handler } = this._adapter
const backup = this._backup
const adapter = this._adapter
const vdisDir = `${this._vmBackupDir}/vdis/${backup.job.id}`
const vdisDir = `${this._vmBackupDir}/vdis/${this._job.id}`
await asyncMap(baseUuidToSrcVdi, async ([baseUuid, srcVdi]) => {
let found = false
@@ -91,11 +89,12 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
async _prepare() {
const adapter = this._adapter
const settings = this._settings
const { scheduleId, vm } = this._backup
const scheduleId = this._scheduleId
const vmUuid = this._vmUuid
const oldEntries = getOldEntries(
settings.exportRetention - 1,
await adapter.listVmBackups(vm.uuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
await adapter.listVmBackups(vmUuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
)
this._oldEntries = oldEntries
@@ -134,16 +133,19 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}
}
async _transfer($defer, { timestamp, deltaExport }) {
async _transfer($defer, { differentialVhds, timestamp, deltaExport, vm, vmSnapshot }) {
const adapter = this._adapter
const backup = this._backup
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const settings = this._settings
const jobId = job.id
const handler = adapter.handler
// TODO: clean VM backup directory
let metadataContent = await this._isAlreadyTransferred(timestamp)
if (metadataContent !== undefined) {
// @todo: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const basename = formatFilenameDate(timestamp)
const vhds = mapValues(
@@ -158,7 +160,7 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}/${adapter.getVhdFileName(basename)}`
)
const metadataContent = {
metadataContent = {
jobId,
mode: job.mode,
scheduleId,
@@ -169,16 +171,16 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
vifs: deltaExport.vifs,
vhds,
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
}
const { size } = await Task.run({ name: 'transfer' }, async () => {
let transferSize = 0
await Promise.all(
map(deltaExport.vdis, async (vdi, id) => {
await asyncEach(
Object.entries(deltaExport.vdis),
async ([id, vdi]) => {
const path = `${this._vmBackupDir}/${vhds[id]}`
const isDelta = vdi.other_config['xo:base_delta'] !== undefined
const isDelta = differentialVhds[`${id}.vhd`]
let parentPath
if (isDelta) {
const vdiDir = dirname(path)
@@ -191,7 +193,11 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
.sort()
.pop()
assert.notStrictEqual(parentPath, undefined, `missing parent of ${id}`)
assert.notStrictEqual(
parentPath,
undefined,
`missing parent of ${id} in ${dirname(path)}, looking for ${vdi.other_config[TAG_BASE_DELTA]}`
)
parentPath = parentPath.slice(1) // remove leading slash
@@ -204,7 +210,7 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// merges and chainings
checksum: false,
validator: tmpPath => checkVhd(handler, tmpPath),
writeBlockConcurrency: this._backup.config.writeBlockConcurrency,
writeBlockConcurrency: this._config.writeBlockConcurrency,
})
if (isDelta) {
@@ -217,8 +223,12 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
await vhd.readBlockAllocationTable() // required by writeFooter()
await vhd.writeFooter()
})
})
},
{
concurrency: settings.diskPerVmConcurrency,
}
)
return { size: transferSize }
})
metadataContent.size = size
@@ -227,6 +237,6 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// TODO: run cleanup?
}
}
exports.DeltaBackupWriter = decorateClass(DeltaBackupWriter, {
decorateClass(IncrementalRemoteWriter, {
_transfer: defer,
})
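
Note: the parent selection for differential VHDs in _transfer above boils down to taking the lexicographically last VHD already present in the VDI directory, which is chronologically correct because file names start with a formatFilenameDate timestamp. A sketch on a made-up listing:

const entries = ['20231101T020000Z.vhd', '20231108T020000Z.vhd', '20231115T020000Z.vhd']
const parentPath = entries
  .filter(entry => entry.endsWith('.vhd'))
  .sort()
  .pop() // → '20231115T020000Z.vhd': the most recent backup becomes the parent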

View File

@@ -0,0 +1,174 @@
import { asyncMap, asyncMapSettled } from '@xen-orchestra/async-map'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { formatDateTime } from '@xen-orchestra/xapi'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { importIncrementalVm, TAG_BACKUP_SR, TAG_BASE_DELTA, TAG_COPY_SRC } from '../../_incrementalVm.mjs'
import { Task } from '../../Task.mjs'
import { AbstractIncrementalWriter } from './_AbstractIncrementalWriter.mjs'
import { MixinXapiWriter } from './_MixinXapiWriter.mjs'
import { listReplicatedVms } from './_listReplicatedVms.mjs'
import find from 'lodash/find.js'
export class IncrementalXapiWriter extends MixinXapiWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi, baseVm) {
const sr = this._sr
const replicatedVm = listReplicatedVms(sr.$xapi, this._job.id, sr.uuid, this._vmUuid).find(
vm => vm.other_config[TAG_COPY_SRC] === baseVm.uuid
)
if (replicatedVm === undefined) {
return baseUuidToSrcVdi.clear()
}
const xapi = replicatedVm.$xapi
const replicatedVdis = new Set(
await asyncMap(await replicatedVm.$getDisks(), async vdiRef => {
const otherConfig = await xapi.getField('VDI', vdiRef, 'other_config')
return otherConfig[TAG_COPY_SRC]
})
)
for (const uuid of baseUuidToSrcVdi.keys()) {
if (!replicatedVdis.has(uuid)) {
baseUuidToSrcVdi.delete(uuid)
}
}
}
prepare({ isFull }) {
// create the task related to this export and ensure all methods are called in this context
const task = new Task({
name: 'export',
data: {
id: this._sr.uuid,
isFull,
name_label: this._sr.name_label,
type: 'SR',
},
})
const hasHealthCheckSr = this._healthCheckSr !== undefined
this.transfer = task.wrapFn(this.transfer)
this.cleanup = task.wrapFn(this.cleanup, !hasHealthCheckSr)
this.healthCheck = task.wrapFn(this.healthCheck, hasHealthCheckSr)
return task.run(() => this._prepare())
}
async _prepare() {
const settings = this._settings
const { uuid: srUuid, $xapi: xapi } = this._sr
const vmUuid = this._vmUuid
const scheduleId = this._scheduleId
// delete previous interrupted copies
ignoreErrors.call(asyncMapSettled(listReplicatedVms(xapi, scheduleId, undefined, vmUuid), vm => vm.$destroy))
this._oldEntries = getOldEntries(settings.copyRetention - 1, listReplicatedVms(xapi, scheduleId, srUuid, vmUuid))
if (settings.deleteFirst) {
await this._deleteOldEntries()
}
}
async cleanup() {
if (!this._settings.deleteFirst) {
await this._deleteOldEntries()
}
}
async _deleteOldEntries() {
return asyncMapSettled(this._oldEntries, vm => vm.$destroy())
}
#decorateVmMetadata(backup) {
const { _warmMigration } = this._settings
const sr = this._sr
const xapi = sr.$xapi
const vm = backup.vm
vm.other_config[TAG_COPY_SRC] = vm.uuid
const remoteBaseVmUuid = vm.other_config[TAG_BASE_DELTA]
let baseVm
if (remoteBaseVmUuid) {
baseVm = find(
xapi.objects.all,
obj => (obj = obj.other_config) && obj[TAG_COPY_SRC] === remoteBaseVmUuid && obj[TAG_BACKUP_SR] === sr.$id
)
if (!baseVm) {
throw new Error(`could not find the base VM (copy of ${remoteBaseVmUuid})`)
}
}
const baseVdis = {}
baseVm?.$VBDs.forEach(vbd => {
const vdi = vbd.$VDI
if (vdi !== undefined) {
baseVdis[vbd.VDI] = vbd.$VDI
}
})
vm.other_config[TAG_COPY_SRC] = vm.uuid
if (!_warmMigration) {
vm.tags.push('Continuous Replication')
}
Object.values(backup.vdis).forEach(vdi => {
vdi.other_config[TAG_COPY_SRC] = vdi.uuid
vdi.SR = sr.$ref
// vdi.other_config[TAG_BASE_DELTA] is never defined on a suspend vdi
if (vdi.other_config[TAG_BASE_DELTA]) {
const remoteBaseVdiUuid = vdi.other_config[TAG_BASE_DELTA]
const baseVdi = find(baseVdis, vdi => vdi.other_config[TAG_COPY_SRC] === remoteBaseVdiUuid)
if (!baseVdi) {
throw new Error(`missing base VDI (copy of ${remoteBaseVdiUuid})`)
}
vdi.baseVdi = baseVdi
}
})
return backup
}
async _transfer({ timestamp, deltaExport, sizeContainers, vm }) {
const { _warmMigration } = this._settings
const sr = this._sr
const job = this._job
const scheduleId = this._scheduleId
const { uuid: srUuid, $xapi: xapi } = sr
let targetVmRef
await Task.run({ name: 'transfer' }, async () => {
targetVmRef = await importIncrementalVm(this.#decorateVmMetadata(deltaExport), sr)
return {
size: Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0),
}
})
this._targetVmRef = targetVmRef
const targetVm = await xapi.getRecord('VM', targetVmRef)
await Promise.all([
// warm migration does not disable HA, since the goal is to start the new VM in production
!_warmMigration &&
targetVm.ha_restart_priority !== '' &&
Promise.all([targetVm.set_ha_restart_priority(''), targetVm.add_tags('HA disabled')]),
targetVm.set_name_label(`${vm.name_label} - ${job.name} - (${formatFilenameDate(timestamp)})`),
asyncMap(['start', 'start_on'], op =>
targetVm.update_blocked_operations(
op,
'Start operation for this VM is blocked, clone it if you want to use it.'
)
),
targetVm.update_other_config({
[TAG_BACKUP_SR]: srUuid,
// these entries need to be added in case of offline backup
'xo:backup:datetime': formatDateTime(timestamp),
'xo:backup:job': job.id,
'xo:backup:schedule': scheduleId,
[TAG_BASE_DELTA]: vm.uuid,
}),
])
}
}

View File

@@ -0,0 +1,12 @@
import { AbstractWriter } from './_AbstractWriter.mjs'
export class AbstractFullWriter extends AbstractWriter {
async run({ maxStreamLength, timestamp, sizeContainer, stream, streamLength, vm, vmSnapshot }) {
try {
return await this._run({ maxStreamLength, timestamp, sizeContainer, stream, streamLength, vm, vmSnapshot })
} finally {
// ensure stream is properly closed
stream.destroy()
}
}
}

View File

@@ -1,8 +1,6 @@
'use strict'
import { AbstractWriter } from './_AbstractWriter.mjs'
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
export class AbstractIncrementalWriter extends AbstractWriter {
checkBaseVdis(baseUuidToSrcVdi, baseVm) {
throw new Error('Not implemented')
}
@@ -15,9 +13,9 @@ exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
throw new Error('Not implemented')
}
async transfer({ timestamp, deltaExport, sizeContainers }) {
async transfer({ deltaExport, ...other }) {
try {
return await this._transfer({ timestamp, deltaExport, sizeContainers })
return await this._transfer({ deltaExport, ...other })
} finally {
// ensure all streams are properly closed
for (const stream of Object.values(deltaExport.streams)) {

View File

@@ -0,0 +1,29 @@
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getVmBackupDir } from '../../_getVmBackupDir.mjs'
export class AbstractWriter {
constructor({ config, healthCheckSr, job, vmUuid, scheduleId, settings }) {
this._config = config
this._healthCheckSr = healthCheckSr
this._job = job
this._scheduleId = scheduleId
this._settings = settings
this._vmUuid = vmUuid
}
beforeBackup() {}
afterBackup() {}
healthCheck(sr) {}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
// readFile is asynchronous: await it before parsing
return JSON.parse(await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`))
} catch (error) {}
}
}

View File

@@ -1,29 +1,27 @@
'use strict'
import { createLogger } from '@xen-orchestra/log'
import { join } from 'node:path'
import assert from 'node:assert'
const { createLogger } = require('@xen-orchestra/log')
const { join } = require('path')
const assert = require('assert')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../ImportVmBackup.js')
const { Task } = require('../Task.js')
const MergeWorker = require('../merge-worker/index.js')
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getVmBackupDir } from '../../_getVmBackupDir.mjs'
import { HealthCheckVmBackup } from '../../HealthCheckVmBackup.mjs'
import { ImportVmBackup } from '../../ImportVmBackup.mjs'
import { Task } from '../../Task.mjs'
import * as MergeWorker from '../../merge-worker/index.mjs'
const { info, warn } = createLogger('xo:backups:MixinBackupWriter')
exports.MixinBackupWriter = (BaseClass = Object) =>
class MixinBackupWriter extends BaseClass {
export const MixinRemoteWriter = (BaseClass = Object) =>
class MixinRemoteWriter extends BaseClass {
#lock
constructor({ remoteId, ...rest }) {
constructor({ remoteId, adapter, ...rest }) {
super(rest)
this._adapter = rest.backup.remoteAdapters[remoteId]
this._adapter = adapter
this._remoteId = remoteId
this._vmBackupDir = getVmBackupDir(this._backup.vm.uuid)
this._vmBackupDir = getVmBackupDir(rest.vmUuid)
}
async _cleanVm(options) {
@@ -38,7 +36,7 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
Task.warning(message, data)
},
lock: false,
mergeBlockConcurrency: this._backup.config.mergeBlockConcurrency,
mergeBlockConcurrency: this._config.mergeBlockConcurrency,
})
})
} catch (error) {
@@ -55,10 +53,10 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
async afterBackup() {
const { disableMergeWorker } = this._backup.config
const { disableMergeWorker } = this._config
// merge worker only compatible with local remotes
const { handler } = this._adapter
const willMergeInWorker = !disableMergeWorker && typeof handler._getRealPath === 'function'
const willMergeInWorker = !disableMergeWorker && typeof handler.getRealPath === 'function'
const { merge } = await this._cleanVm({ remove: true, merge: !willMergeInWorker })
await this.#lock.dispose()
@@ -70,13 +68,15 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
// add a random suffix to avoid collision in case multiple tasks are created at the same second
Math.random().toString(36).slice(2)
await handler.outputFile(taskFile, this._backup.vm.uuid)
const remotePath = handler._getRealPath()
await handler.outputFile(taskFile, this._vmUuid)
const remotePath = handler.getRealPath()
await MergeWorker.run(remotePath)
}
}
healthCheck(sr) {
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notStrictEqual(
this._metadataFileName,
undefined,
@@ -109,4 +109,16 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
)
}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
// readFile is asynchronous: await it before parsing
return JSON.parse(await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`))
} catch (error) {}
}
}

View File

@@ -0,0 +1,72 @@
import { extractOpaqueRef } from '@xen-orchestra/xapi'
import assert from 'node:assert/strict'
import { HealthCheckVmBackup } from '../../HealthCheckVmBackup.mjs'
import { Task } from '../../Task.mjs'
export const MixinXapiWriter = (BaseClass = Object) =>
class MixinXapiWriter extends BaseClass {
constructor({ sr, ...rest }) {
super(rest)
this._sr = sr
}
// check whether the base VM has all its disks on the health check SR
async #isAlreadyOnHealthCheckSr(baseVm) {
const xapi = baseVm.$xapi
const vdiRefs = await xapi.VM_getDisks(baseVm.$ref)
for (const vdiRef of vdiRefs) {
const vdi = xapi.getObject(vdiRef)
if (vdi.$SR.uuid !== this._healthCheckSr.uuid) {
return false
}
}
return true
}
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notEqual(this._targetVmRef, undefined, 'A VM should have been transferred to be health checked')
// copy VM
return Task.run(
{
name: 'health check',
},
async () => {
const { $xapi: xapi } = sr
let healthCheckVmRef
try {
const baseVm = xapi.getObject(this._targetVmRef) ?? (await xapi.waitObject(this._targetVmRef))
if (await this.#isAlreadyOnHealthCheckSr(baseVm)) {
healthCheckVmRef = await Task.run(
{ name: 'cloning-vm' },
async () =>
await xapi
.callAsync('VM.clone', this._targetVmRef, `Health Check - ${baseVm.name_label}`)
.then(extractOpaqueRef)
)
} else {
healthCheckVmRef = await Task.run(
{ name: 'copying-vm' },
async () =>
await xapi
.callAsync('VM.copy', this._targetVmRef, `Health Check - ${baseVm.name_label}`, sr.$ref)
.then(extractOpaqueRef)
)
}
const healthCheckVm = xapi.getObject(healthCheckVmRef) ?? (await xapi.waitObject(healthCheckVmRef))
await new HealthCheckVmBackup({
restoredVm: healthCheckVm,
xapi,
}).run()
} finally {
healthCheckVmRef && (await xapi.VM_destroy(healthCheckVmRef))
}
}
)
}
}

View File

@@ -0,0 +1,6 @@
import { openVhd } from 'vhd-lib'
import Disposable from 'promise-toolbox/Disposable'
export async function checkVhd(handler, path) {
await Disposable.use(openVhd(handler, path), () => {})
}

View File

@@ -1,5 +1,3 @@
'use strict'
const getReplicatedVmDatetime = vm => {
const { 'xo:backup:datetime': datetime = vm.name_label.slice(-17, -1) } = vm.other_config
return datetime
@@ -7,7 +5,7 @@ const getReplicatedVmDatetime = vm => {
const compareReplicatedVmDatetime = (a, b) => (getReplicatedVmDatetime(a) < getReplicatedVmDatetime(b) ? -1 : 1)
exports.listReplicatedVms = function listReplicatedVms(xapi, scheduleOrJobId, srUuid, vmUuid) {
export function listReplicatedVms(xapi, scheduleOrJobId, srUuid, vmUuid) {
const { all } = xapi.objects
const vms = {}
for (const key in all) {

View File

@@ -1,7 +1,5 @@
'use strict'
const PARSE_UUID_RE = /-/g
exports.packUuid = function packUuid(uuid) {
export function packUuid(uuid) {
return Buffer.from(uuid.replace(PARSE_UUID_RE, ''), 'hex')
}
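
Note: packUuid turns a textual UUID into its 16-byte binary form (32 hex digits once the dashes are stripped), as needed when stamping UUIDs into VHD structures. A quick check:

import assert from 'node:assert'

const packUuid = uuid => Buffer.from(uuid.replace(/-/g, ''), 'hex')
assert.strictEqual(packUuid('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee').length, 16)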

View File

@@ -1,6 +1,4 @@
'use strict'
exports.watchStreamSize = function watchStreamSize(stream, container = { size: 0 }) {
export function watchStreamSize(stream, container = { size: 0 }) {
stream.on('data', data => {
container.size += data.length
})
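
Note: typical usage of watchStreamSize — the container is mutated as data flows, so the caller can read the final byte count once the stream has been consumed. A self-contained sketch (the helper is assumed to return its container, as the full source does):

import { pipeline } from 'node:stream/promises'
import { Readable, Writable } from 'node:stream'

function watchStreamSize(stream, container = { size: 0 }) {
  stream.on('data', data => {
    container.size += data.length
  })
  return container
}

const stream = Readable.from(['some ', 'data'])
const sizeContainer = watchStreamSize(stream)
await pipeline(stream, new Writable({ write: (chunk, _encoding, callback) => callback() }))
console.log(sizeContainer.size) // 9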

View File

@@ -171,13 +171,16 @@ job:
# For replication jobs, indicates which SRs to use
srs: IdPattern
# Here for historical reasons
type: 'backup'
type: 'backup' | 'mirrorBackup'
# Indicates which VMs to backup/replicate
# Indicates which VMs to backup/replicate for a XAPI-to-remote backup job
vms: IdPattern
# Indicates which remote to read from for a mirror backup job
sourceRemote: IdPattern
# Indicates which XAPI to use to connect to a specific VM or SR
# for a remote-to-remote backup job, this is only needed if there is a health check
recordToXapi:
[ObjectId]: XapiId
@@ -228,7 +231,7 @@ Settings are described in [`@xen-orchestra/backups/Backup.js](https://github.com
- `prepare({ isFull })`
- `transfer({ timestamp, deltaExport, sizeContainers })`
- `cleanup()`
- `healthCheck(sr)`
- `healthCheck()` // not executed if there is no health check SR or if the VM's tags don't match (see the example job after this list)
- **Full**
- `run({ timestamp, sizeContainer, stream })`
- `afterBackup()`
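
Note: to make the new mirrorBackup job type concrete, here is a hedged illustration of such a job under the schema above — every ID is invented; sourceRemote designates the remote whose backups are read, remotes the targets they are mirrored to:

const job = {
  type: 'mirrorBackup',
  mode: 'delta',
  sourceRemote: { id: '<source-remote-uuid>' }, // remote read from
  remotes: { id: { __or: ['<target-remote-uuid-1>', '<target-remote-uuid-2>'] } },
  settings: {
    // '' holds the job-level defaults, per-remote overrides are keyed by ID
    '': { healthCheckSr: '<sr-uuid>', healthCheckVmsWithTags: ['prod'] },
  },
}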

View File

@@ -1,6 +1,4 @@
'use strict'
exports.extractIdsFromSimplePattern = function extractIdsFromSimplePattern(pattern) {
export function extractIdsFromSimplePattern(pattern) {
if (pattern === undefined) {
return []
}

View File

@@ -1,7 +1,5 @@
'use strict'
const mapValues = require('lodash/mapValues.js')
const { dirname } = require('path')
import mapValues from 'lodash/mapValues.js'
import { dirname } from 'node:path'
function formatVmBackup(backup) {
return {
@@ -31,6 +29,6 @@ function formatVmBackup(backup) {
}
// format all backups as returned by RemoteAdapter#listAllVmBackups()
exports.formatVmBackups = function formatVmBackups(backupsByVM) {
export function formatVmBackups(backupsByVM) {
return mapValues(backupsByVM, backups => backups.map(formatVmBackup))
}

View File

@@ -1,92 +0,0 @@
#!/usr/bin/env node
// eslint-disable-next-line eslint-comments/disable-enable-pair
/* eslint-disable n/shebang */
'use strict'
const { catchGlobalErrors } = require('@xen-orchestra/log/configure')
const { createLogger } = require('@xen-orchestra/log')
const { getSyncedHandler } = require('@xen-orchestra/fs')
const { join } = require('path')
const Disposable = require('promise-toolbox/Disposable')
const min = require('lodash/min')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { RemoteAdapter } = require('../RemoteAdapter.js')
const { CLEAN_VM_QUEUE } = require('./index.js')
// -------------------------------------------------------------------
catchGlobalErrors(createLogger('xo:backups:mergeWorker'))
const { fatal, info, warn } = createLogger('xo:backups:mergeWorker')
// -------------------------------------------------------------------
const main = Disposable.wrap(async function* main(args) {
const handler = yield getSyncedHandler({ url: 'file://' + process.cwd() })
yield handler.lock(CLEAN_VM_QUEUE)
const adapter = new RemoteAdapter(handler)
const listRetry = async () => {
const timeoutResolver = resolve => setTimeout(resolve, 10e3)
for (let i = 0; i < 10; ++i) {
const entries = await handler.list(CLEAN_VM_QUEUE)
if (entries.length !== 0) {
return entries
}
await new Promise(timeoutResolver)
}
}
let taskFiles
while ((taskFiles = await listRetry()) !== undefined) {
const taskFileBasename = min(taskFiles)
const previousTaskFile = join(CLEAN_VM_QUEUE, taskFileBasename)
const taskFile = join(CLEAN_VM_QUEUE, '_' + taskFileBasename)
// move this task to the end
try {
await handler.rename(previousTaskFile, taskFile)
} catch (error) {
// this error occurs if the task failed too many times (i.e. too many `_` prefixes)
// there is nothing more that can be done
if (error.code === 'ENAMETOOLONG') {
await handler.unlink(previousTaskFile)
}
throw error
}
try {
const vmDir = getVmBackupDir(String(await handler.readFile(taskFile)))
try {
await adapter.cleanVm(vmDir, { merge: true, logInfo: info, logWarn: warn, remove: true })
} catch (error) {
// consider the clean successful if the VM dir is missing
if (error.code !== 'ENOENT') {
throw error
}
}
handler.unlink(taskFile).catch(error => warn('deleting task failure', { error }))
} catch (error) {
warn('failure handling task', { error })
}
}
})
info('starting')
main(process.argv.slice(2)).then(
() => {
info('bye :-)')
},
error => {
fatal(error)
process.exit(1)
}
)

Some files were not shown because too many files have changed in this diff.