Compare commits

...

160 Commits

Author SHA1 Message Date
Florent BEAUCHAMP
ec82acef29 feat(vhd-lib): slim down key backup when using block based backup 2023-06-11 13:47:04 +02:00
Thierry Goettelmann
27b5737f65 feat(lite/pool/VMs): ability to copy selected VMs (#6847) 2023-06-09 14:59:39 +02:00
Julien Fontanet
55b2e0292f docs(task): describe combined task log 2023-06-09 09:45:46 +02:00
Julien Fontanet
464d83e70f feat(xo-web): implement XO task abortion 2023-06-09 09:45:46 +02:00
Julien Fontanet
614255a73a chore(xo-web): remove now unused aborted task status 2023-06-09 09:45:46 +02:00
Julien Fontanet
90d15e1346 feat(task): remove aborted status and add abortionRequested event
BREAKING CHANGE.
2023-06-09 09:45:46 +02:00
Julien Fontanet
b0e2ea64e9 feat(xo-server/test.createTask): dynamic name and progress 2023-06-08 14:38:22 +02:00
Julien Fontanet
1da05e239d feat(task): merge custom data into properties
BREAKING CHANGE.

This makes these entries mutable during the life of the task.
2023-06-08 14:38:22 +02:00
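For illustration, a minimal before/after sketch of this breaking change, based on the `@vates/task` docs and tests updated further down in this diff:

```js
import { Task } from '@vates/task'

// before: custom data was passed as `data` and sent along the *start* event
// const task = new Task({ data: { name: 'my task' } })

// after: custom data lives under `properties` and stays mutable while the task runs
const task = new Task({ properties: { name: 'my task' } })
await task.run(async () => {
  Task.set('progress', 50) // updates `properties` and emits a `property` event
})
```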
Thierry Goettelmann
fe7f0db81f feat(lite): revamp XAPI subscription and add immediate option (#6877)
`subscribe()` now accepts an `{ immediate: false }` option.
In this case, the subscription is deferred and can be initialized later with `.start()`.
A `createSubscribe` helper has been added to create an overridden `subscribe` function.
Full documentation has been added to `docs/xen-api-record-stores.md`.
2023-06-08 14:33:38 +02:00
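A hedged sketch of the deferred flow described above (the store name is hypothetical; the real API is documented in `docs/xen-api-record-stores.md`):

```js
// with `immediate: false`, the subscription is deferred: nothing is fetched yet
const subscription = someRecordStore.subscribe({ immediate: false })

// ... later, when the data is actually needed
subscription.start()
```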
rbarhtaoui
983153e620 feat(lite/pool/tasks): display an error msg if data cannot be fetched (#6777) 2023-06-08 09:21:39 +02:00
Thierry Goettelmann
6fe791dcf2 feat(lite/dashboard): revamp pool dashboard (#6815)
Reworked the pool dashboard to reorder components, simplify the code, and make
the design closer to the Figma mockups.
Added a new `PoolDashboardComingSoon` component for dashboard items that are not
ready yet.
Removed `height: fit-content` from UiCard which should not be needed anymore and
have only recent (~1.4 year) support on Firefox.
2023-06-07 14:41:08 +02:00
Florent BEAUCHAMP
1ad406c7dd test(nbd-client): test secure connection 2023-06-07 10:24:14 +02:00
Florent BEAUCHAMP
4e032e11b1 fix(nbd-client/readBlocks): BigInt handling for default generator 2023-06-07 10:24:14 +02:00
Julien Fontanet
ea34516d73 test(vhd-lib): from Jest to test 2023-06-07 10:24:14 +02:00
Thierry Goettelmann
e1145f35ee feat(lite): introduce POWER_STATE and VM_OPERATION enums (#6846) 2023-06-07 10:13:29 +02:00
Thierry Goettelmann
6864775b8a fix(lite/AppMenu): AppMenu is not displayed correctly (#6819)
The visibility of AppMenu was previously constrained to its container boundaries
2023-06-07 09:22:27 +02:00
rbarhtaoui
f28721b847 feat(lite/pool/VMs): ability to change the VMs power state (#6782) 2023-06-06 15:46:24 +02:00
Julien Fontanet
2dc174fd9d test(task/combineEvents): use variable to ease test maintenance 2023-06-06 10:29:47 +02:00
Julien Fontanet
07142d0410 test(task/combineEvents): test id, start and end properties 2023-06-05 15:29:12 +02:00
Julien Fontanet
41bb16ca30 feat: release 5.83.2 2023-06-01 15:36:48 +02:00
Julien Fontanet
d8f1034858 feat: technical release 2023-06-01 14:25:08 +02:00
Julien Fontanet
52b3c49cdb feat(xo-server): 5.116.3 2023-06-01 14:24:58 +02:00
Julien Fontanet
c5cb1a5e96 feat(@xen-orchestra/proxy): 0.26.27 2023-06-01 14:24:07 +02:00
Julien Fontanet
92d9d3232c feat(@xen-orchestra/backups): 0.38.2 2023-06-01 14:23:49 +02:00
Florent BEAUCHAMP
9c4e0464f0 fix(backups): fix vm is undefined error (#6873) 2023-06-01 14:21:43 +02:00
Julien Fontanet
72d25754fd feat: release 5.83.1 2023-06-01 12:00:06 +02:00
Julien Fontanet
1465a0ba59 feat(xo-server): 5.116.2 2023-06-01 11:30:47 +02:00
Julien Fontanet
ac8ce28286 fix(xo-server): don't require start for Redis collections (2)
Missing change from fba86bf65
2023-06-01 11:08:52 +02:00
Julien Fontanet
c4b06e1915 feat(xo-server): 5.116.1 2023-06-01 10:48:20 +02:00
Julien Fontanet
f77675a8a3 feat(@xen-orchestra/proxy): 0.26.26 2023-06-01 10:46:31 +02:00
Julien Fontanet
b907c1fd03 feat(@xen-orchestra/backups): 0.38.1 2023-06-01 10:46:15 +02:00
Julien Fontanet
fba86bf653 fix(xo-server): don't require start for Redis collections (#6872)
Introduced by 9f3b02036

The Redis connection is usable right after the core starts, therefore collections can be created
on the `core started` event and do not require the (much heavier) `start` hook to run.

This change fixes `xo-server-recover-account`.
2023-06-01 10:36:02 +02:00
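Illustrative sketch only (the hook wiring below is assumed, not taken from xo-server's actual code):

```js
// before: collections waited for the heavy `start` hook
// app.hooks.on('start', () => createCollections(app))

// after: Redis is usable as soon as the core has started
app.hooks.on('core started', () => createCollections(app))
```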
Florent BEAUCHAMP
b18ebcc38d fix(backups): fix CR not deleting older VM (#6871)
scheduleId was not passed to the writers constructor. This led to a missing scheduleId in the metadata (probably without consequence) and to a bad filter when detecting which VMs to delete after a successful replication.

Users may need to manually delete the VMs created that way
2023-06-01 10:33:33 +02:00
Mathieu
4f7f18458e fix(lite/console): fix console not updating when changing VM (#6850)
Introduced by 5237fdd387

`WatchEffect` is called before `Watch`, so the connection was "created" then
"cleaned".
2023-05-31 16:26:19 +02:00
Julien Fontanet
d412196052 fix(CHANGELOG): badges
Introduced by 1d140d8fd
2023-05-31 16:06:28 +02:00
Julien Fontanet
1d140d8fd2 feat: release 5.83.0 2023-05-31 16:05:18 +02:00
Thierry Goettelmann
6948a25b09 fix(lite/markdown): vue code fence are no longer detected (#6845)
The `vue-template`, `vue-script`, and `vue-style` code fences were no longer
detected, and thus were no longer highlighted.
2023-05-31 15:25:59 +02:00
Julien Fontanet
26131917e3 feat(xo-web): 5.119.1 2023-05-31 11:22:12 +02:00
Mathieu
44a0ab6d0a fix(xo-web/overview): fix isMirrorBackup is not defined (#6870) 2023-05-31 11:06:03 +02:00
Julien Fontanet
2b8b033ad7 feat: technical release 2023-05-31 09:51:53 +02:00
Julien Fontanet
3ee0b3e7df feat(xo-web): 5.119.0 2023-05-31 09:47:42 +02:00
Julien Fontanet
927a55ab30 feat(xo-server): 5.116.0 2023-05-31 09:46:41 +02:00
Julien Fontanet
b70721cb60 feat(@xen-orchestra/proxy): 0.26.25 2023-05-31 09:44:14 +02:00
Julien Fontanet
f71c820f15 feat(@xen-orchestra/backups-cli): 1.0.8 2023-05-31 09:43:59 +02:00
Julien Fontanet
74e0405a5e feat(@xen-orchestra/backups): 0.38.0 2023-05-31 09:40:48 +02:00
Julien Fontanet
79b55ba30a feat(vhd-lib): 4.5.0 2023-05-31 09:36:01 +02:00
Mathieu
ee0adaebc5 feat(xo-web/backup): UI mirror backup implementation (#6858)
See #6854
2023-05-31 09:12:46 +02:00
Julien Fontanet
83c5c976e3 feat(xo-server/rest-api): limit patches listing and RPU (#6864)
Same restriction as in the UI.
2023-05-31 08:49:32 +02:00
Julien Fontanet
18bd2c607e feat(xo-server/backupNg.checkBackup): add basic XO task 2023-05-30 16:51:43 +02:00
Julien Fontanet
e2695ce327 fix(xo-server/clearHost): explicit message on missing migration network
Fixes zammad#14882
2023-05-30 16:50:50 +02:00
Florent BEAUCHAMP
3f316fcaea fix(backups): handles task end in CR without health check (#6866) 2023-05-30 16:06:23 +02:00
Florent BEAUCHAMP
8b7b162c76 feat(backups): implement mirror backup 2023-05-30 15:21:53 +02:00
Florent BEAUCHAMP
aa36629def refactor(backup/writers): pass the vm and snapshot in transfer/run 2023-05-30 15:21:53 +02:00
Pierre Donias
ca345bd6d8 feat(xo-web/task): action to open task REST API URL (#6869) 2023-05-30 14:19:50 +02:00
Florent BEAUCHAMP
61324d10f9 fix(xo-web): VHD directory tooltip (#6865) 2023-05-30 09:27:24 +02:00
Pierre Donias
92fd92ae63 feat(xo-web): XO Tasks (#6861) 2023-05-30 09:20:51 +02:00
Julien Fontanet
e48bfa2c88 feat: technical release 2023-05-26 16:50:04 +02:00
Julien Fontanet
cd5762fa19 feat(xo-web): 5.118.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
71f7a6cd6c feat(xo-server): 5.115.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8cade8b7a feat(xo-cli): 0.19.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
696c6f13f0 feat(vhd-cli): 0.9.3 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8d923d3ba feat(xo-vmdk-to-vhd): 2.5.5 2023-05-26 16:38:38 +02:00
Julien Fontanet
1a96c1bf0f feat(@xen-orchestra/proxy): 0.26.24 2023-05-26 16:38:38 +02:00
Julien Fontanet
14a01d0141 feat(@xen-orchestra/mixins): 0.10.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
74a2a4d2e5 feat(@xen-orchestra/backups-cli): 1.0.7 2023-05-26 16:38:38 +02:00
Julien Fontanet
b13b44cfd0 feat(@xen-orchestra/backups): 0.37.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
50a164423a feat(@xen-orchestra/xapi): 2.2.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
a40d50a3bd feat(vhd-lib): 4.4.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
529e33140a feat(@xen-orchestra/fs): 4.0.0 2023-05-26 16:38:38 +02:00
Mathieu
132b1a41db fix(xo-web/host-item): display alert in host-item for host inconsistent time (#6833)
See xoa-support#14626
Introduced by aadc1bb84c
2023-05-26 16:17:04 +02:00
Julien Fontanet
75948b2977 feat(xo-server/rest-api): endpoints to list pools/hosts missing patches 2023-05-26 16:11:11 +02:00
Gabriel Gunullu
eb84d4a7ef feat(xo-web/kubernetes): add number of cp choice (#6809)
See xoa#120
2023-05-26 16:08:11 +02:00
Julien Fontanet
1816d0240e refactor(fs): separate internal and public interfaces
Public interfaces may be decorated with behaviors (e.g. concurrency limits, path rewriting), which
makes them unsuitable to call from inside the class or its children.

Internal interfaces are now prefixed with `__`.
2023-05-26 15:32:56 +02:00
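A minimal sketch of the pattern, assuming a concurrency-limit decoration (method names are hypothetical):

```js
const { limitConcurrency } = require('limit-concurrency-decorator')

class Handler {
  // internal interface, prefixed with `__`: called from inside the class or
  // its children, bypassing whatever decorates the public method
  async __readFile(path) {
    // ... raw implementation
  }
}

// public interface: decorated, so internal code must not re-enter it
Handler.prototype.readFile = limitConcurrency(2)(function (path) {
  return this.__readFile(path)
})
```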
Julien Fontanet
2c6d36b63e refactor(fs): use private fields where appropriate 2023-05-26 15:32:56 +02:00
Mathieu
d9776ae8ed fix(xo-web): fix various 'an error has occurred' (#6848)
See xoa-support#14631
2023-05-26 14:45:29 +02:00
Florent BEAUCHAMP
b456394663 refactor(backups): extract method forkDeltaExport 2023-05-26 13:01:15 +02:00
Florent BEAUCHAMP
94f599bdbd refactor(backups/RemoteAdapter): extract method listAllVms 2023-05-26 13:01:08 +02:00
Florent BEAUCHAMP
d466ca143a refactor(backups/runner): Vms -> VmsXapi 2023-05-26 12:48:56 +02:00
Florent BEAUCHAMP
78ed85a49f feat(backups): add ability to read only one delta instead of the full chain 2023-05-26 12:47:42 +02:00
Florent BEAUCHAMP
c24e7f9ecd refactor(backup/remoteAdapter): readDeltaVmBackup -> readIncrementalVmBackup 2023-05-26 12:24:56 +02:00
Mathieu
98caa89625 feat(xo-web/self): add default tags for self service users (#6810)
See #6812

Add default tags for Self Service users.
2023-05-26 11:45:05 +02:00
Pierre Donias
8e176eadb1 fix(xo-web): show Suse icon when distro name is opensuse (#6852)
See #6676
See #6746
See https://xcp-ng.org/forum/topic/6965
2023-05-26 09:24:30 +02:00
Julien Fontanet
444268406f fix(mixins/Tasks): update updatedAt when marking tasks as interrupted 2023-05-25 16:06:09 +02:00
Thierry Goettelmann
7e062977d0 feat(lite/component): add new Vue component UiCardSpinner (#6806)
`UiSpinner` is often used to add a spinner inside a `UiCard`, applying similar
styles. This `UiCardSpinner` component provides a homogeneous spinner to use in
these cases.
2023-05-25 14:00:23 +02:00
Mathieu
f4bf56f159 feat(xo-web/self): ability to share VMs by default (#6838)
See xoa-support#7420
2023-05-25 11:00:04 +02:00
Julien Fontanet
9f3b020361 fix(xo-server): create collection after connected to Redis
Introduced by 36b94f745

Redis is now connected in `start core` hook and should not be used before.

Some minor initialization steps (namespace and version registration) were failing silently before
this fix.
2023-05-24 17:40:20 +02:00
Julien Fontanet
ef35021a44 chore(backups,xo-server): use extractOpaqueRef from @xen-orchestra/xapi
Instead of custom implementations.
2023-05-24 12:09:42 +02:00
Julien Fontanet
b74ebd050a feat(xapi/extractOpaqueRef): expose it publicly 2023-05-24 12:07:54 +02:00
Julien Fontanet
8a16d6aa3b feat(xapi/extractOpaqueRef): add searched string to error
Helps debugging.
2023-05-24 12:07:22 +02:00
Julien Fontanet
cf7393992c chore(xapi/extractOpaqueRef): named function for better stacktraces 2023-05-24 12:05:56 +02:00
Thierry Goettelmann
c576114dad feat(lite): new FormInputGroup component (#6740) 2023-05-23 16:58:39 +02:00
Julien Fontanet
deeb399046 feat(xo-server/rest-api): rolling_update pool action 2023-05-23 15:35:32 +02:00
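Presumably invokable like the other REST actions (path assumed by analogy with the `vms/<uuid>/actions/snapshot` example later in this changelog):

```
xo-cli rest post pools/<uuid>/actions/rolling_update
```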
Julien Fontanet
9cf8f8f492 chore(xo-server/rest-api): also pass xoObject to actions 2023-05-23 15:35:32 +02:00
Julien Fontanet
28b7e99ebc chore(xo-server): move RPU logic from API layer to XenServers mixin 2023-05-23 15:35:32 +02:00
rbarhtaoui
0ba729e5b9 feat(lite/pool/dashboard): display error message when data is not fetched (#6776) 2023-05-23 14:40:43 +02:00
Florent BEAUCHAMP
ac8c146cf7 refactor(backups): separate full and incremental VM runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
2ba437be31 refactor(backups): separate VMs and metadata runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
bd8bb73309 refactor(backups): move Runner, VmBackup, writers and specific method to a private folder 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
485c2f4669 refactor(backups/Backup.createRunner): factory
BREAKING CHANGE: Backup can no longer be instantiated directly.
2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
6fb562d92f refactor(backups/Backup): extract getAdaptersByRemote, RemoteTimeoutError and runTasks 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
85efdcf7b9 refactor(backups/_incrementalVm): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
fc1357d5d6 refactor(backups): _deltaVm → _incrementalVm 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
88b015bda4 refactor(backups/writers) : replication → xapi 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
b46f76cccf refactor(backups/writers): backup → remote 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
c3bb2185c2 refactor(backups/writers): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
a240853fe0 refactor(backups/_VmBackup): delta → incremental 2023-05-23 09:27:47 +02:00
Thierry Goettelmann
d7ce609940 chore(lite): upgrade dependencies (#6843) 2023-05-22 10:41:39 +02:00
Florent BEAUCHAMP
1b0ec9839e fix(xo-server): import OVA with broken VMDK size in metadata (#6824)
OVAs generated by Oracle virtualization servers seem to store the size of the VMDK
instead of the disk size in the metadata.

This causes the transfer to fail when the import tries to write data past the size of
the VMDK: for example, a 50 GB disk may produce a 10 GB VMDK, and the import fails as
soon as it reaches data in the 10-50 GB range.
2023-05-22 10:20:04 +02:00
Julien Fontanet
77b166bb3b chore: update dev deps 2023-05-22 10:01:54 +02:00
Julien Fontanet
76bd54d7de chore: update dev deps 2023-05-17 14:48:41 +02:00
Julien Fontanet
684282f0a4 fix(mixins/Tasks): correctly serialize errors 2023-05-17 11:29:28 +02:00
Julien Fontanet
2459f46c19 feat(xo-cli rest): accept query string in path
Example:
```
xo-cli rest post vms/<uuid>/actions/snapshot?sync
```
2023-05-17 11:27:29 +02:00
Julien Fontanet
5f0466e4d8 feat: release 5.82.2 2023-05-17 10:05:11 +02:00
Gabriel Gunullu
3738edfa83 test(@xen-orchestra/fs): from Jest to test (#6820) 2023-05-17 09:54:51 +02:00
Julien Fontanet
769e27e2cb feat: technical release 2023-05-16 16:32:33 +02:00
Julien Fontanet
8ec5461338 feat(xo-server): 5.114.2 2023-05-16 16:31:54 +02:00
Julien Fontanet
4a2843cb67 feat(@xen-orchestra/proxy): 0.26.23 2023-05-16 16:31:33 +02:00
Julien Fontanet
a0e69a79ab feat(xen-api): 1.3.1 2023-05-16 16:30:54 +02:00
Roni Väyrynen
3da94f18df docs(installation): add findmnt command to sudoers config example (#6835) 2023-05-16 15:20:47 +02:00
Mathieu
17cb59b898 feat(xo-web/host-item): display warning when HVM is disabled (#6834) 2023-05-16 14:58:14 +02:00
Mathieu
315e5c9289 feat(xo-web/proxy): make proxy address editable (#6816) 2023-05-16 12:12:31 +02:00
Julien Fontanet
01ba10fedb fix(xen-api/putResource): really fix (302) redirection with non-stream body
Replaces the incorrect fix in 87e6f7fde

Introduced by ab96c549a

Fixes zammad#13375
Fixes zammad#13952
Fixes zammad#14001
2023-05-15 16:23:18 +02:00
Mathieu
13e7594560 fix(xo-web/SortedTable): handle pending state for collapsed actions (#6831) 2023-05-15 15:27:17 +02:00
Thierry Goettelmann
f9ac2ac84d feat(lite/tooltips): enhance and simplify tooltips (#6760)
- Removed the `disabled` option.
- The tooltip is now disabled when content is an empty string or `false`.
- If content is `true` or `undefined`, it will be extracted from element's `innerText`.
- Moved `v-tooltip` from `InfraHostItem` and `InfraVmItem` to `InfraItemLabel`.
2023-05-15 11:55:43 +02:00
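A hedged sketch of the resulting content resolution, inferred from the list above rather than taken from the actual `v-tooltip` source:

```js
function resolveTooltipContent(value, el) {
  // an empty string or `false` disables the tooltip (replaces the old `disabled` option)
  if (value === '' || value === false) {
    return undefined
  }
  // `true` or `undefined` falls back to the element's own text
  if (value === true || value === undefined) {
    return el.innerText
  }
  return value
}
```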
Thierry Goettelmann
09cfac1111 feat(lite): enhance Component Story skeleton generator (#6753)
- Updated form to use our own components
- Added a warning for props whose type cannot be extracted
- Fixed setting name for scopes containing a dash
- Handled cases when a prop can be multiple types
- Better guess of prop type
- Remove `.widget()` for `.model()`
- Remove `.event('update:modelValue')` for `.model()`
2023-05-15 11:23:42 +02:00
Thierry Goettelmann
008f7a30fd feat(lite): add VM tab bar (#6766) 2023-05-15 11:15:52 +02:00
Thierry Goettelmann
ff65dbcba7 feat(lite): extract and update "unreachable hosts modal" (#6745)
Extraction of unreachable host modal to its own component + Move the subtitle to the description.

Refer to #6744 for final design.
2023-05-15 11:11:19 +02:00
ggunullu
264a0d1678 fix(@vates/nbd-client): add custom coverage threshold to tap test
By default, Tap requires 100% coverage of all lines, branches, functions and statements.
We enforce a custom threshold matching the current coverage to avoid regressions.

See https://github.com/vatesfr/xen-orchestra/actions/runs/4956232764/jobs/8866437368
2023-05-15 10:18:02 +02:00
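The resulting thresholds, as set in the package's `test-integration` script later in this diff:

```
tap --lines 97 --functions 95 --branches 74 --statements 97 tests/*.integ.js
```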
ggunullu
7dcaf454ed fix(eslint): treat *.integ.js as test files
Introduced by 3f73138fc3
2023-05-15 10:18:02 +02:00
Julien Fontanet
17b2756291 feat: release 5.82.1 2023-05-12 16:47:21 +02:00
Julien Fontanet
57e48b5d34 feat: technical release 2023-05-12 15:40:38 +02:00
Julien Fontanet
57ed984e5a feat(xo-web): 5.117.1 2023-05-12 15:40:16 +02:00
Julien Fontanet
100122f388 feat(xo-server): 5.114.1 2023-05-12 15:39:36 +02:00
Julien Fontanet
12d4b3396e feat(@xen-orchestra/proxy): 0.26.22 2023-05-12 15:39:16 +02:00
Julien Fontanet
ab35c710cb feat(@xen-orchestra/backups): 0.36.1 2023-05-12 15:38:46 +02:00
Florent BEAUCHAMP
4bd5b38aeb fix(backups): fix health check task during CR (#6830)
Fixes https://xcp-ng.org/forum/post/62073

`healthCheck` is launched after `cleanVm`, therefore it should be closing the parent task, not `cleanVm`.
2023-05-12 10:45:32 +02:00
Julien Fontanet
836db1b807 fix(xo-web/new/network): correct type for vlan (#6829)
BREAKING CHANGE: API method `network.create` no longer accepts a `string` for `vlan` param.

Fixes https://xcp-ng.org/forum/post/62090

Either `number` or `undefined`, not an empty string.
2023-05-12 10:36:59 +02:00
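For illustration, a call that satisfies the new type (assuming xo-cli's `json:` prefix to pass a real number; the other parameters are placeholders):

```
xo-cli network.create pool=<pool uuid> name=my-network vlan=json:10
```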
Julien Fontanet
73d88cc5f1 fix(xo-server/vm.convertToTemplate): handle VBD_IS_EMPTY (#6808)
Fixes https://xcp-ng.org/forum/post/61653
2023-05-12 09:12:41 +02:00
Julien Fontanet
3def66d968 chore(xo-vmdk-to-vhd): move notes.md to docs/
So that it will be correctly ignored when publishing the package.
2023-05-12 09:10:00 +02:00
Gabriel Gunullu
3f73138fc3 fix(test-integration): run integration tests only in ci (#6826)
Fixes issues introduced by

- be6233f
- adc5e7d

After the switch from Jest to Tap/Test, those tests were no longer executed by the test-integration script.
2023-05-11 17:47:48 +02:00
Julien Fontanet
bfe621a21d feat: technical release 2023-05-11 14:35:15 +02:00
Julien Fontanet
32fa792eeb feat(xo-web): 5.117.0 2023-05-11 14:23:02 +02:00
Julien Fontanet
a833050fc2 feat(xo-server): 5.114.0 2023-05-11 14:17:40 +02:00
Julien Fontanet
e7e6294bc3 feat(xo-vmdk-to-vhd): 2.5.4 2023-05-11 14:09:23 +02:00
Julien Fontanet
7c71884e27 feat(@vates/task): 0.1.2 2023-05-11 14:03:57 +02:00
Florent BEAUCHAMP
3e822044f2 fix(xo-vmdk-to-vhd): wait for OVA stream to be written before reading more data (#6800) 2023-05-11 12:23:06 +02:00
Julien Fontanet
d457f5fca4 chore(xo-server): use Task.run() helper 2023-05-11 11:10:00 +02:00
Julien Fontanet
1837e01719 fix(xo-server): new Task() now expects data instead of name option
Introduced by 036f3f6bd
2023-05-11 11:08:31 +02:00
Julien Fontanet
f17f5abf0f fix(xo-server/pif.reconfigureIp): accepts empty strings for dns, gateway, ip and netmask params 2023-05-11 09:08:05 +02:00
Florent BEAUCHAMP
82c229c755 fix(xo-server): better handling of importing running VM from ESXi (#6825)
Fixes https://xcp-ng.org/forum/post/59879

Fixes `Cannot read properties of undefined (reading 'stream')` error message
2023-05-10 18:25:37 +02:00
Julien Fontanet
c7e3ba3184 feat(xo-web/plugins): names can be clicked to filter out other plugins 2023-05-10 17:40:11 +02:00
Thierry Goettelmann
470c9bb6c8 fix(lite): handle escape key on CollectionFilter and CollectionSorter modals (#6822)
UiModal `@close` event was not defined on `CollectionFilter` and `CollectionSorter` modals.
2023-05-10 14:44:30 +02:00
Thierry Goettelmann
bb3ab20b2a fix(lite): typo in component name (#6821) 2023-05-10 10:11:06 +02:00
Julien Fontanet
90ce1c4d1e test(task/combineEvents): initial unit tests 2023-05-09 15:16:41 +02:00
Julien Fontanet
5c436f3870 fix(task/combineEvents): defineProperty → defineProperties
Fixes zammad#14566
2023-05-09 15:12:12 +02:00
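The root cause is a one-character API mix-up: `Object.defineProperty` takes a single property name plus one descriptor, while `Object.defineProperties` takes a map of descriptors, which is what the call site (visible in the `combineEvents.js` hunk below) was passing:

```js
const parent = { $root: null }
const taskLog = {}

// singular form: (object, propertyName, descriptor)
Object.defineProperty(taskLog, '$parent', { value: parent })

// plural form: (object, { propertyName: descriptor, ... }) — what the fixed code uses
Object.defineProperties(taskLog, { $parent: { value: parent }, $root: { value: parent.$root } })
```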
Mathieu
159339625d feat(xo-server/vm.create): add resourceSet tags to created VM (#6812) 2023-05-09 14:33:59 +02:00
Julien Fontanet
87e6f7fded fix(xen-api/putResource): fix (302) redirection with non-stream body
Fixes zammad#13375
Fixes zammad#13952
Fixes zammad#14001
2023-05-09 14:09:33 +02:00
Pierre Donias
fd2c7c2fc3 fix(CHANGELOG): fix version number (#6805) 2023-04-28 14:52:44 +02:00
Mathieu
7fc76c1df4 feat: release 5.82 (#6804) 2023-04-28 14:32:01 +02:00
Mathieu
f2758d036d feat: technical release (#6803) 2023-04-28 13:28:15 +02:00
272 changed files with 9182 additions and 5791 deletions

View File

@@ -28,7 +28,7 @@ module.exports = {
},
},
{
files: ['*.{spec,test}.{,c,m}js'],
files: ['*.{integ,spec,test}.{,c,m}js'],
rules: {
'n/no-unpublished-require': 'off',
'n/no-unpublished-import': 'off',

View File

@@ -21,7 +21,7 @@
"fuse-native": "^2.2.6",
"lru-cache": "^7.14.0",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.4.0"
"vhd-lib": "^4.5.0"
},
"scripts": {
"postversion": "npm publish --access public"

View File

@@ -313,8 +313,8 @@ module.exports = class NbdClient {
const exportSize = this.#exportSize
const chunkSize = 2 * 1024 * 1024
indexGenerator = function* () {
const nbBlocks = Math.ceil(exportSize / chunkSize)
for (let index = 0; index < nbBlocks; index++) {
const nbBlocks = Math.ceil(Number(exportSize / BigInt(chunkSize)))
for (let index = 0; BigInt(index) < nbBlocks; index++) {
yield { index, size: chunkSize }
}
}
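A note on the BigInt arithmetic above: mixing BigInt and Number operands throws a TypeError, hence the explicit conversions; BigInt division also truncates toward zero:

```js
// 10 MiB export read in 2 MiB chunks
const exportSize = 10n * 1024n * 1024n // BigInt, as exposed by the NBD client
const chunkSize = 2 * 1024 * 1024 // Number

// `exportSize / chunkSize` would throw: cannot mix BigInt and other types
const nbBlocks = Math.ceil(Number(exportSize / BigInt(chunkSize))) // 5
```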

View File

@@ -1,76 +0,0 @@
'use strict'
const NbdClient = require('./index.js')
const { spawn } = require('node:child_process')
const fs = require('node:fs/promises')
const { test } = require('tap')
const tmp = require('tmp')
const { pFromCallback } = require('promise-toolbox')
const { asyncEach } = require('@vates/async-each')
const FILE_SIZE = 2 * 1024 * 1024
async function createTempFile(size) {
const tmpPath = await pFromCallback(cb => tmp.file(cb))
const data = Buffer.alloc(size, 0)
for (let i = 0; i < size; i += 4) {
data.writeUInt32BE(i, i)
}
await fs.writeFile(tmpPath, data)
return tmpPath
}
test('it works with unsecured network', async tap => {
const path = await createTempFile(FILE_SIZE)
const nbdServer = spawn(
'nbdkit',
[
'file',
path,
'--newstyle', //
'--exit-with-parent',
'--read-only',
'--export-name=MY_SECRET_EXPORT',
],
{
stdio: ['inherit', 'inherit', 'inherit'],
}
)
const client = new NbdClient({
address: 'localhost',
exportname: 'MY_SECRET_EXPORT',
secure: false,
})
await client.connect()
tap.equal(client.exportSize, BigInt(FILE_SIZE))
const CHUNK_SIZE = 128 * 1024 // non default size
const indexes = []
for (let i = 0; i < FILE_SIZE / CHUNK_SIZE; i++) {
indexes.push(i)
}
// read multiple blocks in parallel
await asyncEach(
indexes,
async i => {
const block = await client.readBlock(i, CHUNK_SIZE)
let blockOk = true
let firstFail
for (let j = 0; j < CHUNK_SIZE; j += 4) {
const wanted = i * CHUNK_SIZE + j
const found = block.readUInt32BE(j)
blockOk = blockOk && found === wanted
if (!blockOk && firstFail === undefined) {
firstFail = j
}
}
tap.ok(blockOk, `check block ${i} content`)
},
{ concurrency: 8 }
)
await client.disconnect()
nbdServer.kill()
await fs.unlink(path)
})

View File

@@ -23,7 +23,7 @@
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"promise-toolbox": "^0.21.0",
"xen-api": "^1.3.0"
"xen-api": "^1.3.1"
},
"devDependencies": {
"tap": "^16.3.0",
@@ -31,6 +31,6 @@
},
"scripts": {
"postversion": "npm publish --access public",
"test-integration": "tap *.spec.js"
"test-integration": "tap --lines 97 --functions 95 --branches 74 --statements 97 tests/*.integ.js"
}
}

View File

@@ -0,0 +1,182 @@
Public Key Info:
Public Key Algorithm: RSA
Key Security Level: High (3072 bits)
modulus:
00:be:92:be:df:de:0a:ab:38:fc:1a:c0:1a:58:4d:86
b8:1f:25:10:7d:19:05:17:bf:02:3d:e9:ef:f8:c0:04
5d:6f:98:de:5c:dd:c3:0f:e2:61:61:e4:b5:9c:42:ac
3e:af:fd:30:10:e1:54:32:66:75:f6:80:90:85:05:a0
6a:14:a2:6f:a7:2e:f0:f3:52:94:2a:f2:34:fc:0d:b4
fb:28:5d:1c:11:5c:59:6e:63:34:ba:b3:fd:73:b1:48
35:00:84:53:da:6a:9b:84:ab:64:b1:a1:2b:3a:d1:5a
d7:13:7c:12:2a:4e:72:e9:96:d6:30:74:c5:71:05:14
4b:2d:01:94:23:67:4e:37:3c:1e:c1:a0:bc:34:04:25
21:11:fb:4b:6b:53:74:8f:90:93:57:af:7f:3b:78:d6
a4:87:fe:7d:ed:20:11:8b:70:54:67:b8:c9:f5:c0:6b
de:4e:e7:a5:79:ff:f7:ad:cf:10:57:f5:51:70:7b:54
68:28:9e:b9:c2:10:7b:ab:aa:11:47:9f:ec:e6:2f:09
44:4a:88:5b:dd:8c:10:b4:c4:03:25:06:d9:e0:9f:a0
0d:cf:94:4b:3b:fa:a5:17:2c:e4:67:c4:17:6a:ab:d8
c8:7a:16:41:b9:91:b7:9c:ae:8c:94:be:26:61:51:71
c1:a6:39:39:97:75:28:a9:0e:21:ea:f0:bd:71:4a:8c
e1:f8:1d:a9:22:2f:10:a8:1b:e5:a4:9a:fd:0f:fa:c6
20:bc:96:99:79:c6:ba:a4:1f:3e:d4:91:c5:af:bb:71
0a:5a:ef:69:9c:64:69:ce:5a:fe:3f:c2:24:f4:26:d4
3d:ab:ab:9a:f0:f6:f1:b1:64:a9:f4:e2:34:6a:ab:2e
95:47:b9:07:5a:39:c6:95:9c:a9:e8:ed:71:dd:c1:21
16:c8:2d:4c:2c:af:06:9d:c6:fa:fe:c5:2a:6c:b4:c3
d5:96:fc:5e:fd:ec:1c:30:b4:9d:cb:29:ef:a8:50:1c
21:
public exponent:
01:00:01:
private exponent:
25:37:c5:7d:35:01:02:65:73:9e:c9:cb:9b:59:30:a9
3e:b3:df:5f:7f:06:66:97:d0:19:45:59:af:4b:d8:ce
62:a0:09:35:3b:bd:ff:99:27:89:95:bf:fe:0f:6b:52
26:ce:9c:97:7f:5a:11:29:bf:79:ef:ab:c9:be:ca:90
4d:0d:58:1e:df:65:01:30:2c:6d:a2:b5:c4:4f:ec:fb
6b:eb:9b:32:ac:c5:6e:70:83:78:be:f4:0d:a7:1e:c1
f3:22:e4:b9:70:3e:85:0f:6f:ef:dc:d8:f3:78:b5:73
f1:83:36:8c:fa:9b:28:91:63:ad:3c:f0:de:5c:ae:94
eb:ea:36:03:20:06:bf:74:c7:50:eb:52:36:1a:65:21
eb:40:17:7f:93:61:dd:33:d0:02:bc:ec:6d:31:f1:41
5a:a9:d1:f0:00:66:4c:c4:18:47:d5:67:e3:cd:bb:83
44:07:ab:62:83:21:dc:d8:e6:89:37:08:bb:9d:ea:62
c2:5d:ce:85:c2:dc:48:27:0c:a4:23:61:b7:30:e7:26
44:dc:1e:5c:2e:16:35:2b:2e:a6:e6:a4:ce:1f:9b:e9
fe:96:fa:49:1d:fb:2a:df:bc:bf:46:da:52:f8:37:8a
84:ab:e4:73:e6:46:56:b5:b4:3d:e1:63:eb:02:8e:d7
67:96:c4:dc:28:6d:6b:b6:0c:a3:0b:db:87:29:ad:f9
ec:73:b6:55:a3:40:32:13:84:c7:2f:33:74:04:dc:42
00:11:9c:fb:fc:62:35:b3:82:c3:3c:28:80:e8:09:a8
97:c7:c1:2e:3d:27:fa:4f:9b:fc:c2:34:58:41:5c:a1
e2:70:2e:2f:82:ad:bd:bd:8e:dd:23:12:25:de:89:70
60:75:48:90:80:ac:55:74:51:6f:49:9e:7f:63:41:8b
3c:b1:f5:c3:6b:4b:5a:50:a6:4d:38:e8:82:c2:04:c8
30:fd:06:9b:c1:04:27:b6:63:3a:5e:f5:4d:00:c3:d1
prime1:
00:f6:00:2e:7d:89:61:24:16:5e:87:ca:18:6c:03:b8
b4:33:df:4a:a7:7f:db:ed:39:15:41:12:61:4f:4e:b4
de:ab:29:d9:0c:6c:01:7e:53:2e:ee:e7:5f:a2:e4:6d
c6:4b:07:4e:d8:a3:ae:45:06:97:bd:18:a3:e9:dd:29
54:64:6d:f0:af:08:95:ae:ae:3e:71:63:76:2a:a1:18
c4:b1:fc:bc:3d:42:15:74:b3:c5:38:1f:5d:92:f1:b2
c6:3f:10:fe:35:1a:c6:b1:ce:70:38:ff:08:5c:de:61
79:c7:50:91:22:4d:e9:c8:18:49:e2:5c:91:84:86:e2
4d:0f:6e:9b:0d:81:df:aa:f3:59:75:56:e9:33:18:dd
ab:39:da:e2:25:01:05:a1:6e:23:59:15:2c:89:35:c7
ae:9c:c7:ea:88:9a:1a:f3:48:07:11:82:59:79:8c:62
53:06:37:30:14:b3:82:b1:50:fc:ae:b8:f7:1c:57:44
7d:
prime2:
00:c6:51:cc:dc:88:2e:cf:98:90:10:19:e0:d3:a4:d1
3f:dc:b0:29:d3:bb:26:ee:eb:00:17:17:d1:d1:bb:9b
34:b1:4e:af:b5:6c:1c:54:53:b4:bb:55:da:f7:78:cd
38:b4:2e:3a:8c:63:80:3b:64:9c:b4:2b:cd:dd:50:0b
05:d2:00:7a:df:8e:c3:e6:29:e0:9c:d8:40:b7:11:09
f4:38:df:f6:ed:93:1e:18:d4:93:fa:8d:ee:82:9c:0f
c1:88:26:84:9d:4f:ae:8a:17:d5:55:54:4c:c6:0a:ac
4d:ec:33:51:68:0f:4b:92:2e:04:57:fe:15:f5:00:46
5c:8e:ad:09:2c:e7:df:d5:36:7a:4e:bd:da:21:22:d7
58:b4:72:93:94:af:34:cc:e2:b8:d0:4f:0b:5d:97:08
12:19:17:34:c5:15:49:00:48:56:13:b8:45:4e:3b:f8
bc:d5:ab:d9:6d:c2:4a:cc:01:1a:53:4d:46:50:49:3b
75:
coefficient:
63:67:50:29:10:6a:85:a3:dc:51:90:20:76:86:8c:83
8e:d5:ff:aa:75:fd:b5:f8:31:b0:96:6c:18:1d:5b:ed
a4:2e:47:8d:9c:c2:1e:2c:a8:6d:4b:10:a5:c2:53:46
8a:9a:84:91:d7:fc:f5:cc:03:ce:b9:3d:5c:01:d2:27
99:7b:79:89:4f:a1:12:e3:05:5d:ee:10:f6:8c:e6:ce
5e:da:32:56:6d:6f:eb:32:b4:75:7b:94:49:d8:2d:9e
4d:19:59:2e:e4:0b:bc:95:df:df:65:67:a1:dd:c6:2b
99:f4:76:e8:9f:fa:57:1d:ca:f9:58:a9:ce:9b:30:5c
42:8a:ba:05:e7:e2:15:45:25:bc:e9:68:c1:8b:1a:37
cc:e1:aa:45:2e:94:f5:81:47:1e:64:7f:c0:c1:b7:a8
21:58:18:a9:a0:ed:e0:27:75:bf:65:81:6b:e4:1d:5a
b7:7e:df:d8:28:c6:36:21:19:c8:6e:da:ca:9e:da:84
exp1:
00:ba:d7:fe:77:a9:0d:98:2c:49:56:57:c0:5e:e2:20
ba:f6:1f:26:03:bc:d0:5d:08:9b:45:16:61:c4:ab:e2
22:b1:dc:92:17:a6:3d:28:26:a4:22:1e:a8:7b:ff:86
05:33:5d:74:9c:85:0d:cb:2d:ab:b8:9b:6b:7c:28:57
c8:da:92:ca:59:17:6b:21:07:05:34:78:37:fb:3e:ea
a2:13:12:04:23:7e:fa:ee:ed:cf:e0:c5:a9:fb:ff:0a
2b:1b:21:9c:02:d7:b8:8c:ba:60:70:59:fc:8f:14:f4
f2:5a:d9:ad:b2:61:7d:2c:56:8e:5f:98:b1:89:f8:2d
10:1c:a5:84:ad:28:b4:aa:92:34:a3:34:04:e1:a3:84
52:16:1a:52:e3:8a:38:2d:99:8a:cd:91:90:87:12:ca
fc:ab:e6:08:14:03:00:6f:41:88:e4:da:9d:7c:fd:8c
7c:c4:de:cb:ed:1d:3f:29:d0:7a:6b:76:df:71:ae:32
bd:
exp2:
4a:e9:d3:6c:ea:b4:64:0e:c9:3c:8b:c9:f5:a8:a8:b2
6a:f6:d0:95:fe:78:32:7f:ea:c4:ce:66:9f:c7:32:55
b1:34:7c:03:18:17:8b:73:23:2e:30:bc:4a:07:03:de
8b:91:7a:e4:55:21:b7:4d:c6:33:f8:e8:06:d5:99:94
55:43:81:26:b9:93:1e:7a:6b:32:54:2d:fd:f9:1d:bd
77:4e:82:c4:33:72:87:06:a5:ef:5b:75:e1:38:7a:6b
2c:b7:00:19:3c:64:3e:1d:ca:a4:34:f7:db:47:64:d6
fa:86:58:15:ea:d1:2d:22:dc:d9:30:4d:b3:02:ab:91
83:03:b2:17:98:6f:60:e6:f7:44:8f:4a:ba:81:a2:bf
0b:4a:cc:9c:b9:a2:44:52:d0:65:3f:b6:97:5f:d9:d8
9c:49:bb:d1:46:bd:10:b2:42:71:a8:85:e5:8b:99:e6
1b:00:93:5d:76:ab:32:6c:a8:39:17:53:9c:38:4d:91
Public Key PIN:
pin-sha256:ISh/UeFjUG5Gwrpx6hMUGQPvg9wOKjOkHmRbs4YjZqs=
Public Key ID:
sha256:21287f51e163506e46c2ba71ea13141903ef83dc0e2a33a41e645bb3862366ab
sha1:1a48455111ac45fb5807c5cdb7b20b896c52f0b6
-----BEGIN RSA PRIVATE KEY-----
MIIG4wIBAAKCAYEAvpK+394Kqzj8GsAaWE2GuB8lEH0ZBRe/Aj3p7/jABF1vmN5c
3cMP4mFh5LWcQqw+r/0wEOFUMmZ19oCQhQWgahSib6cu8PNSlCryNPwNtPsoXRwR
XFluYzS6s/1zsUg1AIRT2mqbhKtksaErOtFa1xN8EipOcumW1jB0xXEFFEstAZQj
Z043PB7BoLw0BCUhEftLa1N0j5CTV69/O3jWpIf+fe0gEYtwVGe4yfXAa95O56V5
//etzxBX9VFwe1RoKJ65whB7q6oRR5/s5i8JREqIW92MELTEAyUG2eCfoA3PlEs7
+qUXLORnxBdqq9jIehZBuZG3nK6MlL4mYVFxwaY5OZd1KKkOIerwvXFKjOH4Haki
LxCoG+Wkmv0P+sYgvJaZeca6pB8+1JHFr7txClrvaZxkac5a/j/CJPQm1D2rq5rw
9vGxZKn04jRqqy6VR7kHWjnGlZyp6O1x3cEhFsgtTCyvBp3G+v7FKmy0w9WW/F79
7BwwtJ3LKe+oUBwhAgMBAAECggGAJTfFfTUBAmVznsnLm1kwqT6z319/BmaX0BlF
Wa9L2M5ioAk1O73/mSeJlb/+D2tSJs6cl39aESm/ee+ryb7KkE0NWB7fZQEwLG2i
tcRP7Ptr65syrMVucIN4vvQNpx7B8yLkuXA+hQ9v79zY83i1c/GDNoz6myiRY608
8N5crpTr6jYDIAa/dMdQ61I2GmUh60AXf5Nh3TPQArzsbTHxQVqp0fAAZkzEGEfV
Z+PNu4NEB6tigyHc2OaJNwi7nepiwl3OhcLcSCcMpCNhtzDnJkTcHlwuFjUrLqbm
pM4fm+n+lvpJHfsq37y/RtpS+DeKhKvkc+ZGVrW0PeFj6wKO12eWxNwobWu2DKML
24cprfnsc7ZVo0AyE4THLzN0BNxCABGc+/xiNbOCwzwogOgJqJfHwS49J/pPm/zC
NFhBXKHicC4vgq29vY7dIxIl3olwYHVIkICsVXRRb0mef2NBizyx9cNrS1pQpk04
6ILCBMgw/QabwQQntmM6XvVNAMPRAoHBAPYALn2JYSQWXofKGGwDuLQz30qnf9vt
ORVBEmFPTrTeqynZDGwBflMu7udfouRtxksHTtijrkUGl70Yo+ndKVRkbfCvCJWu
rj5xY3YqoRjEsfy8PUIVdLPFOB9dkvGyxj8Q/jUaxrHOcDj/CFzeYXnHUJEiTenI
GEniXJGEhuJND26bDYHfqvNZdVbpMxjdqzna4iUBBaFuI1kVLIk1x66cx+qImhrz
SAcRgll5jGJTBjcwFLOCsVD8rrj3HFdEfQKBwQDGUczciC7PmJAQGeDTpNE/3LAp
07sm7usAFxfR0bubNLFOr7VsHFRTtLtV2vd4zTi0LjqMY4A7ZJy0K83dUAsF0gB6
347D5ingnNhAtxEJ9Djf9u2THhjUk/qN7oKcD8GIJoSdT66KF9VVVEzGCqxN7DNR
aA9Lki4EV/4V9QBGXI6tCSzn39U2ek692iEi11i0cpOUrzTM4rjQTwtdlwgSGRc0
xRVJAEhWE7hFTjv4vNWr2W3CSswBGlNNRlBJO3UCgcEAutf+d6kNmCxJVlfAXuIg
uvYfJgO80F0Im0UWYcSr4iKx3JIXpj0oJqQiHqh7/4YFM110nIUNyy2ruJtrfChX
yNqSylkXayEHBTR4N/s+6qITEgQjfvru7c/gxan7/worGyGcAte4jLpgcFn8jxT0
8lrZrbJhfSxWjl+YsYn4LRAcpYStKLSqkjSjNATho4RSFhpS44o4LZmKzZGQhxLK
/KvmCBQDAG9BiOTanXz9jHzE3svtHT8p0Hprdt9xrjK9AoHASunTbOq0ZA7JPIvJ
9aiosmr20JX+eDJ/6sTOZp/HMlWxNHwDGBeLcyMuMLxKBwPei5F65FUht03GM/jo
BtWZlFVDgSa5kx56azJULf35Hb13ToLEM3KHBqXvW3XhOHprLLcAGTxkPh3KpDT3
20dk1vqGWBXq0S0i3NkwTbMCq5GDA7IXmG9g5vdEj0q6gaK/C0rMnLmiRFLQZT+2
l1/Z2JxJu9FGvRCyQnGoheWLmeYbAJNddqsybKg5F1OcOE2RAoHAY2dQKRBqhaPc
UZAgdoaMg47V/6p1/bX4MbCWbBgdW+2kLkeNnMIeLKhtSxClwlNGipqEkdf89cwD
zrk9XAHSJ5l7eYlPoRLjBV3uEPaM5s5e2jJWbW/rMrR1e5RJ2C2eTRlZLuQLvJXf
32Vnod3GK5n0duif+lcdyvlYqc6bMFxCiroF5+IVRSW86WjBixo3zOGqRS6U9YFH
HmR/wMG3qCFYGKmg7eAndb9lgWvkHVq3ft/YKMY2IRnIbtrKntqE
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1,169 @@
'use strict'
const NbdClient = require('../index.js')
const { spawn, exec } = require('node:child_process')
const fs = require('node:fs/promises')
const { test } = require('tap')
const tmp = require('tmp')
const { pFromCallback } = require('promise-toolbox')
const { Socket } = require('node:net')
const { NBD_DEFAULT_PORT } = require('../constants.js')
const assert = require('node:assert')
const FILE_SIZE = 10 * 1024 * 1024
async function createTempFile(size) {
const tmpPath = await pFromCallback(cb => tmp.file(cb))
const data = Buffer.alloc(size, 0)
for (let i = 0; i < size; i += 4) {
data.writeUInt32BE(i, i)
}
await fs.writeFile(tmpPath, data)
return tmpPath
}
async function spawnNbdKit(path) {
let tries = 5
// wait for server to be ready
const nbdServer = spawn(
'nbdkit',
[
'file',
path,
'--newstyle', //
'--exit-with-parent',
'--read-only',
'--export-name=MY_SECRET_EXPORT',
'--tls=on',
'--tls-certificates=./tests/',
// '--tls-verify-peer',
// '--verbose',
'--exit-with-parent',
],
{
stdio: ['inherit', 'inherit', 'inherit'],
}
)
nbdServer.on('error', err => {
console.error(err)
})
do {
try {
const socket = new Socket()
await new Promise((resolve, reject) => {
socket.connect(NBD_DEFAULT_PORT, 'localhost')
socket.once('error', reject)
socket.once('connect', resolve)
})
socket.destroy()
break
} catch (err) {
tries--
if (tries <= 0) {
throw err
} else {
await new Promise(resolve => setTimeout(resolve, 1000))
}
}
} while (true)
return nbdServer
}
async function killNbdKit() {
return new Promise((resolve, reject) =>
exec('pkill -9 -f -o nbdkit', err => {
err ? reject(err) : resolve()
})
)
}
test('it works with unsecured network', async tap => {
const path = await createTempFile(FILE_SIZE)
let nbdServer = await spawnNbdKit(path)
const client = new NbdClient(
{
address: '127.0.0.1',
exportname: 'MY_SECRET_EXPORT',
cert: `-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUeHpQ0IeD6BmP2zgsv3LV3J4BI/EwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMzA1MTcxMzU1MzBaFw0yNDA1
MTYxMzU1MzBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC/8wLopj/iZY6ijmpvgCJsl+zY0hQZQcIoaCs0H75u
8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZolevaSJLNT2Iolscvc2W9NCF4N1V6y
zs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh67u+uI40732AfQqD01BNCTD/uHRB
lKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y2SJVTeT4a1sSJixl6I1YPmt80FJh
gq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULwdJOGgmqGRDzgZKJS5UUpxe/ViEO4
59I18vIkgibaRYhENgmnP3lIzTOLlUe07tbSML5RGBbBAgMBAAGjUzBRMB0GA1Ud
DgQWBBR/8+zYoL0H0LdWfULHg1LynFdSbzAfBgNVHSMEGDAWgBR/8+zYoL0H0LdW
fULHg1LynFdSbzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBD
OF5bTmbDEGoZ6OuQaI0vyya/T4FeaoWmh22gLeL6dEEmUVGJ1NyMTOvG9GiGJ8OM
QhD1uHJei45/bXOYIDGey2+LwLWye7T4vtRFhf8amYh0ReyP/NV4/JoR/U3pTSH6
tns7GZ4YWdwUhvOOlm17EQKVO/hP3t9mp74gcjdL4bCe5MYSheKuNACAakC1OR0U
ZakJMP9ijvQuq8spfCzrK+NbHKNHR9tEgQw+ax/t1Au4dGVtFbcoxqCrx2kTl0RP
CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
57ABhfYpMlZ4aHjuN1bL
-----END CERTIFICATE-----
`,
},
{
readAhead: 2,
}
)
await client.connect()
tap.equal(client.exportSize, BigInt(FILE_SIZE))
const CHUNK_SIZE = 1024 * 1024 // non default size
const indexes = []
for (let i = 0; i < FILE_SIZE / CHUNK_SIZE; i++) {
indexes.push(i)
}
const nbdIterator = client.readBlocks(function* () {
for (const index of indexes) {
yield { index, size: CHUNK_SIZE }
}
})
let i = 0
for await (const block of nbdIterator) {
let blockOk = true
let firstFail
for (let j = 0; j < CHUNK_SIZE; j += 4) {
const wanted = i * CHUNK_SIZE + j
const found = block.readUInt32BE(j)
blockOk = blockOk && found === wanted
if (!blockOk && firstFail === undefined) {
firstFail = j
}
}
tap.ok(blockOk, `check block ${i} content`)
i++
// flaky server is flaky
if (i % 7 === 0) {
// kill the oldest nbdkit process (pkill -o)
await killNbdKit()
nbdServer = await spawnNbdKit(path)
}
}
// we can reuse the connection to read other blocks
// default iterator
const nbdIteratorWithDefaultBlockIterator = client.readBlocks()
let nb = 0
for await (const block of nbdIteratorWithDefaultBlockIterator) {
nb++
tap.equal(block.length, 2 * 1024 * 1024)
}
tap.equal(nb, 5)
assert.rejects(() => client.readBlock(100, CHUNK_SIZE))
await client.disconnect()
// double disconnection shouldn't pose any problem
await client.disconnect()
nbdServer.kill()
await fs.unlink(path)
})

View File

@@ -0,0 +1,21 @@
-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUeHpQ0IeD6BmP2zgsv3LV3J4BI/EwDQYJKoZIhvcNAQEL
BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMzA1MTcxMzU1MzBaFw0yNDA1
MTYxMzU1MzBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC/8wLopj/iZY6ijmpvgCJsl+zY0hQZQcIoaCs0H75u
8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZolevaSJLNT2Iolscvc2W9NCF4N1V6y
zs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh67u+uI40732AfQqD01BNCTD/uHRB
lKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y2SJVTeT4a1sSJixl6I1YPmt80FJh
gq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULwdJOGgmqGRDzgZKJS5UUpxe/ViEO4
59I18vIkgibaRYhENgmnP3lIzTOLlUe07tbSML5RGBbBAgMBAAGjUzBRMB0GA1Ud
DgQWBBR/8+zYoL0H0LdWfULHg1LynFdSbzAfBgNVHSMEGDAWgBR/8+zYoL0H0LdW
fULHg1LynFdSbzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBD
OF5bTmbDEGoZ6OuQaI0vyya/T4FeaoWmh22gLeL6dEEmUVGJ1NyMTOvG9GiGJ8OM
QhD1uHJei45/bXOYIDGey2+LwLWye7T4vtRFhf8amYh0ReyP/NV4/JoR/U3pTSH6
tns7GZ4YWdwUhvOOlm17EQKVO/hP3t9mp74gcjdL4bCe5MYSheKuNACAakC1OR0U
ZakJMP9ijvQuq8spfCzrK+NbHKNHR9tEgQw+ax/t1Au4dGVtFbcoxqCrx2kTl0RP
CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
57ABhfYpMlZ4aHjuN1bL
-----END CERTIFICATE-----

View File

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC/8wLopj/iZY6i
jmpvgCJsl+zY0hQZQcIoaCs0H75u8PPSzHedtOLURAkJeMmIS40UY/eIvHh7yZol
evaSJLNT2Iolscvc2W9NCF4N1V6yzs4pDzP+YPF7Q8ldNaQIX0bAk4PfaMSM+pLh
67u+uI40732AfQqD01BNCTD/uHRBlKnQuqQpe9UM9UzRRVejpu1r19D4dJruAm6y
2SJVTeT4a1sSJixl6I1YPmt80FJhgq9O2KRGbXp1xIjemWgW99MHg63pTgxEiULw
dJOGgmqGRDzgZKJS5UUpxe/ViEO459I18vIkgibaRYhENgmnP3lIzTOLlUe07tbS
ML5RGBbBAgMBAAECggEATLYiafcTHfgnZmjTOad0WoDnC4n9tVBV948WARlUooLS
duL3RQRHCLz9/ZaTuFA1XDpNcYyc/B/IZoU7aJGZR3+JSmJBjowpUphu+klVNNG4
i6lDRrzYlUI0hfdLjHsDTDBIKi91KcB0lix/VkvsrVQvDHwsiR2ZAIiVWAWQFKrR
5O3DhSTHbqyq47uR58rWr4Zf3zvZaUl841AS1yELzCiZqz7AenvyWphim0c0XA5d
I63CEShntHnEAA9OMcP8+BNf/3AmqB4welY+m8elB3aJNH+j7DKq/AWqaM5nl2PC
cS6qgpxwOyTxEOyj1xhwK5ZMRR3heW3NfutIxSOPlwKBgQDB9ZkrBeeGVtCISO7C
eCANzSLpeVrahTvaCSQLdPHsLRLDUc+5mxdpi3CaRlzYs3S1OWdAtyWX9mBryltF
qDPhCNjFDyHok4D3wLEWdS9oUVwEKUM8fOPW3tXLLiMM7p4862Qo7LqnqHzPqsnz
22iZo5yjcc7aLJ+VmFrbAowwOwKBgQD9WNCvczTd7Ymn7zEvdiAyNoS0OZ0orwEJ
zGaxtjqVguGklNfrb/UB+eKNGE80+YnMiSaFc9IQPetLntZdV0L7kWYdCI8kGDNA
DbVRCOp+z8DwAojlrb/zsYu23anQozT3WeHxVU66lNuyEQvSW2tJa8gN1htrD7uY
5KLibYrBMwKBgEM0iiHyJcrSgeb2/mO7o7+keJhVSDm3OInP6QFfQAQJihrLWiKB
rpcPjbCm+LzNUX8JqNEvpIMHB1nR/9Ye9frfSdzd5W3kzicKSVHywL5wkmWOtpFa
5Mcq5wFDtzlf5MxO86GKhRJauwRptRgdyhySKFApuva1x4XaCIEiXNjJAoGBAN82
t3c+HCBEv3o05rMYcrmLC1T3Rh6oQlPtwbVmByvfywsFEVCgrc/16MPD3VWhXuXV
GRmPuE8THxLbead30M5xhvShq+xzXgRbj5s8Lc9ZIHbW5OLoOS1vCtgtaQcoJOyi
Rs4pCVqe+QpktnO6lEZ2Libys+maTQEiwNibBxu9AoGAUG1V5aKMoXa7pmGeuFR6
ES+1NDiCt6yDq9BsLZ+e2uqvWTkvTGLLwvH6xf9a0pnnILd0AUTKAAaoUdZS6++E
cGob7fxMwEE+UETp0QBgLtfjtExMOFwr2avw8PV4CYEUkPUAm2OFB2Twh+d/PNfr
FAxF1rN47SBPNbFI8N4TFsg=
-----END PRIVATE KEY-----

View File

@@ -2,10 +2,8 @@
import { Task } from '@vates/task'
const task = new Task({
// data in this object will be sent along the *start* event
//
// property names should be chosen as not to clash with properties used by `Task` or `combineEvents`
data: {
// this object will be sent in the *start* event
properties: {
name: 'my task',
},
@@ -16,13 +14,15 @@ const task = new Task({
// this function is called each time this task or one of its subtasks changes state
const { id, timestamp, type } = event
if (type === 'start') {
const { name, parentId } = event
const { name, parentId, properties } = event
} else if (type === 'end') {
const { result, status } = event
} else if (type === 'info' || type === 'warning') {
const { data, message } = event
} else if (type === 'property') {
const { name, value } = event
} else if (type === 'abortionRequested') {
const { reason } = event
}
},
})
@@ -36,7 +36,6 @@ task.id
// - pending
// - success
// - failure
// - aborted
task.status
// Triggers the abort signal associated to the task.
@@ -89,6 +88,30 @@ const onProgress = makeOnProgress({
onRootTaskStart(taskLog) {
// `taskLog` is an object reflecting the state of this task and all its subtasks,
// and will be mutated in real-time to reflect the changes of the task.
// timestamp at which the task started
taskLog.start
// current status of the task as described in the previous section
taskLog.status
// undefined or a dictionary of properties attached to the task
taskLog.properties
// timestamp at which the abortion was requested, undefined otherwise
taskLog.abortionRequestedAt
// undefined or an array of infos emitted on the task
taskLog.infos
// undefined or an array of warnings emitted on the task
taskLog.warnings
// timestamp at which the task ended, undefined otherwise
taskLog.end
// undefined or the result value of the task
taskLog.result
},
// This function is called each time a root task ends.

View File

@@ -18,10 +18,8 @@ npm install --save @vates/task
import { Task } from '@vates/task'
const task = new Task({
// data in this object will be sent along the *start* event
//
// property names should be chosen as not to clash with properties used by `Task` or `combineEvents`
data: {
// this object will be sent in the *start* event
properties: {
name: 'my task',
},
@@ -32,13 +30,15 @@ const task = new Task({
// this function is called each time this task or one of its subtasks changes state
const { id, timestamp, type } = event
if (type === 'start') {
const { name, parentId } = event
const { name, parentId, properties } = event
} else if (type === 'end') {
const { result, status } = event
} else if (type === 'info' || type === 'warning') {
const { data, message } = event
} else if (type === 'property') {
const { name, value } = event
} else if (type === 'abortionRequested') {
const { reason } = event
}
},
})
@@ -52,7 +52,6 @@ task.id
// - pending
// - success
// - failure
// - aborted
task.status
// Triggers the abort signal associated to the task.
@@ -105,6 +104,30 @@ const onProgress = makeOnProgress({
onRootTaskStart(taskLog) {
// `taskLog` is an object reflecting the state of this task and all its subtasks,
// and will be mutated in real-time to reflect the changes of the task.
// timestamp at which the task started
taskLog.start
// current status of the task as described in the previous section
taskLog.status
// undefined or a dictionary of properties attached to the task
taskLog.properties
// timestamp at which the abortion was requested, undefined otherwise
taskLog.abortionRequestedAt
// undefined or an array of infos emitted on the task
taskLog.infos
// undefined or an array of warnings emitted on the task
taskLog.warnings
// timestamp at which the task ended, undefined otherwise
taskLog.end
// undefined or the result value of the task
taskLog.result
},
// This function is called each time a root task ends.

View File

@@ -4,36 +4,18 @@ const assert = require('node:assert').strict
const noop = Function.prototype
function omit(source, keys, target = { __proto__: null }) {
for (const key of Object.keys(source)) {
if (!keys.has(key)) {
target[key] = source[key]
}
}
return target
}
const IGNORED_START_PROPS = new Set([
'end',
'infos',
'properties',
'result',
'status',
'tasks',
'timestamp',
'type',
'warnings',
])
exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noop, onTaskUpdate = noop }) {
const taskLogs = new Map()
return function onProgress(event) {
const { id, type } = event
let taskLog
if (type === 'start') {
taskLog = omit(event, IGNORED_START_PROPS)
taskLog.start = event.timestamp
taskLog.status = 'pending'
taskLog = {
id,
properties: { __proto__: null, ...event.properties },
start: event.timestamp,
status: 'pending',
}
taskLogs.set(id, taskLog)
const { parentId } = event
@@ -48,7 +30,7 @@ exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noo
assert.notEqual(parent, undefined)
// inject a (non-enumerable) reference to the parent and the root task
Object.defineProperty(taskLog, { $parent: { value: parent }, $root: { value: parent.$root } })
Object.defineProperties(taskLog, { $parent: { value: parent }, $root: { value: parent.$root } })
;(parent.tasks ?? (parent.tasks = [])).push(taskLog)
}
} else {
@@ -65,6 +47,8 @@ exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noo
taskLog.end = event.timestamp
taskLog.result = event.result
taskLog.status = event.status
} else if (type === 'abortionRequested') {
taskLog.abortionRequestedAt = event.timestamp
}
if (type === 'end' && taskLog.$root === taskLog) {

View File

@@ -0,0 +1,81 @@
'use strict'
const assert = require('node:assert').strict
const { describe, it } = require('test')
const { makeOnProgress } = require('./combineEvents.js')
const { Task } = require('./index.js')
describe('makeOnProgress()', function () {
it('works', async function () {
const events = []
let log
const task = new Task({
properties: { name: 'task' },
onProgress: makeOnProgress({
onRootTaskStart(log_) {
assert.equal(log, undefined)
log = log_
events.push('onRootTaskStart')
},
onRootTaskEnd(log_) {
assert.equal(log_, log)
events.push('onRootTaskEnd')
},
onTaskUpdate(log_) {
assert.equal(log_.$root, log)
events.push('onTaskUpdate')
},
}),
})
assert.equal(events.length, 0)
let i = 0
await task.run(async () => {
assert.equal(events[i++], 'onRootTaskStart')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.id, task.id)
assert.equal(log.properties.name, 'task')
assert(Math.abs(log.start - Date.now()) < 10)
Task.set('name', 'new name')
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.name, 'new name')
Task.set('progress', 0)
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.progress, 0)
Task.info('foo', {})
assert.equal(events[i++], 'onTaskUpdate')
assert.deepEqual(log.infos, [{ data: {}, message: 'foo' }])
const subtask = new Task({ properties: { name: 'subtask' } })
await subtask.run(() => {
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.tasks[0].properties.name, 'subtask')
Task.warning('bar', {})
assert.equal(events[i++], 'onTaskUpdate')
assert.deepEqual(log.tasks[0].warnings, [{ data: {}, message: 'bar' }])
subtask.abort()
assert.equal(events[i++], 'onTaskUpdate')
assert(Math.abs(log.tasks[0].abortionRequestedAt - Date.now()) < 10)
})
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.tasks[0].status, 'success')
Task.set('progress', 100)
assert.equal(events[i++], 'onTaskUpdate')
assert.equal(log.properties.progress, 100)
})
assert.equal(events[i++], 'onRootTaskEnd')
assert.equal(events[i++], 'onTaskUpdate')
assert(Math.abs(log.end - Date.now()) < 10)
assert.equal(log.status, 'success')
})
})

View File

@@ -10,11 +10,10 @@ function define(object, property, value) {
const noop = Function.prototype
const ABORTED = 'aborted'
const FAILURE = 'failure'
const PENDING = 'pending'
const SUCCESS = 'success'
exports.STATUS = { ABORTED, FAILURE, PENDING, SUCCESS }
exports.STATUS = { FAILURE, PENDING, SUCCESS }
// stored in the global context so that various versions of the library can interact.
const asyncStorageKey = '@vates/task@0'
@@ -83,8 +82,8 @@ exports.Task = class Task {
return this.#status
}
constructor({ data = {}, onProgress } = {}) {
this.#startData = data
constructor({ properties, onProgress } = {}) {
this.#startData = { properties }
if (onProgress !== undefined) {
this.#onProgress = onProgress
@@ -105,12 +104,16 @@ exports.Task = class Task {
const { signal } = this.#abortController
signal.addEventListener('abort', () => {
if (this.status === PENDING && !this.#running) {
if (this.status === PENDING) {
this.#maybeStart()
const status = ABORTED
this.#status = status
this.#emit('end', { result: signal.reason, status })
this.#emit('abortionRequested', { reason: signal.reason })
if (!this.#running) {
const status = FAILURE
this.#status = status
this.#emit('end', { result: signal.reason, status })
}
}
})
}
@@ -156,9 +159,7 @@ exports.Task = class Task {
this.#running = false
return result
} catch (result) {
const { signal } = this.#abortController
const aborted = signal.aborted && result === signal.reason
const status = aborted ? ABORTED : FAILURE
const status = FAILURE
this.#status = status
this.#emit('end', { status, result })

View File

@@ -15,7 +15,7 @@ function assertEvent(task, expected, eventIndex = -1) {
assert.equal(typeof actual.id, 'string')
assert.equal(typeof actual.timestamp, 'number')
for (const keys of Object.keys(expected)) {
assert.equal(actual[keys], expected[keys])
assert.deepEqual(actual[keys], expected[keys])
}
}
@@ -30,10 +30,10 @@ function createTask(opts) {
describe('Task', function () {
describe('constructor', function () {
it('data properties are passed to the start event', async function () {
const data = { foo: 0, bar: 1 }
const task = createTask({ data })
const properties = { foo: 0, bar: 1 }
const task = createTask({ properties })
await task.run(noop)
assertEvent(task, { ...data, type: 'start' }, 0)
assertEvent(task, { type: 'start', properties }, 0)
})
})
@@ -79,20 +79,22 @@ describe('Task', function () {
})
.catch(noop)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result: reason }, 2)
})
it('does not abort if the task fails without the abort reason', async function () {
const task = createTask()
const reason = {}
const result = new Error()
await task
.run(() => {
task.abort({})
task.abort(reason)
throw result
})
@@ -100,18 +102,20 @@ describe('Task', function () {
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'failure', result }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result }, 2)
})
it('does not abort if the task succeed', async function () {
const task = createTask()
const reason = {}
const result = {}
await task
.run(() => {
task.abort({})
task.abort(reason)
return result
})
@@ -119,9 +123,10 @@ describe('Task', function () {
assert.equal(task.status, 'success')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'success', result }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'success', result }, 2)
})
it('aborts before task is running', function () {
@@ -130,11 +135,12 @@ describe('Task', function () {
task.abort(reason)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assert.equal(task.$events.length, 3)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
assertEvent(task, { type: 'abortionRequested', reason }, 1)
assertEvent(task, { type: 'end', status: 'failure', result: reason }, 2)
})
})
@@ -243,7 +249,7 @@ describe('Task', function () {
assert.equal(task.status, 'failure')
})
it('changes to aborted after run is complete', async function () {
it('changes to failure if aborted after run is complete', async function () {
const task = createTask()
await task
.run(() => {
@@ -252,13 +258,13 @@ describe('Task', function () {
Task.abortSignal.throwIfAborted()
})
.catch(noop)
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
})
it('changes to aborted if aborted when not running', async function () {
it('changes to failure if aborted when not running', function () {
const task = createTask()
task.abort()
assert.equal(task.status, 'aborted')
assert.equal(task.status, 'failure')
})
})

View File

@@ -13,7 +13,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.1",
"version": "0.1.2",
"engines": {
"node": ">=14"
},

View File

@@ -7,8 +7,8 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.36.0",
"@xen-orchestra/fs": "^3.3.4",
"@xen-orchestra/backups": "^0.38.2",
"@xen-orchestra/fs": "^4.0.0",
"filenamify": "^4.1.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
@@ -27,7 +27,7 @@
"scripts": {
"postversion": "npm publish --access public"
},
"version": "1.0.6",
"version": "1.0.8",
"license": "AGPL-3.0-or-later",
"author": {
"name": "Vates SAS",

View File

@@ -1,307 +1,19 @@
'use strict'
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const pTimeout = require('promise-toolbox/timeout')
const { compileTemplate } = require('@xen-orchestra/template')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { Metadata } = require('./_runners/Metadata.js')
const { VmsRemote } = require('./_runners/VmsRemote.js')
const { VmsXapi } = require('./_runners/VmsXapi.js')
const { extractIdsFromSimplePattern } = require('./extractIdsFromSimplePattern.js')
const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
const { Task } = require('./Task.js')
const { VmBackup } = require('./_VmBackup.js')
const { XoMetadataBackup } = require('./_XoMetadataBackup.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const noop = Function.prototype
const getAdaptersByRemote = adapters => {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}
const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
const DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
const DEFAULT_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}
exports.Backup = class Backup {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const { type } = job
const baseSettings = { ...DEFAULT_SETTINGS }
if (type === 'backup') {
Object.assign(baseSettings, DEFAULT_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
this.run = this._runVmBackup
} else if (type === 'metadataBackup') {
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
this.run = this._runMetadataBackup
} else {
exports.createRunner = function createRunner(opts) {
const { type } = opts.job
switch (type) {
case 'backup':
return new VmsXapi(opts)
case 'mirrorBackup':
return new VmsRemote(opts)
case 'metadataBackup':
return new Metadata(opts)
default:
throw new Error(`No runner for the backup type ${type}`)
}
Object.assign(baseSettings, job.settings[''])
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
async _runMetadataBackup() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
async _runVmBackup() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () =>
new VmBackup({
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}).run()
)
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}
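
A minimal usage sketch of the new factory, assuming the options object keeps the shape the old Backup constructor accepted (the diff shows createRunner forwarding opts to each runner unchanged; config, getAdapter, getConnectedRecord, job and schedule are placeholders):

const { createRunner } = require('./Backup.js')

// job.type selects the runner: 'backup' -> VmsXapi, 'mirrorBackup' -> VmsRemote,
// 'metadataBackup' -> Metadata; any other type throws
const runner = createRunner({
  config,             // backup configuration, as before
  getAdapter,         // remoteId => disposable remote adapter
  getConnectedRecord, // (type, uuid) => disposable connected XAPI record
  job,
  schedule,
})
await runner.run()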

View File

@@ -3,14 +3,14 @@
const assert = require('assert')
const { formatFilenameDate } = require('./_filenameDate.js')
const { importDeltaVm } = require('./_deltaVm.js')
const { importIncrementalVm } = require('./_incrementalVm.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
exports.ImportVmBackup = class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
this._adapter = adapter
this._importDeltaVmSettings = { newMacAddresses, mapVdisSrs }
this._importIncrementalVmSettings = { newMacAddresses, mapVdisSrs }
this._metadata = metadata
this._srUuid = srUuid
this._xapi = xapi
@@ -31,11 +31,11 @@ exports.ImportVmBackup = class ImportVmBackup {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(this._importDeltaVmSettings.mapVdisSrs)
Object.entries(this._importIncrementalVmSettings.mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await adapter.readDeltaVmBackup(metadata, ignoredVdis)
backup = await adapter.readIncrementalVmBackup(metadata, ignoredVdis)
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
@@ -49,8 +49,8 @@ exports.ImportVmBackup = class ImportVmBackup {
const vmRef = isFull
? await xapi.VM_import(backup, srRef)
: await importDeltaVm(backup, await xapi.getRecord('SR', srRef), {
...this._importDeltaVmSettings,
: await importIncrementalVm(backup, await xapi.getRecord('SR', srRef), {
...this._importIncrementalVmSettings,
detectBase: false,
})
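
For reference, the renamed _importIncrementalVmSettings consumes mapVdisSrs as a map of VDI UUIDs to target SR UUIDs, where null marks a disk to skip (it ends up in ignoredVdis above). A hypothetical example, all UUIDs invented:

new ImportVmBackup({
  adapter,
  metadata,
  srUuid: 'sr-default-uuid',
  xapi,
  settings: {
    newMacAddresses: true,          // regenerate MAC addresses on import
    mapVdisSrs: {
      'vdi-uuid-1': 'sr-fast-uuid', // restore this disk on a different SR
      'vdi-uuid-2': null,           // skip this disk entirely
    },
  },
})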

View File

@@ -333,7 +333,7 @@ class RemoteAdapter {
const RE_VHDI = /^vhdi(\d+)$/
const handler = this._handler
const diskPath = handler._getFilePath('/' + diskId)
const diskPath = handler.getFilePath('/' + diskId)
const mountDir = yield getTmpDir()
await fromCallback(execFile, 'vhdimount', [diskPath, mountDir])
try {
@@ -404,20 +404,27 @@ class RemoteAdapter {
return `${baseName}.vhd`
}
async listAllVmBackups() {
async listAllVms() {
const handler = this._handler
const backups = { __proto__: null }
await asyncMap(await handler.list(BACKUP_DIR), async entry => {
const vmsUuids = []
await asyncEach(await handler.list(BACKUP_DIR), async entry => {
// ignore hidden and lock files
if (entry[0] !== '.' && !entry.endsWith('.lock')) {
const vmBackups = await this.listVmBackups(entry)
if (vmBackups.length !== 0) {
backups[entry] = vmBackups
}
vmsUuids.push(entry)
}
})
return vmsUuids
}
async listAllVmBackups() {
const vmsUuids = await this.listAllVms()
const backups = { __proto__: null }
await asyncEach(vmsUuids, async vmUuid => {
const vmBackups = await this.listVmBackups(vmUuid)
if (vmBackups.length !== 0) {
backups[vmUuid] = vmBackups
}
})
return backups
}
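
Splitting listAllVms() out lets the new mirror-backup runner enumerate VM directories without reading every backup's metadata, while listAllVmBackups() becomes a thin composition over it. A usage sketch, with adapter an already-constructed RemoteAdapter:

const vmUuids = await adapter.listAllVms()        // ['vm-uuid-1', 'vm-uuid-2', …]
const backups = await adapter.listAllVmBackups()  // { 'vm-uuid-1': [metadata, …], … }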
@@ -691,8 +698,8 @@ class RemoteAdapter {
}
// open the hierarchy of ancestors until we find a full one
async _createSyntheticStream(handler, path) {
const disposableSynthetic = await VhdSynthetic.fromVhdChain(handler, path)
async _createVhdStream(handler, path, { useChain }) {
const disposableSynthetic = useChain ? await VhdSynthetic.fromVhdChain(handler, path) : await openVhd(handler, path)
// the VHDs must not be disposed on return,
// only once the stream is done (or has failed)
@@ -717,7 +724,7 @@ class RemoteAdapter {
return stream
}
async readDeltaVmBackup(metadata, ignoredVdis) {
async readIncrementalVmBackup(metadata, ignoredVdis, { useChain = true } = {}) {
const handler = this._handler
const { vbds, vhds, vifs, vm, vmSnapshot } = metadata
const dir = dirname(metadata._filename)
@@ -725,7 +732,7 @@ class RemoteAdapter {
const streams = {}
await asyncMapSettled(Object.keys(vdis), async ref => {
streams[`${ref}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[ref]))
streams[`${ref}.vhd`] = await this._createVhdStream(handler, join(dir, vhds[ref]), { useChain })
})
return {

View File

@@ -1,7 +1,7 @@
'use strict'
const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
const { PATH_DB_DUMP } = require('./_PoolMetadataBackup.js')
const { PATH_DB_DUMP } = require('./_runners/_PoolMetadataBackup.js')
exports.RestoreMetadataBackup = class RestoreMetadataBackup {
constructor({ backupId, handler, xapi }) {

View File

@@ -1,515 +0,0 @@
'use strict'
const assert = require('assert')
const findLast = require('lodash/findLast.js')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const vhdStreamValidator = require('vhd-lib/vhdStreamValidator.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { pipeline } = require('node:stream')
const { DeltaBackupWriter } = require('./writers/DeltaBackupWriter.js')
const { DeltaReplicationWriter } = require('./writers/DeltaReplicationWriter.js')
const { exportDeltaVm } = require('./_deltaVm.js')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
const { FullBackupWriter } = require('./writers/FullBackupWriter.js')
const { FullReplicationWriter } = require('./writers/FullReplicationWriter.js')
const { getOldEntries } = require('./_getOldEntries.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
const { debug, warn } = createLogger('xo:backups:VmBackup')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
const forkDeltaExport = deltaExport =>
Object.create(deltaExport, {
streams: {
value: mapValues(deltaExport.streams, forkStreamUnpipe),
},
})
const noop = Function.prototype
class VmBackup {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job otherwise they
// will be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
this.vm = vm
const { tags } = this.vm
// VM (snapshot) that is really exported
this.exportedVm = undefined
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isDelta = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._isDelta
? [DeltaBackupWriter, DeltaReplicationWriter]
: [FullBackupWriter, FullReplicationWriter]
const allSettings = job.settings
Object.keys(remoteAdapters).forEach(remoteId => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(new BackupWriter({ backup: this, remoteId, settings: targetSettings }))
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(new ReplicationWriter({ backup: this, sr, settings: targetSettings }))
}
})
}
}
// calls fn for each writer, warns of any errors, and throws only if there are no writers left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own sub-tasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied on manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const { vm } = this
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const { vm } = this
const xapi = this._xapi
const settings = this._settings
const doSnapshot =
settings.unconditionalSnapshot ||
this._isDelta ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
if (doSnapshot) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this.exportedVm = await xapi.getRecord('VM', snapshotRef)
return this.exportedVm.uuid
})
} else {
this.exportedVm = vm
this.timestamp = Date.now()
}
}
async _copyDelta() {
const { exportedVm } = this
const baseVm = this._baseVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportDeltaVm(exportedVm, baseVm, {
fullVdisRequired,
})
// since NBD is network based, if one disk uses NBD, all the disks use it,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
sizeContainers,
timestamp,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _copyFull() {
const { compression } = this.job
const stream = this._throttleStream(
await this._xapi.VM_export(this.exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this.vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this.vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval was reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this.vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// skip the health check when tags are configured and the VM matches none of them
const { tags } = this.vm
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
return
}
await this._callWriters(writer => writer.healthCheck(this._healthCheckSr), 'writer.healthCheck()')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
if (this._isDelta) {
await this._selectBaseVm()
}
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const { vm } = this
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
await (this._isDelta ? this._copyDelta() : this._copyFull())
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
exports.VmBackup = VmBackup
decorateMethodsWith(VmBackup, {
run: defer,
})

View File

@@ -13,10 +13,10 @@ const { createDebounceResource } = require('@vates/disposable/debounceResource.j
const { decorateMethodsWith } = require('@vates/decorate-with')
const { deduped } = require('@vates/disposable/deduped.js')
const { getHandler } = require('@xen-orchestra/fs')
const { createRunner } = require('./Backup.js')
const { parseDuration } = require('@vates/parse-duration')
const { Xapi } = require('@xen-orchestra/xapi')
const { Backup } = require('./Backup.js')
const { RemoteAdapter } = require('./RemoteAdapter.js')
const { Task } = require('./Task.js')
@@ -48,7 +48,7 @@ class BackupWorker {
}
run() {
return new Backup({
return createRunner({
config: this.#config,
getAdapter: remoteId => this.getAdapter(this.#remotes[remoteId]),
getConnectedRecord: Disposable.factory(async function* getConnectedRecord(type, uuid) {

View File

@@ -3,7 +3,6 @@
const { beforeEach, afterEach, test, describe } = require('test')
const assert = require('assert').strict
const rimraf = require('rimraf')
const tmp = require('tmp')
const fs = require('fs-extra')
const uuid = require('uuid')
@@ -14,6 +13,7 @@ const { VHDFOOTER, VHDHEADER } = require('./tests.fixtures.js')
const { VhdFile, Constants, VhdDirectory, VhdAbstract } = require('vhd-lib')
const { checkAliases } = require('./_cleanVm')
const { dirname, basename } = require('path')
const { rimraf } = require('rimraf')
let tempDir, adapter, handler, jobId, vdiId, basePath, relativePath
const rootPath = 'xo-vm-backups/VMUUID/'
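
The moved rimraf import matches the library's v4+ API, which replaces the callback style with a promise-returning named export. A minimal sketch:

const { rimraf } = require('rimraf')
await rimraf(tempDir) // recursively removes the directory, resolves when done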

View File

@@ -33,7 +33,7 @@ const resolveUuid = async (xapi, cache, uuid, type) => {
return ref
}
exports.exportDeltaVm = async function exportDeltaVm(
exports.exportIncrementalVm = async function exportIncrementalVm(
vm,
baseVm,
{
@@ -143,18 +143,18 @@ exports.exportDeltaVm = async function exportDeltaVm(
)
}
exports.importDeltaVm = defer(async function importDeltaVm(
exports.importIncrementalVm = defer(async function importIncrementalVm(
$defer,
deltaVm,
incrementalVm,
sr,
{ cancelToken = CancelToken.none, detectBase = true, mapVdisSrs = {}, newMacAddresses = false } = {}
) {
const { version } = deltaVm
const { version } = incrementalVm
if (compareVersions(version, '1.0.0') < 0) {
throw new Error(`Unsupported delta backup version: ${version}`)
}
const vmRecord = deltaVm.vm
const vmRecord = incrementalVm.vm
const xapi = sr.$xapi
let baseVm
@@ -183,7 +183,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
baseVdis[vbd.VDI] = vbd.$VDI
}
})
const vdiRecords = deltaVm.vdis
const vdiRecords = incrementalVm.vdis
// 0. Create suspend_VDI
let suspendVdi
@@ -240,7 +240,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
await asyncMap(await xapi.getField('VM', vmRef, 'VBDs'), ref => ignoreErrors.call(xapi.call('VBD.destroy', ref)))
// 3. Create VDIs & VBDs.
const vbdRecords = deltaVm.vbds
const vbdRecords = incrementalVm.vbds
const vbds = groupBy(vbdRecords, 'VDI')
const newVdis = {}
await asyncMap(Object.keys(vdiRecords), async vdiRef => {
@@ -309,7 +309,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}
})
const { streams } = deltaVm
const { streams } = incrementalVm
await Promise.all([
// Import VDI contents.
@@ -326,7 +326,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}),
// Create VIFs.
asyncMap(Object.values(deltaVm.vifs), vif => {
asyncMap(Object.values(incrementalVm.vifs), vif => {
let network = vif.$network$uuid && xapi.getObjectByUuid(vif.$network$uuid, undefined)
if (network === undefined) {
@@ -358,8 +358,8 @@ exports.importDeltaVm = defer(async function importDeltaVm(
])
await Promise.all([
deltaVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', deltaVm.vm.name_label),
incrementalVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', incrementalVm.vm.name_label),
])
return vmRef
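
This hunk is a pure rename, 'delta' becoming 'incremental' in the exported names while signatures and behavior are unchanged; callers migrate one-to-one:

// before
const { exportDeltaVm, importDeltaVm } = require('./_deltaVm.js')
// after (same signatures, new names)
const { exportIncrementalVm, importIncrementalVm } = require('./_incrementalVm.js')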

View File

@@ -0,0 +1,134 @@
'use strict'
const { asyncMap } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
const { XoMetadataBackup } = require('./_XoMetadataBackup.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
exports.Metadata = class MetadataBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
}
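
The retention guard at the top of run() rejects three degenerate configurations; restated as a standalone predicate (a sketch, not part of the diff):

// true when at least one retention matches an enabled metadata mode
const hasUsableRetention = ({ isXoMetadata, isEmptyPools, retentionPoolMetadata, retentionXoMetadata }) =>
  !(
    (retentionPoolMetadata === 0 && retentionXoMetadata === 0) || // nothing retained at all
    (!isXoMetadata && retentionPoolMetadata === 0) ||             // pool mode only, but zero retention
    (isEmptyPools && retentionXoMetadata === 0)                   // XO mode only, but zero retention
  )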

View File

@@ -0,0 +1,98 @@
'use strict'
const { asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { Task } = require('../Task.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const { FullRemote } = require('./_vmRunners/FullRemote.js')
const { IncrementalRemote } = require('./_vmRunners/IncrementalRemote.js')
const DEFAULT_REMOTE_VM_SETTINGS = {
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
timeout: 0,
validateVhdStreams: false,
vmTimeout: 0,
}
exports.VmsRemote = class RemoteVmsBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_REMOTE_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
() => this._getAdapter(job.sourceRemote),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
Disposable.all(
extractIdsFromSimplePattern(job.remotes).map(id => id !== job.sourceRemote && this._getAdapter(id))
),
async ({ adapter: sourceRemoteAdapter }, healthCheckSr, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => !!_)
if (remoteAdapters.length === 0) {
return
}
const vmsUuids = await sourceRemoteAdapter.listAllVms()
Task.info('vms', { vms: vmsUuids })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
const opts = {
baseSettings,
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vmUuid] },
sourceRemoteAdapter,
throttleStream,
vmUuid,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalRemote(opts)
} else if (job.mode === 'full') {
vmBackup = new FullRemote(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented for mirror backup`)
}
return runTask(taskStart, () => vmBackup.run())
}
const { concurrency } = settings
await asyncMapSettled(vmsUuids, !concurrency ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}
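
The last line of run() shows the concurrency convention shared by all runners: a falsy value means unlimited, anything else wraps the per-VM handler with limit-concurrency-decorator. The pattern in isolation:

const { limitConcurrency } = require('limit-concurrency-decorator')

const handleVm = async vmUuid => { /* back up one VM */ }
const limited = limitConcurrency(2)(handleVm) // at most 2 calls in flight
await Promise.all(vmUuids.map(limited))       // extra calls queue until a slot frees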

View File

@@ -0,0 +1,138 @@
'use strict'
const { asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { Task } = require('../Task.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const { IncrementalXapi } = require('./_vmRunners/IncrementalXapi.js')
const { FullXapi } = require('./_vmRunners/FullXapi.js')
const DEFAULT_XAPI_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
exports.VmsXapi = class VmsXapiBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_XAPI_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () => {
const opts = {
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalXapi(opts)
} else if (job.mode === 'full') {
vmBackup = new FullXapi(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented`)
}
return vmBackup.run()
})
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}
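
Like the other runners, this one leans on promise-toolbox's Disposable.use: each leading argument is a disposable (or a function returning one, possibly undefined), and the final function receives the resolved values with disposal guaranteed once it settles. Schematically:

await Disposable.use(
  Disposable.all(ids.map(getDisposableRecord)), // acquire a batch of resources
  () => maybeGetOptionalResource(),             // lazily acquired, may be undefined
  async (records, optional) => {
    // use the resources; all of them are disposed when this function settles
  }
)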

View File

@@ -0,0 +1,51 @@
'use strict'
const Disposable = require('promise-toolbox/Disposable')
const pTimeout = require('promise-toolbox/timeout')
const { compileTemplate } = require('@xen-orchestra/template')
const { runTask } = require('./_runTask.js')
const { RemoteTimeoutError } = require('./_RemoteTimeoutError.js')
exports.DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
exports.Abstract = class AbstractRunner {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const baseSettings = this._computeBaseSettings(config, job)
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
}
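
The one non-obvious piece here is the timeout wrapper: promise-toolbox's timeout helper is call()-ed on a pending promise and rejects with the provided error if it does not settle within the delay. In isolation:

const pTimeout = require('promise-toolbox/timeout')

// rejects with RemoteTimeoutError if getAdapter() takes longer than 5 minutes
const disposable = await pTimeout.call(getAdapter(remoteId), 300e3, new RemoteTimeoutError(remoteId))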

View File

@@ -2,10 +2,10 @@
const { asyncMap } = require('@xen-orchestra/async-map')
const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
const { DIR_XO_POOL_METADATA_BACKUPS } = require('../RemoteAdapter.js')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
const { formatFilenameDate } = require('./_filenameDate.js')
const { Task } = require('./Task.js')
const { formatFilenameDate } = require('../_filenameDate.js')
const { Task } = require('../Task.js')
const PATH_DB_DUMP = '/pool/xmldbdump'
exports.PATH_DB_DUMP = PATH_DB_DUMP

View File

@@ -0,0 +1,8 @@
'use strict'
class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}
exports.RemoteTimeoutError = RemoteTimeoutError

View File

@@ -2,9 +2,9 @@
const { asyncMap } = require('@xen-orchestra/async-map')
const { DIR_XO_CONFIG_BACKUPS } = require('./RemoteAdapter.js')
const { formatFilenameDate } = require('./_filenameDate.js')
const { Task } = require('./Task.js')
const { DIR_XO_CONFIG_BACKUPS } = require('../RemoteAdapter.js')
const { formatFilenameDate } = require('../_filenameDate.js')
const { Task } = require('../Task.js')
exports.XoMetadataBackup = class XoMetadataBackup {
constructor({ config, job, remoteAdapters, schedule, settings }) {

View File

@@ -0,0 +1,9 @@
'use strict'
const getAdaptersByRemote = adapters => {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}
exports.getAdaptersByRemote = getAdaptersByRemote
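
A before/after illustration of the helper (values invented):

getAdaptersByRemote([
  { remoteId: 'nfs-1', adapter: nfsAdapter },
  { remoteId: 's3-1', adapter: s3Adapter },
])
// => { 'nfs-1': nfsAdapter, 's3-1': s3Adapter }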

View File

@@ -0,0 +1,6 @@
'use strict'
const { Task } = require('../Task.js')
const noop = Function.prototype
const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
exports.runTask = runTask

View File

@@ -0,0 +1,53 @@
'use strict'
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { AbstractRemote } = require('./_AbstractRemote')
const { FullRemoteWriter } = require('../_writers/FullRemoteWriter')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe')
const { watchStreamSize } = require('../../_watchStreamSize')
const { Task } = require('../../Task')
class FullRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return FullRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'full')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
const stream = await this._sourceRemoteAdapter.readFullVmBackup(metadata)
const sizeContainer = watchStreamSize(stream)
// @todo shouldn't transfer backup if it will be deleted by retention policy (higher retention on source than destination)
await this._callWriters(
writer =>
writer.run({
stream: forkStreamUnpipe(stream),
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
sizeContainer,
}),
'writer.run()'
)
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
exports.FullRemote = FullRemoteVmBackupRunner
decorateMethodsWith(FullRemoteVmBackupRunner, {
_run: defer,
})
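
Two helpers carry this runner: forkStreamUnpipe gives each writer its own fork of the source stream, and watchStreamSize attaches a byte counter. A sketch of the accounting side, assuming the imports above and a container shape of { size: number } (inferred from the `const { size } = sizeContainer` usages elsewhere in this PR):

const stream = await adapter.readFullVmBackup(metadata)
const sizeContainer = watchStreamSize(stream)
stream.on('end', () => {
  console.log('transferred bytes:', sizeContainer.size)
})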

View File

@@ -0,0 +1,65 @@
'use strict'
const { createLogger } = require('@xen-orchestra/log')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe.js')
const { FullRemoteWriter } = require('../_writers/FullRemoteWriter.js')
const { FullXapiWriter } = require('../_writers/FullXapiWriter.js')
const { watchStreamSize } = require('../../_watchStreamSize.js')
const { AbstractXapi } = require('./_AbstractXapi.js')
const { debug } = createLogger('xo:backups:FullXapiVmBackup')
exports.FullXapi = class FullXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [FullRemoteWriter, FullXapiWriter]
}
_mustDoSnapshot() {
const vm = this._vm
const settings = this._settings
return (
settings.unconditionalSnapshot ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
)
}
_selectBaseVm() {}
async _copy() {
const { compression } = this.job
const vm = this._vm
const exportedVm = this._exportedVm
const stream = this._throttleStream(
await this._xapi.VM_export(exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
}
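
The compress expression folds the job's compression setting into what VM_export expects; spelled out:

// compression -> compress argument passed to VM_export:
//   undefined / ''  -> false   (Boolean(compression) short-circuits)
//   'native'        -> 'gzip'
//   anything else   -> 'zstd'  (e.g. 'zstd' itself)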

View File

@@ -0,0 +1,67 @@
'use strict'
const assert = require('node:assert')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { mapValues } = require('lodash')
const { Task } = require('../../Task')
const { AbstractRemote } = require('./_AbstractRemote')
const { IncrementalRemoteWriter } = require('../_writers/IncrementalRemoteWriter')
const { forkDeltaExport } = require('./_forkDeltaExport')
const isVhdDifferencingDisk = require('vhd-lib/isVhdDifferencingDisk')
const { asyncEach } = require('@vates/async-each')
class IncrementalRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return IncrementalRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'delta')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
assert.strictEqual(metadata.mode, 'delta')
await this._callWriters(writer => writer.prepare({ isBase: metadata.isBase }), 'writer.prepare()')
const incrementalExport = await this._sourceRemoteAdapter.readIncrementalVmBackup(metadata, undefined, {
useChain: false,
})
const differentialVhds = {}
await asyncEach(Object.entries(incrementalExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
incrementalExport.streams = mapValues(incrementalExport.streams, this._throttleStream)
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(incrementalExport),
differentialVhds,
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
}),
'writer.transfer()'
)
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
exports.IncrementalRemote = IncrementalRemoteVmBackupRunner
decorateMethodsWith(IncrementalRemoteVmBackupRunner, {
_run: defer,
})

View File

@@ -0,0 +1,175 @@
'use strict'
const findLast = require('lodash/findLast.js')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const vhdStreamValidator = require('vhd-lib/vhdStreamValidator.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { pipeline } = require('node:stream')
const { IncrementalRemoteWriter } = require('../_writers/IncrementalRemoteWriter.js')
const { IncrementalXapiWriter } = require('../_writers/IncrementalXapiWriter.js')
const { exportIncrementalVm } = require('../../_incrementalVm.js')
const { Task } = require('../../Task.js')
const { watchStreamSize } = require('../../_watchStreamSize.js')
const { AbstractXapi } = require('./_AbstractXapi.js')
const { forkDeltaExport } = require('./_forkDeltaExport.js')
const isVhdDifferencingDisk = require('vhd-lib/isVhdDifferencingDisk')
const { asyncEach } = require('@vates/async-each')
const { debug } = createLogger('xo:backups:IncrementalXapiVmBackup')
const noop = Function.prototype
exports.IncrementalXapi = class IncrementalXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [IncrementalRemoteWriter, IncrementalXapiWriter]
}
_mustDoSnapshot() {
return true
}
async _copy() {
const baseVm = this._baseVm
const vm = this._vm
const exportedVm = this._exportedVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportIncrementalVm(exportedVm, baseVm, {
fullVdisRequired,
})
// since NBD is network based, if one disk uses NBD, all the disks use it,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const differentialVhds = {}
// since isVhdDifferencingDisk reads and unshifts data from the stream,
// it must be done BEFORE any other stream transform
await asyncEach(Object.entries(deltaExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
differentialVhds,
sizeContainers,
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval was reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this._vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
}
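
The ordering constraint in _copy() is worth restating: isVhdDifferencingDisk consumes the first bytes of each stream and unshifts them back, which only works on an untouched stream, so the probe must run before throttling, validation, or any other transform. A sketch of the safe pattern, reusing the imports above:

const probeThenThrottle = async (stream, throttle) => {
  const differencing = await isVhdDifferencingDisk(stream) // reads, then unshifts the bytes back
  return { differencing, stream: throttle(stream) }        // now safe to wrap or transform
}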

View File

@@ -0,0 +1,95 @@
'use strict'
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { Task } = require('../../Task.js')
const { debug, warn } = createLogger('xo:backups:AbstractVmRunner')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
exports.Abstract = class AbstractVmBackupRunner {
// calls fn for each writer, warns of any errors, and throws only if there are no writers left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own sub-tasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// skip the health check when tags are configured and the VM matches none of them
const tags = this._tags
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
// create a task to have an info in the logs and reports
return Task.run(
{
name: 'health check',
},
() => {
Task.info(`This VM doesn't match the health check's tags for this schedule`)
}
)
}
await this._callWriters(writer => writer.healthCheck(), 'writer.healthCheck()')
}
}
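
The failure policy of _callWriters in plain language: a writer that throws is dropped for the remainder of the job and its error collected; the step itself only rejects, with an AggregateError wrapping every collected error, once no writer is left. For example:

// 3 writers, step 'writer.transfer()':
//   writer A throws   -> A removed, error kept, step keeps going
//   writer B throws   -> B removed, error kept, step keeps going
//   writer C succeeds -> step resolves; the job continues with C only
// had C also thrown, the step would reject with
// AggregateError([errA, errB, errC], 'all targets have failed, step: writer.transfer()')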

View File

@@ -0,0 +1,97 @@
'use strict'
const { Abstract } = require('./_Abstract')
const { getVmBackupDir } = require('../../_getVmBackupDir')
const { asyncEach } = require('@vates/async-each')
const { Disposable } = require('promise-toolbox')
exports.AbstractRemote = class AbstractRemoteVmBackupRunner extends Abstract {
constructor({
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings,
sourceRemoteAdapter,
throttleStream,
vmUuid,
}) {
super()
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
this._healthCheckSr = healthCheckSr
this._sourceRemoteAdapter = sourceRemoteAdapter
this._throttleStream = throttleStream
this._vmUuid = vmUuid
const allSettings = job.settings
const writers = new Set()
this._writers = writers
const RemoteWriter = this._getRemoteWriter()
Object.entries(remoteAdapters).forEach(([remoteId, adapter]) => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
writers.add(
new RemoteWriter({
adapter,
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid,
remoteId,
settings: targetSettings,
})
)
})
}
async _computeTransferList(predicate) {
const vmBackups = await this._sourceRemoteAdapter.listVmBackups(this._vmUuid, predicate)
const localMetadata = new Map()
Object.values(vmBackups).forEach(metadata => {
const timestamp = metadata.timestamp
localMetadata.set(timestamp, metadata)
})
const nbRemotes = Object.keys(this.remoteAdapters).length
const remoteMetadatas = {}
await asyncEach(Object.values(this.remoteAdapters), async remoteAdapter => {
const remoteMetadata = await remoteAdapter.listVmBackups(this._vmUuid, predicate)
remoteMetadata.forEach(metadata => {
const timestamp = metadata.timestamp
remoteMetadatas[timestamp] = (remoteMetadatas[timestamp] ?? 0) + 1
})
})
let chain = []
const timestamps = [...localMetadata.keys()]
timestamps.sort()
for (const timestamp of timestamps) {
if (remoteMetadatas[timestamp] !== nbRemotes) {
// this backup is not present on all the remotes;
// it should be retransferred if not found later
chain.push(localMetadata.get(timestamp))
} else {
// the backup is present locally and on every remote: the chain has already been transferred
chain = []
}
}
return chain
}
async run() {
const handler = this._sourceRemoteAdapter._handler
await Disposable.use(await handler.lock(getVmBackupDir(this._vmUuid)), async () => {
await this._run()
await this._healthCheck()
})
}
}
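
_computeTransferList deserves a worked example. With hypothetical timestamps t1 < t2 < t3 on the source and two destination remotes:

// t1 counted on 2/2 remotes -> chain reset to []  (fully mirrored prefix)
// t2 counted on 1/2 remotes -> chain = [t2]
// t3 counted on 0/2 remotes -> chain = [t2, t3]
// result: [t2, t3] are (re)transferred; an incremental chain is only usable
// from a fully mirrored base, so any gap forces the tail to be sent again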

View File

@@ -0,0 +1,278 @@
'use strict'
const assert = require('assert')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap } = require('@xen-orchestra/async-map')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { Abstract } = require('./_Abstract.js')
class AbstractXapiVmBackupRunner extends Abstract {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
super()
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job otherwise they
// will be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
const tags = (this._tags = vm.tags)
// VM (snapshot) that is really exported
this._exportedVm = undefined
this._vm = vm
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isIncremental = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._getWriters()
const allSettings = job.settings
Object.entries(remoteAdapters).forEach(([remoteId, adapter]) => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(
new BackupWriter({
adapter,
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid: vm.uuid,
remoteId,
settings: targetSettings,
})
)
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(
new ReplicationWriter({
config,
healthCheckSr,
job,
scheduleId: schedule.id,
vmUuid: vm.uuid,
sr,
settings: targetSettings,
})
)
}
})
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied onto manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const vm = this._vm
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const vm = this._vm
const xapi = this._xapi
const settings = this._settings
if (this._mustDoSnapshot()) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this._exportedVm = await xapi.getRecord('VM', snapshotRef)
return this._exportedVm.uuid
})
} else {
this._exportedVm = vm
this.timestamp = Date.now()
}
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this._vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this._vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async copy() {
throw new Error('Not implemented')
}
_getWriters() {
throw new Error('Not implemented')
}
_mustDoSnapshot() {
throw new Error('Not implemented')
}
async _selectBaseVm() {
throw new Error('Not implemented')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
await this._selectBaseVm()
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const vm = this._vm
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
await this._copy()
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
exports.AbstractXapi = AbstractXapiVmBackupRunner
decorateMethodsWith(AbstractXapiVmBackupRunner, {
run: defer,
})
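`run` is wrapped with `defer` from golike-defer, so it receives a `$defer` callback as its first argument (used above to schedule `writer.afterBackup()`). A minimal sketch of that pattern, assuming only the basic golike-defer API:

'use strict'
const { defer } = require('golike-defer')

const work = defer(async function ($defer, name) {
  console.log('acquire', name)
  // deferred callbacks run once the function settles, in reverse
  // registration order, whether it resolved or threw
  $defer(() => console.log('release', name))
  console.log('working on', name)
})

// work('example') logs: acquire, working on, release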

View File

@@ -0,0 +1,12 @@
'use strict'
const { mapValues } = require('lodash')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe')
exports.forkDeltaExport = function forkDeltaExport(deltaExport) {
return Object.create(deltaExport, {
streams: {
value: mapValues(deltaExport.streams, forkStreamUnpipe),
},
})
}
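A note on the `Object.create` trick above: the fork inherits every property of the original export through its prototype, while the own `streams` descriptor shadows the original streams with freshly forked ones, so several writers can consume the export in parallel. A self-contained sketch with dummy values:

const base = { vdis: { vdi1: {} }, streams: { vdi1: 'original stream' } }
const fork = Object.create(base, {
  streams: { value: { vdi1: 'forked stream' } },
})
console.log(fork.vdis === base.vdis) // true: metadata is shared
console.log(fork.streams.vdi1) // 'forked stream': streams are independent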

View File

@@ -1,13 +1,13 @@
'use strict'
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { MixinRemoteWriter } = require('./_MixinRemoteWriter.js')
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(AbstractFullWriter) {
exports.FullRemoteWriter = class FullRemoteWriter extends MixinRemoteWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -26,15 +26,17 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
)
}
async _run({ timestamp, sizeContainer, stream }) {
const backup = this._backup
async _run({ timestamp, sizeContainer, stream, vm, vmSnapshot }) {
const settings = this._settings
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const adapter = this._adapter
// TODO: clean VM backup directory
let metadata = await this._isAlreadyTransferred(timestamp)
if (metadata !== undefined) {
// TODO: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const oldBackups = getOldEntries(
settings.exportRetention - 1,
@@ -47,14 +49,14 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
const dataBasename = basename + '.xva'
const dataFilename = this._vmBackupDir + '/' + dataBasename
const metadata = {
metadata = {
jobId: job.id,
mode: job.mode,
scheduleId,
timestamp,
version: '2.0.0',
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
xva: './' + dataBasename,
}

View File

@@ -4,15 +4,15 @@ const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
const { MixinReplicationWriter } = require('./_MixinReplicationWriter.js')
const { MixinXapiWriter } = require('./_MixinXapiWriter.js')
const { listReplicatedVms } = require('./_listReplicatedVms.js')
exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplicationWriter(AbstractFullWriter) {
exports.FullXapiWriter = class FullXapiWriter extends MixinXapiWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -32,10 +32,11 @@ exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplica
)
}
async _run({ timestamp, sizeContainer, stream }) {
async _run({ timestamp, sizeContainer, stream, vm }) {
const sr = this._sr
const settings = this._settings
const { job, scheduleId, vm } = this._backup
const job = this._job
const scheduleId = this._scheduleId
const { uuid: srUuid, $xapi: xapi } = sr

View File

@@ -11,25 +11,24 @@ const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { dirname } = require('path')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { MixinRemoteWriter } = require('./_MixinRemoteWriter.js')
const { AbstractIncrementalWriter } = require('./_AbstractIncrementalWriter.js')
const { checkVhd } = require('./_checkVhd.js')
const { packUuid } = require('./_packUuid.js')
const { Disposable } = require('promise-toolbox')
const { warn } = createLogger('xo:backups:DeltaBackupWriter')
class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi) {
const { handler } = this._adapter
const backup = this._backup
const adapter = this._adapter
const vdisDir = `${this._vmBackupDir}/vdis/${backup.job.id}`
const vdisDir = `${this._vmBackupDir}/vdis/${this._job.id}`
await asyncMap(baseUuidToSrcVdi, async ([baseUuid, srcVdi]) => {
let found = false
@@ -91,11 +90,12 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
async _prepare() {
const adapter = this._adapter
const settings = this._settings
const { scheduleId, vm } = this._backup
const scheduleId = this._scheduleId
const vmUuid = this._vmUuid
const oldEntries = getOldEntries(
settings.exportRetention - 1,
await adapter.listVmBackups(vm.uuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
await adapter.listVmBackups(vmUuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
)
this._oldEntries = oldEntries
@@ -134,16 +134,19 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}
}
async _transfer($defer, { timestamp, deltaExport }) {
async _transfer($defer, { differentialVhds, timestamp, deltaExport, vm, vmSnapshot }) {
const adapter = this._adapter
const backup = this._backup
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const jobId = job.id
const handler = adapter.handler
// TODO: clean VM backup directory
let metadataContent = await this._isAlreadyTransferred(timestamp)
if (metadataContent !== undefined) {
// TODO: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const basename = formatFilenameDate(timestamp)
const vhds = mapValues(
@@ -158,7 +161,7 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}/${adapter.getVhdFileName(basename)}`
)
const metadataContent = {
metadataContent = {
jobId,
mode: job.mode,
scheduleId,
@@ -169,16 +172,15 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
vifs: deltaExport.vifs,
vhds,
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
}
const { size } = await Task.run({ name: 'transfer' }, async () => {
let transferSize = 0
await Promise.all(
map(deltaExport.vdis, async (vdi, id) => {
const path = `${this._vmBackupDir}/${vhds[id]}`
const isDelta = vdi.other_config['xo:base_delta'] !== undefined
const isDelta = differentialVhds[`${id}.vhd`]
let parentPath
if (isDelta) {
const vdiDir = dirname(path)
@@ -191,7 +193,11 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
.sort()
.pop()
assert.notStrictEqual(parentPath, undefined, `missing parent of ${id}`)
assert.notStrictEqual(
parentPath,
undefined,
`missing parent of ${id} in ${dirname(path)}, looking for ${vdi.other_config['xo:base_delta']}`
)
parentPath = parentPath.slice(1) // remove leading slash
@@ -204,7 +210,8 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// merges and chainings
checksum: false,
validator: tmpPath => checkVhd(handler, tmpPath),
writeBlockConcurrency: this._backup.config.writeBlockConcurrency,
writeBlockConcurrency: this._config.writeBlockConcurrency,
isDelta,
})
if (isDelta) {
@@ -227,6 +234,6 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// TODO: run cleanup?
}
}
exports.DeltaBackupWriter = decorateClass(DeltaBackupWriter, {
exports.IncrementalRemoteWriter = decorateClass(IncrementalRemoteWriter, {
_transfer: defer,
})

View File

@@ -4,19 +4,19 @@ const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { importDeltaVm, TAG_COPY_SRC } = require('../_deltaVm.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { importIncrementalVm, TAG_COPY_SRC } = require('../../_incrementalVm.js')
const { Task } = require('../../Task.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { MixinReplicationWriter } = require('./_MixinReplicationWriter.js')
const { AbstractIncrementalWriter } = require('./_AbstractIncrementalWriter.js')
const { MixinXapiWriter } = require('./_MixinXapiWriter.js')
const { listReplicatedVms } = require('./_listReplicatedVms.js')
exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinReplicationWriter(AbstractDeltaWriter) {
exports.IncrementalXapiWriter = class IncrementalXapiWriter extends MixinXapiWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi, baseVm) {
const sr = this._sr
const replicatedVm = listReplicatedVms(sr.$xapi, this._backup.job.id, sr.uuid, this._backup.vm.uuid).find(
const replicatedVm = listReplicatedVms(sr.$xapi, this._job.id, sr.uuid, this._vmUuid).find(
vm => vm.other_config[TAG_COPY_SRC] === baseVm.uuid
)
if (replicatedVm === undefined) {
@@ -49,9 +49,10 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
type: 'SR',
},
})
const hasHealthCheckSr = this._healthCheckSr !== undefined
this.transfer = task.wrapFn(this.transfer)
this.healthCheck = task.wrapFn(this.healthCheck)
this.cleanup = task.wrapFn(this.cleanup, true)
this.cleanup = task.wrapFn(this.cleanup, !hasHealthCheckSr)
this.healthCheck = task.wrapFn(this.healthCheck, hasHealthCheckSr)
return task.run(() => this._prepare())
}
@@ -59,12 +60,13 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
async _prepare() {
const settings = this._settings
const { uuid: srUuid, $xapi: xapi } = this._sr
const { scheduleId, vm } = this._backup
const vmUuid = this._vmUuid
const scheduleId = this._scheduleId
// delete previous interrupted copies
ignoreErrors.call(asyncMapSettled(listReplicatedVms(xapi, scheduleId, undefined, vm.uuid), vm => vm.$destroy))
ignoreErrors.call(asyncMapSettled(listReplicatedVms(xapi, scheduleId, undefined, vmUuid), vm => vm.$destroy))
this._oldEntries = getOldEntries(settings.copyRetention - 1, listReplicatedVms(xapi, scheduleId, srUuid, vm.uuid))
this._oldEntries = getOldEntries(settings.copyRetention - 1, listReplicatedVms(xapi, scheduleId, srUuid, vmUuid))
if (settings.deleteFirst) {
await this._deleteOldEntries()
@@ -81,16 +83,17 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
return asyncMapSettled(this._oldEntries, vm => vm.$destroy())
}
async _transfer({ timestamp, deltaExport, sizeContainers }) {
async _transfer({ timestamp, deltaExport, sizeContainers, vm }) {
const { _warmMigration } = this._settings
const sr = this._sr
const { job, scheduleId, vm } = this._backup
const job = this._job
const scheduleId = this._scheduleId
const { uuid: srUuid, $xapi: xapi } = sr
let targetVmRef
await Task.run({ name: 'transfer' }, async () => {
targetVmRef = await importDeltaVm(
targetVmRef = await importIncrementalVm(
{
__proto__: deltaExport,
vm: {

View File

@@ -3,9 +3,9 @@
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractFullWriter = class AbstractFullWriter extends AbstractWriter {
async run({ timestamp, sizeContainer, stream }) {
async run({ timestamp, sizeContainer, stream, vm, vmSnapshot }) {
try {
return await this._run({ timestamp, sizeContainer, stream })
return await this._run({ timestamp, sizeContainer, stream, vm, vmSnapshot })
} finally {
// ensure stream is properly closed
stream.destroy()
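The try/finally above guarantees the XVA stream is destroyed even when `_run` throws, so a failed transfer cannot leave the export dangling. The same pattern in isolation (a generic sketch, not the actual class):

async function consumeSafely(stream, consume) {
  try {
    return await consume(stream)
  } finally {
    // always release the underlying resources, even on failure
    stream.destroy()
  }
}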

View File

@@ -2,7 +2,7 @@
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
exports.AbstractIncrementalWriter = class AbstractIncrementalWriter extends AbstractWriter {
checkBaseVdis(baseUuidToSrcVdi, baseVm) {
throw new Error('Not implemented')
}
@@ -15,9 +15,9 @@ exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
throw new Error('Not implemented')
}
async transfer({ timestamp, deltaExport, sizeContainers }) {
async transfer({ deltaExport, ...other }) {
try {
return await this._transfer({ timestamp, deltaExport, sizeContainers })
return await this._transfer({ deltaExport, ...other })
} finally {
// ensure all streams are properly closed
for (const stream of Object.values(deltaExport.streams)) {

View File

@@ -0,0 +1,31 @@
'use strict'
const { formatFilenameDate } = require('../../_filenameDate')
const { getVmBackupDir } = require('../../_getVmBackupDir')
exports.AbstractWriter = class AbstractWriter {
constructor({ config, healthCheckSr, job, vmUuid, scheduleId, settings }) {
this._config = config
this._healthCheckSr = healthCheckSr
this._job = job
this._scheduleId = scheduleId
this._settings = settings
this._vmUuid = vmUuid
}
beforeBackup() {}
afterBackup() {}
healthCheck(sr) {}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
const actualMetadata = JSON.parse(await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`))
return actualMetadata
} catch (error) {
// missing or unreadable metadata file: this backup has not been transferred yet
}
}
}

View File

@@ -4,26 +4,26 @@ const { createLogger } = require('@xen-orchestra/log')
const { join } = require('path')
const assert = require('assert')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../ImportVmBackup.js')
const { Task } = require('../Task.js')
const MergeWorker = require('../merge-worker/index.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getVmBackupDir } = require('../../_getVmBackupDir.js')
const { HealthCheckVmBackup } = require('../../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../../ImportVmBackup.js')
const { Task } = require('../../Task.js')
const MergeWorker = require('../../merge-worker/index.js')
const { info, warn } = createLogger('xo:backups:MixinBackupWriter')
exports.MixinBackupWriter = (BaseClass = Object) =>
class MixinBackupWriter extends BaseClass {
exports.MixinRemoteWriter = (BaseClass = Object) =>
class MixinRemoteWriter extends BaseClass {
#lock
constructor({ remoteId, ...rest }) {
constructor({ remoteId, adapter, ...rest }) {
super(rest)
this._adapter = rest.backup.remoteAdapters[remoteId]
this._adapter = adapter
this._remoteId = remoteId
this._vmBackupDir = getVmBackupDir(this._backup.vm.uuid)
this._vmBackupDir = getVmBackupDir(rest.vmUuid)
}
async _cleanVm(options) {
@@ -38,7 +38,7 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
Task.warning(message, data)
},
lock: false,
mergeBlockConcurrency: this._backup.config.mergeBlockConcurrency,
mergeBlockConcurrency: this._config.mergeBlockConcurrency,
})
})
} catch (error) {
@@ -55,10 +55,10 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
async afterBackup() {
const { disableMergeWorker } = this._backup.config
const { disableMergeWorker } = this._config
// merge worker only compatible with local remotes
const { handler } = this._adapter
const willMergeInWorker = !disableMergeWorker && typeof handler._getRealPath === 'function'
const willMergeInWorker = !disableMergeWorker && typeof handler.getRealPath === 'function'
const { merge } = await this._cleanVm({ remove: true, merge: !willMergeInWorker })
await this.#lock.dispose()
@@ -70,13 +70,15 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
// add a random suffix to avoid collision in case multiple tasks are created at the same second
Math.random().toString(36).slice(2)
await handler.outputFile(taskFile, this._backup.vm.uuid)
const remotePath = handler._getRealPath()
await handler.outputFile(taskFile, this._vmUuid)
const remotePath = handler.getRealPath()
await MergeWorker.run(remotePath)
}
}
healthCheck(sr) {
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notStrictEqual(
this._metadataFileName,
undefined,
@@ -109,4 +111,16 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
)
}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
const actualMetadata = JSON.parse(
await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`)
)
return actualMetadata
} catch (error) {
// missing or unreadable metadata file: this backup has not been transferred yet
}
}
}

View File

@@ -1,26 +1,22 @@
'use strict'
const { Task } = require('../Task')
const assert = require('node:assert/strict')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup')
const { extractOpaqueRef } = require('@xen-orchestra/xapi')
function extractOpaqueRef(str) {
const OPAQUE_REF_RE = /OpaqueRef:[0-9a-z-]+/
const matches = OPAQUE_REF_RE.exec(str)
if (!matches) {
throw new Error('no opaque ref found')
}
return matches[0]
}
exports.MixinReplicationWriter = (BaseClass = Object) =>
class MixinReplicationWriter extends BaseClass {
const { Task } = require('../../Task')
const assert = require('node:assert/strict')
const { HealthCheckVmBackup } = require('../../HealthCheckVmBackup')
exports.MixinXapiWriter = (BaseClass = Object) =>
class MixinXapiWriter extends BaseClass {
constructor({ sr, ...rest }) {
super(rest)
this._sr = sr
}
healthCheck(sr) {
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notEqual(this._targetVmRef, undefined, 'A VM should have been transferred to be health checked')
// copy VM
return Task.run(

View File

@@ -228,7 +228,7 @@ Settings are described in [`@xen-orchestra/backups/Backup.js](https://github.com
- `prepare({ isFull })`
- `transfer({ timestamp, deltaExport, sizeContainers })`
- `cleanup()`
- `healthCheck(sr)`
- `healthCheck()` // not executed if there is no health check SR or if the VM's tags don't match
- **Full**
- `run({ timestamp, sizeContainer, stream })`
- `afterBackup()`
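As a rough sketch of the lifecycle this list describes (method names taken from this document; the bodies are placeholders, not the real implementations):

class SketchIncrementalWriter {
  async beforeBackup() { /* acquire locks, prepare the target */ }
  async checkBaseVdis(baseUuidToSrcVdi) { /* confirm which deltas can stay incremental */ }
  async prepare({ isFull }) { /* list old entries, delete first if configured */ }
  async transfer({ timestamp, deltaExport, sizeContainers }) { /* write data and metadata */ }
  async cleanup() { /* merge or remove old entries once the transfer succeeded */ }
  async healthCheck() { /* skipped when no health check SR is configured or tags don't match */ }
  async afterBackup() { /* release locks, possibly delegate merging to the worker */ }
}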

View File

@@ -8,13 +8,13 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.36.0",
"version": "0.38.2",
"engines": {
"node": ">=14.6"
},
"scripts": {
"postversion": "npm publish --access public",
"test": "node--test"
"test-integration": "node--test *.integ.js"
},
"dependencies": {
"@kldzj/stream-throttle": "^1.1.1",
@@ -27,7 +27,7 @@
"@vates/nbd-client": "^1.2.0",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^3.3.4",
"@xen-orchestra/fs": "^4.0.0",
"@xen-orchestra/log": "^0.6.0",
"@xen-orchestra/template": "^0.1.0",
"compare-versions": "^5.0.1",
@@ -42,17 +42,17 @@
"promise-toolbox": "^0.21.0",
"proper-lockfile": "^4.1.2",
"uuid": "^9.0.0",
"vhd-lib": "^4.4.0",
"vhd-lib": "^4.5.0",
"yazl": "^2.5.1"
},
"devDependencies": {
"rimraf": "^4.1.1",
"rimraf": "^5.0.1",
"sinon": "^15.0.1",
"test": "^3.2.1",
"tmp": "^0.2.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^2.2.0"
"@xen-orchestra/xapi": "^2.2.1"
},
"license": "AGPL-3.0-or-later",
"author": {

View File

@@ -1,14 +0,0 @@
'use strict'
exports.AbstractWriter = class AbstractWriter {
constructor({ backup, settings }) {
this._backup = backup
this._settings = settings
}
beforeBackup() {}
afterBackup() {}
healthCheck(sr) {}
}

View File

@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^1.3.0"
"xen-api": "^1.3.1"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "3.3.4",
"version": "4.0.0",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",
@@ -53,7 +53,9 @@
"@babel/preset-env": "^7.8.0",
"cross-env": "^7.0.2",
"dotenv": "^16.0.0",
"rimraf": "^4.1.1",
"rimraf": "^5.0.1",
"sinon": "^15.0.4",
"test": "^3.3.0",
"tmp": "^0.2.1"
},
"scripts": {
@@ -63,7 +65,9 @@
"prebuild": "yarn run clean",
"predev": "yarn run clean",
"prepublishOnly": "yarn run build",
"postversion": "npm publish"
"pretest": "yarn run build",
"postversion": "npm publish",
"test": "node--test ./dist/"
},
"author": {
"name": "Vates SAS",

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'readable-stream'
import copyStreamToBuffer from './_copyStreamToBuffer.js'
@@ -16,6 +17,6 @@ describe('copyStreamToBuffer', () => {
await copyStreamToBuffer(stream, buffer)
expect(buffer.toString()).toBe('hel')
assert.equal(buffer.toString(), 'hel')
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'readable-stream'
import createBufferFromStream from './_createBufferFromStream.js'
@@ -14,6 +15,6 @@ describe('createBufferFromStream', () => {
const buffer = await createBufferFromStream(stream)
expect(buffer.toString()).toBe('hello')
assert.equal(buffer.toString(), 'hello')
})
})

View File

@@ -1,4 +1,6 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'node:stream'
import { _getEncryptor } from './_encryptor'
import crypto from 'crypto'
@@ -25,13 +27,13 @@ algorithms.forEach(algorithm => {
it('handle buffer', () => {
const encrypted = encryptor.encryptData(buffer)
if (algorithm !== 'none') {
expect(encrypted.equals(buffer)).toEqual(false) // encrypted should be different
assert.equal(encrypted.equals(buffer), false) // encrypted should be different
// IV length, auth tag, padding
expect(encrypted.length).not.toEqual(buffer.length)
assert.notEqual(encrypted.length, buffer.length)
}
const decrypted = encryptor.decryptData(encrypted)
expect(decrypted.equals(buffer)).toEqual(true)
assert.equal(decrypted.equals(buffer), true)
})
it('handle stream', async () => {
@@ -39,12 +41,12 @@ algorithms.forEach(algorithm => {
stream.length = buffer.length
const encrypted = encryptor.encryptStream(stream)
if (algorithm !== 'none') {
expect(encrypted.length).toEqual(undefined)
assert.equal(encrypted.length, undefined)
}
const decrypted = encryptor.decryptStream(encrypted)
const decryptedBuffer = await streamToBuffer(decrypted)
expect(decryptedBuffer.equals(buffer)).toEqual(true)
assert.equal(decryptedBuffer.equals(buffer), true)
})
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import guessAwsRegion from './_guessAwsRegion.js'
@@ -6,12 +7,12 @@ describe('guessAwsRegion', () => {
it('should return region from AWS URL', async () => {
const region = guessAwsRegion('s3.test-region.amazonaws.com')
expect(region).toBe('test-region')
assert.equal(region, 'test-region')
})
it('should return default region if none is found is AWS URL', async () => {
const region = guessAwsRegion('s3.amazonaws.com')
expect(region).toBe('us-east-1')
assert.equal(region, 'us-east-1')
})
})

View File

@@ -9,28 +9,32 @@ import LocalHandler from './local'
const sudoExeca = (command, args, opts) => execa('sudo', [command, ...args], opts)
export default class MountHandler extends LocalHandler {
#execa
#keeper
#params
#realPath
constructor(remote, { mountsDir = join(tmpdir(), 'xo-fs-mounts'), useSudo = false, ...opts } = {}, params) {
super(remote, opts)
this._execa = useSudo ? sudoExeca : execa
this._keeper = undefined
this._params = {
this.#execa = useSudo ? sudoExeca : execa
this.#params = {
...params,
options: [params.options, remote.options ?? params.defaultOptions].filter(_ => _ !== undefined).join(','),
}
this._realPath = join(mountsDir, remote.id || Math.random().toString(36).slice(2))
this.#realPath = join(mountsDir, remote.id || Math.random().toString(36).slice(2))
}
async _forget() {
const keeper = this._keeper
const keeper = this.#keeper
if (keeper === undefined) {
return
}
this._keeper = undefined
this.#keeper = undefined
await fs.close(keeper)
await ignoreErrors.call(
this._execa('umount', [this._getRealPath()], {
this.#execa('umount', [this.getRealPath()], {
env: {
LANG: 'C',
},
@@ -38,30 +42,30 @@ export default class MountHandler extends LocalHandler {
)
}
_getRealPath() {
return this._realPath
getRealPath() {
return this.#realPath
}
async _sync() {
// in case of multiple `sync`s, ensure we properly close previous keeper
{
const keeper = this._keeper
const keeper = this.#keeper
if (keeper !== undefined) {
this._keeper = undefined
this.#keeper = undefined
ignoreErrors.call(fs.close(keeper))
}
}
const realPath = this._getRealPath()
const realPath = this.getRealPath()
await fs.ensureDir(realPath)
try {
const { type, device, options, env } = this._params
const { type, device, options, env } = this.#params
// Linux mount is flexible about the order of the mount arguments,
// but FreeBSD requires this exact order.
await this._execa('mount', ['-o', options, '-t', type, device, realPath], {
await this.#execa('mount', ['-o', options, '-t', type, device, realPath], {
env: {
LANG: 'C',
...env,
@@ -71,7 +75,7 @@ export default class MountHandler extends LocalHandler {
try {
// the failure may mean it's already mounted, use `findmnt` to check
// that's the case
await this._execa('findmnt', [realPath], {
await this.#execa('findmnt', [realPath], {
stdio: 'ignore',
})
} catch (_) {
@@ -82,7 +86,7 @@ export default class MountHandler extends LocalHandler {
// keep an open file on the mount to prevent it from being unmounted if used
// by another handler/process
const keeperPath = `${realPath}/.keeper_${Math.random().toString(36).slice(2)}`
this._keeper = await fs.open(keeperPath, 'w')
this.#keeper = await fs.open(keeperPath, 'w')
ignoreErrors.call(fs.unlink(keeperPath))
}
}
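The "keeper" logic above relies on a classic POSIX trick: holding an open file descriptor inside the mount point keeps the filesystem busy, so another process cannot cleanly unmount it, and unlinking the file right away leaves no visible artifact. In isolation (a sketch assuming fs-extra's promisified API):

const fs = require('fs-extra')

async function holdMount(realPath) {
  const keeperPath = `${realPath}/.keeper_${Math.random().toString(36).slice(2)}`
  const keeper = await fs.open(keeperPath, 'w')
  await fs.unlink(keeperPath) // the path disappears but the fd keeps the mount busy
  return () => fs.close(keeper) // call this to allow unmounting again
}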

View File

@@ -37,8 +37,13 @@ const ignoreEnoent = error => {
const noop = Function.prototype
class PrefixWrapper {
#prefix
constructor(handler, prefix) {
this._prefix = prefix
this.#prefix = prefix
// cannot be a private field because it is used by methods dynamically added
// outside of the class
this._handler = handler
}
@@ -50,7 +55,7 @@ class PrefixWrapper {
async list(dir, opts) {
const entries = await this._handler.list(this._resolve(dir), opts)
if (opts != null && opts.prependDir) {
const n = this._prefix.length
const n = this.#prefix.length
entries.forEach((entry, i, entries) => {
entries[i] = entry.slice(n)
})
@@ -62,19 +67,21 @@ class PrefixWrapper {
return this._handler.rename(this._resolve(oldPath), this._resolve(newPath))
}
// cannot be a private method because it is used by methods dynamically added
// outside of the class
_resolve(path) {
return this._prefix + normalizePath(path)
return this.#prefix + normalizePath(path)
}
}
export default class RemoteHandlerAbstract {
#encryptor
#rawEncryptor
get _encryptor() {
if (this.#encryptor === undefined) {
get #encryptor() {
if (this.#rawEncryptor === undefined) {
throw new Error(`Can't access the encryptor before remote synchronization`)
}
return this.#encryptor
return this.#rawEncryptor
}
constructor(remote, options = {}) {
@@ -111,6 +118,10 @@ export default class RemoteHandlerAbstract {
}
// Public members
//
// Should not be called directly because:
// - some concurrency limits may be applied which may lead to deadlocks
// - some preprocessing may be applied on parameters that should not be done multiple times (e.g. prefixing paths)
get type() {
throw new Error('Not implemented')
@@ -121,10 +132,6 @@ export default class RemoteHandlerAbstract {
return prefix === '/' ? this : new PrefixWrapper(this, prefix)
}
async closeFile(fd) {
await this.__closeFile(fd)
}
async createReadStream(file, { checksum = false, ignoreMissingChecksum = false, ...options } = {}) {
if (options.end !== undefined || options.start !== undefined) {
assert.strictEqual(this.isEncrypted, false, `Can't read part of a file when encryption is active ${file}`)
@@ -157,7 +164,7 @@ export default class RemoteHandlerAbstract {
}
if (this.isEncrypted) {
stream = this._encryptor.decryptStream(stream)
stream = this.#encryptor.decryptStream(stream)
} else {
// try to add the length prop if missing and not a range stream
if (stream.length === undefined && options.end === undefined && options.start === undefined) {
@@ -186,7 +193,7 @@ export default class RemoteHandlerAbstract {
path = normalizePath(path)
let checksumStream
input = this._encryptor.encryptStream(input)
input = this.#encryptor.encryptStream(input)
if (checksum) {
checksumStream = createChecksumStream()
pipeline(input, checksumStream, noop)
@@ -224,10 +231,10 @@ export default class RemoteHandlerAbstract {
assert.strictEqual(this.isEncrypted, false, `Can't compute size of an encrypted file ${file}`)
const size = await timeout.call(this._getSize(typeof file === 'string' ? normalizePath(file) : file), this._timeout)
return size - this._encryptor.ivLength
return size - this.#encryptor.ivLength
}
async list(dir, { filter, ignoreMissing = false, prependDir = false } = {}) {
async __list(dir, { filter, ignoreMissing = false, prependDir = false } = {}) {
try {
const virtualDir = normalizePath(dir)
dir = normalizePath(dir)
@@ -257,20 +264,12 @@ export default class RemoteHandlerAbstract {
return { dispose: await this._lock(path) }
}
async mkdir(dir, { mode } = {}) {
await this.__mkdir(normalizePath(dir), { mode })
}
async mktree(dir, { mode } = {}) {
await this._mktree(normalizePath(dir), { mode })
}
openFile(path, flags) {
return this.__openFile(path, flags)
}
async outputFile(file, data, { dirMode, flags = 'wx' } = {}) {
const encryptedData = this._encryptor.encryptData(data)
const encryptedData = this.#encryptor.encryptData(data)
await this._outputFile(normalizePath(file), encryptedData, { dirMode, flags })
}
@@ -279,9 +278,9 @@ export default class RemoteHandlerAbstract {
return this._read(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
}
async readFile(file, { flags = 'r' } = {}) {
async __readFile(file, { flags = 'r' } = {}) {
const data = await this._readFile(normalizePath(file), { flags })
return this._encryptor.decryptData(data)
return this.#encryptor.decryptData(data)
}
async #rename(oldPath, newPath, { checksum }, createTree = true) {
@@ -301,11 +300,11 @@ export default class RemoteHandlerAbstract {
}
}
rename(oldPath, newPath, { checksum = false } = {}) {
__rename(oldPath, newPath, { checksum = false } = {}) {
return this.#rename(normalizePath(oldPath), normalizePath(newPath), { checksum })
}
async copy(oldPath, newPath, { checksum = false } = {}) {
async __copy(oldPath, newPath, { checksum = false } = {}) {
oldPath = normalizePath(oldPath)
newPath = normalizePath(newPath)
@@ -332,33 +331,33 @@ export default class RemoteHandlerAbstract {
async sync() {
await this._sync()
try {
await this._checkMetadata()
await this.#checkMetadata()
} catch (error) {
await this._forget()
throw error
}
}
async _canWriteMetadata() {
const list = await this.list('/', {
async #canWriteMetadata() {
const list = await this.__list('/', {
filter: e => !e.startsWith('.') && e !== ENCRYPTION_DESC_FILENAME && e !== ENCRYPTION_METADATA_FILENAME,
})
return list.length === 0
}
async _createMetadata() {
async #createMetadata() {
const encryptionAlgorithm = this._remote.encryptionKey === undefined ? 'none' : DEFAULT_ENCRYPTION_ALGORITHM
this.#encryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
this.#rawEncryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
await Promise.all([
this._writeFile(normalizePath(ENCRYPTION_DESC_FILENAME), JSON.stringify({ algorithm: encryptionAlgorithm }), {
flags: 'w',
}), // not encrypted
this.writeFile(ENCRYPTION_METADATA_FILENAME, `{"random":"${randomUUID()}"}`, { flags: 'w' }), // encrypted
this.__writeFile(ENCRYPTION_METADATA_FILENAME, `{"random":"${randomUUID()}"}`, { flags: 'w' }), // encrypted
])
}
async _checkMetadata() {
async #checkMetadata() {
let encryptionAlgorithm = 'none'
let data
try {
@@ -374,18 +373,18 @@ export default class RemoteHandlerAbstract {
}
try {
this.#encryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
this.#rawEncryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
// this file is encrypted
const data = await this.readFile(ENCRYPTION_METADATA_FILENAME, 'utf-8')
const data = await this.__readFile(ENCRYPTION_METADATA_FILENAME, 'utf-8')
JSON.parse(data)
} catch (error) {
// can be ENOENT, a bad algorithm, or broken JSON (bad key or algorithm)
if (encryptionAlgorithm !== 'none') {
if (await this._canWriteMetadata()) {
if (await this.#canWriteMetadata()) {
// any other error, but the remote is empty => update the metadata with this remote's settings
info('will update metadata of this remote')
return this._createMetadata()
return this.#createMetadata()
} else {
warn(
`The encryptionKey setting of this remote does not match the key used to create it. You won't be able to read any data from this remote`,
@@ -438,7 +437,7 @@ export default class RemoteHandlerAbstract {
await this._truncate(file, len)
}
async unlink(file, { checksum = true } = {}) {
async __unlink(file, { checksum = true } = {}) {
file = normalizePath(file)
if (checksum) {
@@ -453,8 +452,8 @@ export default class RemoteHandlerAbstract {
await this._write(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
}
async writeFile(file, data, { flags = 'wx' } = {}) {
const encryptedData = this._encryptor.encryptData(data)
async __writeFile(file, data, { flags = 'wx' } = {}) {
const encryptedData = this.#encryptor.encryptData(data)
await this._writeFile(normalizePath(file), encryptedData, { flags })
}
@@ -465,6 +464,8 @@ export default class RemoteHandlerAbstract {
}
async __mkdir(dir, { mode } = {}) {
dir = normalizePath(dir)
try {
await this._mkdir(dir, { mode })
} catch (error) {
@@ -586,9 +587,9 @@ export default class RemoteHandlerAbstract {
if (validator !== undefined) {
await validator.call(this, tmpPath)
}
await this.rename(tmpPath, path)
await this.__rename(tmpPath, path)
} catch (error) {
await this.unlink(tmpPath)
await this.__unlink(tmpPath)
throw error
}
}
@@ -665,7 +666,22 @@ export default class RemoteHandlerAbstract {
}
get isEncrypted() {
return this._encryptor.id !== 'NULL_ENCRYPTOR'
return this.#encryptor.id !== 'NULL_ENCRYPTOR'
}
}
// from implementation methods, whose names start with `__`, create public
// accessors on which external behaviors can be added (e.g. concurrency limits, path rewriting)
{
const proto = RemoteHandlerAbstract.prototype
for (const method of Object.getOwnPropertyNames(proto)) {
if (method.startsWith('__')) {
const publicName = method.slice(2)
assert(!Object.hasOwn(proto, publicName))
Object.defineProperty(proto, publicName, Object.getOwnPropertyDescriptor(proto, method))
}
}
}
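A standalone sketch of this `__` accessor pattern: implementation methods remain directly callable internally (no limits applied, no path prefixing done twice), while the generated public aliases are the ones later wrapped with external behaviors.

const assert = require('assert')

class Sketch {
  __readFile(path) {
    return `contents of ${path}`
  }
}

// copy every `__`-prefixed method to its public name
{
  const proto = Sketch.prototype
  for (const method of Object.getOwnPropertyNames(proto)) {
    if (method.startsWith('__')) {
      const publicName = method.slice(2)
      assert(!Object.hasOwn(proto, publicName))
      Object.defineProperty(proto, publicName, Object.getOwnPropertyDescriptor(proto, method))
    }
  }
}

// wrap only the public entry point, e.g. with a concurrency limiter;
// internal callers keep using __readFile and bypass the wrapper
const inner = Sketch.prototype.readFile
Sketch.prototype.readFile = function (...args) {
  return inner.apply(this, args)
}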

View File

@@ -1,11 +1,13 @@
/* eslint-env jest */
import { after, beforeEach, describe, it } from 'test'
import { strict as assert } from 'assert'
import sinon from 'sinon'
import { DEFAULT_ENCRYPTION_ALGORITHM, _getEncryptor } from './_encryptor'
import { Disposable, pFromCallback, TimeoutError } from 'promise-toolbox'
import { getSyncedHandler } from '.'
import { rimraf } from 'rimraf'
import AbstractHandler from './abstract'
import fs from 'fs-extra'
import rimraf from 'rimraf'
import tmp from 'tmp'
const TIMEOUT = 10e3
@@ -24,7 +26,7 @@ class TestHandler extends AbstractHandler {
const noop = Function.prototype
jest.useFakeTimers()
const clock = sinon.useFakeTimers()
describe('closeFile()', () => {
it(`throws in case of timeout`, async () => {
@@ -33,8 +35,8 @@ describe('closeFile()', () => {
})
const promise = testHandler.closeFile({ fd: undefined, path: '' })
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -45,8 +47,8 @@ describe('getInfo()', () => {
})
const promise = testHandler.getInfo()
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -57,8 +59,8 @@ describe('getSize()', () => {
})
const promise = testHandler.getSize('')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -69,8 +71,8 @@ describe('list()', () => {
})
const promise = testHandler.list('.')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -81,8 +83,8 @@ describe('openFile()', () => {
})
const promise = testHandler.openFile('path')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -93,8 +95,8 @@ describe('rename()', () => {
})
const promise = testHandler.rename('oldPath', 'newPath')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -105,8 +107,8 @@ describe('rmdir()', () => {
})
const promise = testHandler.rmdir('dir')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -115,14 +117,14 @@ describe('encryption', () => {
beforeEach(async () => {
dir = await pFromCallback(cb => tmp.dir(cb))
})
afterAll(async () => {
after(async () => {
await rimraf(dir)
})
it('sync should NOT create metadata if missing (not encrypted)', async () => {
await Disposable.use(getSyncedHandler({ url: `file://${dir}` }), noop)
expect(await fs.readdir(dir)).toEqual([])
assert.deepEqual(await fs.readdir(dir), [])
})
it('sync should create metadata if missing (encrypted)', async () => {
@@ -131,12 +133,12 @@ describe('encryption', () => {
noop
)
expect(await fs.readdir(dir)).toEqual(['encryption.json', 'metadata.json'])
assert.deepEqual(await fs.readdir(dir), ['encryption.json', 'metadata.json'])
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
// encrypted, should not be parsable
expect(async () => JSON.parse(await fs.readFile(`${dir}/metadata.json`))).rejects.toThrowError()
await assert.rejects(async () => JSON.parse(await fs.readFile(`${dir}/metadata.json`)))
})
it('sync should not modify existing metadata', async () => {
@@ -146,9 +148,9 @@ describe('encryption', () => {
await Disposable.use(await getSyncedHandler({ url: `file://${dir}` }), noop)
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual('none')
assert.equal(encryption.algorithm, 'none')
const metadata = JSON.parse(await fs.readFile(`${dir}/metadata.json`, 'utf-8'))
expect(metadata.random).toEqual('NOTSORANDOM')
assert.equal(metadata.random, 'NOTSORANDOM')
})
it('should modify metadata if empty', async () => {
@@ -160,11 +162,11 @@ describe('encryption', () => {
noop
)
let encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
await Disposable.use(getSyncedHandler({ url: `file://${dir}` }), noop)
encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual('none')
assert.equal(encryption.algorithm, 'none')
})
it(
@@ -178,9 +180,9 @@ describe('encryption', () => {
const handler = yield getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd91"` })
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
const metadata = JSON.parse(await handler.readFile(`./metadata.json`))
expect(metadata.random).toEqual('NOTSORANDOM')
assert.equal(metadata.random, 'NOTSORANDOM')
})
)
@@ -198,9 +200,9 @@ describe('encryption', () => {
// remote is now non empty : can't modify key anymore
await fs.writeFile(`${dir}/nonempty.json`, 'content')
await expect(
await assert.rejects(
Disposable.use(getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd10"` }), noop)
).rejects.toThrowError()
)
})
it('sync should fail when changing algorithm', async () => {
@@ -213,8 +215,8 @@ describe('encryption', () => {
// remote is now non empty : can't modify key anymore
await fs.writeFile(`${dir}/nonempty.json`, 'content')
await expect(
await assert.rejects(
Disposable.use(getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd91"` }), noop)
).rejects.toThrowError()
)
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { after, afterEach, before, beforeEach, describe, it } from 'test'
import { strict as assert } from 'assert'
import 'dotenv/config'
import { forOwn, random } from 'lodash'
@@ -53,11 +54,11 @@ handlers.forEach(url => {
})
}
beforeAll(async () => {
before(async () => {
handler = getHandler({ url }).addPrefix(`xo-fs-tests-${Date.now()}`)
await handler.sync()
})
afterAll(async () => {
after(async () => {
await handler.forget()
handler = undefined
})
@@ -72,67 +73,63 @@ handlers.forEach(url => {
describe('#type', () => {
it('returns the type of the remote', () => {
expect(typeof handler.type).toBe('string')
assert.equal(typeof handler.type, 'string')
})
})
describe('#getInfo()', () => {
let info
beforeAll(async () => {
before(async () => {
info = await handler.getInfo()
})
it('should return an object with info', async () => {
expect(typeof info).toBe('object')
assert.equal(typeof info, 'object')
})
it('should return correct type of attribute', async () => {
if (info.size !== undefined) {
expect(typeof info.size).toBe('number')
assert.equal(typeof info.size, 'number')
}
if (info.used !== undefined) {
expect(typeof info.used).toBe('number')
assert.equal(typeof info.used, 'number')
}
})
})
describe('#getSize()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
before(() => handler.outputFile('file', TEST_DATA))
testWithFileDescriptor('file', 'r', async () => {
expect(await handler.getSize('file')).toEqual(TEST_DATA_LEN)
assert.equal(await handler.getSize('file'), TEST_DATA_LEN)
})
})
describe('#list()', () => {
it(`should list the content of folder`, async () => {
await handler.outputFile('file', TEST_DATA)
await expect(await handler.list('.')).toEqual(['file'])
assert.deepEqual(await handler.list('.'), ['file'])
})
it('can prepend the directory to entries', async () => {
await handler.outputFile('dir/file', '')
expect(await handler.list('dir', { prependDir: true })).toEqual(['/dir/file'])
})
it('can prepend the directory to entries', async () => {
await handler.outputFile('dir/file', '')
expect(await handler.list('dir', { prependDir: true })).toEqual(['/dir/file'])
assert.deepEqual(await handler.list('dir', { prependDir: true }), ['/dir/file'])
})
it('throws ENOENT if no such directory', async () => {
expect((await rejectionOf(handler.list('dir'))).code).toBe('ENOENT')
await handler.rmtree('dir')
assert.equal((await rejectionOf(handler.list('dir'))).code, 'ENOENT')
})
it('can return an empty list for a missing directory', async () => {
expect(await handler.list('dir', { ignoreMissing: true })).toEqual([])
assert.deepEqual(await handler.list('dir', { ignoreMissing: true }), [])
})
})
describe('#mkdir()', () => {
it('creates a directory', async () => {
await handler.mkdir('dir')
await expect(await handler.list('.')).toEqual(['dir'])
assert.deepEqual(await handler.list('.'), ['dir'])
})
it('does not throw on existing directory', async () => {
@@ -143,15 +140,15 @@ handlers.forEach(url => {
it('throws ENOTDIR on existing file', async () => {
await handler.outputFile('file', '')
const error = await rejectionOf(handler.mkdir('file'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
})
describe('#mktree()', () => {
it('creates a tree of directories', async () => {
await handler.mktree('dir/dir')
await expect(await handler.list('.')).toEqual(['dir'])
await expect(await handler.list('dir')).toEqual(['dir'])
assert.deepEqual(await handler.list('.'), ['dir'])
assert.deepEqual(await handler.list('dir'), ['dir'])
})
it('does not throw on existing directory', async () => {
@@ -162,26 +159,27 @@ handlers.forEach(url => {
it('throws ENOTDIR on existing file', async () => {
await handler.outputFile('dir/file', '')
const error = await rejectionOf(handler.mktree('dir/file'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
it('throws ENOTDIR on existing file in path', async () => {
await handler.outputFile('file', '')
const error = await rejectionOf(handler.mktree('file/dir'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
})
describe('#outputFile()', () => {
it('writes data to a file', async () => {
await handler.outputFile('file', TEST_DATA)
expect(await handler.readFile('file')).toEqual(TEST_DATA)
assert.deepEqual(await handler.readFile('file'), TEST_DATA)
})
it('throws on existing files', async () => {
await handler.unlink('file')
await handler.outputFile('file', '')
const error = await rejectionOf(handler.outputFile('file', ''))
expect(error.code).toBe('EEXIST')
assert.equal(error.code, 'EEXIST')
})
it("shouldn't timeout in case of the respect of the parallel execution restriction", async () => {
@@ -192,7 +190,7 @@ handlers.forEach(url => {
})
describe('#read()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
before(() => handler.outputFile('file', TEST_DATA))
const start = random(TEST_DATA_LEN)
const size = random(TEST_DATA_LEN)
@@ -200,8 +198,8 @@ handlers.forEach(url => {
testWithFileDescriptor('file', 'r', async ({ file }) => {
const buffer = Buffer.alloc(size)
const result = await handler.read(file, buffer, start)
expect(result.buffer).toBe(buffer)
expect(result).toEqual({
assert.deepEqual(result.buffer, buffer)
assert.deepEqual(result, {
buffer,
bytesRead: Math.min(size, TEST_DATA_LEN - start),
})
@@ -211,12 +209,13 @@ handlers.forEach(url => {
describe('#readFile', () => {
it('returns a buffer containing the contents of the file', async () => {
await handler.outputFile('file', TEST_DATA)
expect(await handler.readFile('file')).toEqual(TEST_DATA)
assert.deepEqual(await handler.readFile('file'), TEST_DATA)
})
it('throws on missing file', async () => {
await handler.unlink('file')
const error = await rejectionOf(handler.readFile('file'))
expect(error.code).toBe('ENOENT')
assert.equal(error.code, 'ENOENT')
})
})
@@ -225,19 +224,19 @@ handlers.forEach(url => {
await handler.outputFile('file', TEST_DATA)
await handler.rename('file', `file2`)
expect(await handler.list('.')).toEqual(['file2'])
expect(await handler.readFile(`file2`)).toEqual(TEST_DATA)
assert.deepEqual(await handler.list('.'), ['file2'])
assert.deepEqual(await handler.readFile(`file2`), TEST_DATA)
})
it(`should rename the file and create dest directory`, async () => {
await handler.outputFile('file', TEST_DATA)
await handler.rename('file', `sub/file2`)
expect(await handler.list('sub')).toEqual(['file2'])
expect(await handler.readFile(`sub/file2`)).toEqual(TEST_DATA)
assert.deepEqual(await handler.list('sub'), ['file2'])
assert.deepEqual(await handler.readFile(`sub/file2`), TEST_DATA)
})
it(`should fail with enoent if source file is missing`, async () => {
const error = await rejectionOf(handler.rename('file', `sub/file2`))
expect(error.code).toBe('ENOENT')
assert.equal(error.code, 'ENOENT')
})
})
@@ -245,14 +244,15 @@ handlers.forEach(url => {
it('should remove an empty directory', async () => {
await handler.mkdir('dir')
await handler.rmdir('dir')
expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
it(`should throw on non-empty directory`, async () => {
await handler.outputFile('dir/file', '')
const error = await rejectionOf(handler.rmdir('.'))
await expect(error.code).toEqual('ENOTEMPTY')
assert.equal(error.code, 'ENOTEMPTY')
await handler.unlink('dir/file')
})
it('does not throw on missing directory', async () => {
@@ -265,7 +265,7 @@ handlers.forEach(url => {
await handler.outputFile('dir/file', '')
await handler.rmtree('dir')
expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
})
@@ -273,9 +273,9 @@ handlers.forEach(url => {
it('tests the remote appears to be working', async () => {
const answer = await handler.test()
expect(answer.success).toBe(true)
expect(typeof answer.writeRate).toBe('number')
expect(typeof answer.readRate).toBe('number')
assert.equal(answer.success, true)
assert.equal(typeof answer.writeRate, 'number')
assert.equal(typeof answer.readRate, 'number')
})
})
@@ -284,7 +284,7 @@ handlers.forEach(url => {
await handler.outputFile('file', TEST_DATA)
await handler.unlink('file')
await expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
it('does not throw on missing file', async () => {
@@ -294,6 +294,7 @@ handlers.forEach(url => {
describe('#write()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
afterEach(() => handler.unlink('file'))
const PATCH_DATA_LEN = Math.ceil(TEST_DATA_LEN / 2)
const PATCH_DATA = unsecureRandomBytes(PATCH_DATA_LEN)
@@ -322,7 +323,7 @@ handlers.forEach(url => {
describe(title, () => {
testWithFileDescriptor('file', 'r+', async ({ file }) => {
await handler.write(file, PATCH_DATA, offset)
await expect(await handler.readFile('file')).toEqual(expected)
assert.deepEqual(await handler.readFile('file'), expected)
})
})
}
@@ -330,6 +331,7 @@ handlers.forEach(url => {
})
describe('#truncate()', () => {
afterEach(() => handler.unlink('file'))
forOwn(
{
'shrinks file': (() => {
@@ -348,7 +350,7 @@ handlers.forEach(url => {
it(title, async () => {
await handler.outputFile('file', TEST_DATA)
await handler.truncate('file', length)
await expect(await handler.readFile('file')).toEqual(expected)
assert.deepEqual(await handler.readFile('file'), expected)
})
}
)

View File

@@ -34,11 +34,14 @@ function dontAddSyncStackTrace(fn, ...args) {
}
export default class LocalHandler extends RemoteHandlerAbstract {
#addSyncStackTrace
#retriesOnEagain
constructor(remote, opts = {}) {
super(remote)
this._addSyncStackTrace = opts.syncStackTraces ?? true ? addSyncStackTrace : dontAddSyncStackTrace
this._retriesOnEagain = {
this.#addSyncStackTrace = opts.syncStackTraces ?? true ? addSyncStackTrace : dontAddSyncStackTrace
this.#retriesOnEagain = {
delay: 1e3,
retries: 9,
...opts.retriesOnEagain,
@@ -51,26 +54,26 @@ export default class LocalHandler extends RemoteHandlerAbstract {
return 'file'
}
_getRealPath() {
getRealPath() {
return this._remote.path
}
_getFilePath(file) {
return this._getRealPath() + file
getFilePath(file) {
return this.getRealPath() + file
}
async _closeFile(fd) {
return this._addSyncStackTrace(fs.close, fd)
return this.#addSyncStackTrace(fs.close, fd)
}
async _copy(oldPath, newPath) {
return this._addSyncStackTrace(fs.copy, this._getFilePath(oldPath), this._getFilePath(newPath))
return this.#addSyncStackTrace(fs.copy, this.getFilePath(oldPath), this.getFilePath(newPath))
}
async _createReadStream(file, options) {
if (typeof file === 'string') {
const stream = fs.createReadStream(this._getFilePath(file), options)
await this._addSyncStackTrace(fromEvent, stream, 'open')
const stream = fs.createReadStream(this.getFilePath(file), options)
await this.#addSyncStackTrace(fromEvent, stream, 'open')
return stream
}
return fs.createReadStream('', {
@@ -82,8 +85,8 @@ export default class LocalHandler extends RemoteHandlerAbstract {
async _createWriteStream(file, options) {
if (typeof file === 'string') {
const stream = fs.createWriteStream(this._getFilePath(file), options)
await this._addSyncStackTrace(fromEvent, stream, 'open')
const stream = fs.createWriteStream(this.getFilePath(file), options)
await this.#addSyncStackTrace(fromEvent, stream, 'open')
return stream
}
return fs.createWriteStream('', {
@@ -98,7 +101,7 @@ export default class LocalHandler extends RemoteHandlerAbstract {
// filesystem, type, size, used, available, capacity and mountpoint.
// size, used, available and capacity may be `NaN` so we remove any `NaN`
// value from the object.
const info = await df.file(this._getFilePath('/'))
const info = await df.file(this.getFilePath('/'))
Object.keys(info).forEach(key => {
if (Number.isNaN(info[key])) {
delete info[key]
@@ -109,16 +112,16 @@ export default class LocalHandler extends RemoteHandlerAbstract {
}
async _getSize(file) {
const stats = await this._addSyncStackTrace(fs.stat, this._getFilePath(typeof file === 'string' ? file : file.path))
const stats = await this.#addSyncStackTrace(fs.stat, this.getFilePath(typeof file === 'string' ? file : file.path))
return stats.size
}
async _list(dir) {
return this._addSyncStackTrace(fs.readdir, this._getFilePath(dir))
return this.#addSyncStackTrace(fs.readdir, this.getFilePath(dir))
}
async _lock(path) {
const acquire = lockfile.lock.bind(undefined, this._getFilePath(path), {
const acquire = lockfile.lock.bind(undefined, this.getFilePath(path), {
async onCompromised(error) {
warn('lock compromised', { error })
try {
@@ -130,11 +133,11 @@ export default class LocalHandler extends RemoteHandlerAbstract {
},
})
let release = await this._addSyncStackTrace(acquire)
let release = await this.#addSyncStackTrace(acquire)
return async () => {
try {
await this._addSyncStackTrace(release)
await this.#addSyncStackTrace(release)
} catch (error) {
warn('lock could not be released', { error })
}
@@ -142,18 +145,18 @@ export default class LocalHandler extends RemoteHandlerAbstract {
}
_mkdir(dir, { mode }) {
return this._addSyncStackTrace(fs.mkdir, this._getFilePath(dir), { mode })
return this.#addSyncStackTrace(fs.mkdir, this.getFilePath(dir), { mode })
}
async _openFile(path, flags) {
return this._addSyncStackTrace(fs.open, this._getFilePath(path), flags)
return this.#addSyncStackTrace(fs.open, this.getFilePath(path), flags)
}
async _read(file, buffer, position) {
const needsClose = typeof file === 'string'
file = needsClose ? await this._addSyncStackTrace(fs.open, this._getFilePath(file), 'r') : file.fd
file = needsClose ? await this.#addSyncStackTrace(fs.open, this.getFilePath(file), 'r') : file.fd
try {
return await this._addSyncStackTrace(
return await this.#addSyncStackTrace(
fs.read,
file,
buffer,
@@ -163,44 +166,44 @@ export default class LocalHandler extends RemoteHandlerAbstract {
)
} finally {
if (needsClose) {
await this._addSyncStackTrace(fs.close, file)
await this.#addSyncStackTrace(fs.close, file)
}
}
}
async _readFile(file, options) {
const filePath = this._getFilePath(file)
return await this._addSyncStackTrace(retry, () => fs.readFile(filePath, options), this._retriesOnEagain)
const filePath = this.getFilePath(file)
return await this.#addSyncStackTrace(retry, () => fs.readFile(filePath, options), this.#retriesOnEagain)
}
async _rename(oldPath, newPath) {
return this._addSyncStackTrace(fs.rename, this._getFilePath(oldPath), this._getFilePath(newPath))
return this.#addSyncStackTrace(fs.rename, this.getFilePath(oldPath), this.getFilePath(newPath))
}
async _rmdir(dir) {
return this._addSyncStackTrace(fs.rmdir, this._getFilePath(dir))
return this.#addSyncStackTrace(fs.rmdir, this.getFilePath(dir))
}
async _sync() {
const path = this._getRealPath('/')
await this._addSyncStackTrace(fs.ensureDir, path)
await this._addSyncStackTrace(fs.access, path, fs.R_OK | fs.W_OK)
const path = this.getRealPath('/')
await this.#addSyncStackTrace(fs.ensureDir, path)
await this.#addSyncStackTrace(fs.access, path, fs.R_OK | fs.W_OK)
}
_truncate(file, len) {
return this._addSyncStackTrace(fs.truncate, this._getFilePath(file), len)
return this.#addSyncStackTrace(fs.truncate, this.getFilePath(file), len)
}
async _unlink(file) {
const filePath = this._getFilePath(file)
return await this._addSyncStackTrace(retry, () => fs.unlink(filePath), this._retriesOnEagain)
const filePath = this.getFilePath(file)
return await this.#addSyncStackTrace(retry, () => fs.unlink(filePath), this.#retriesOnEagain)
}
_writeFd(file, buffer, position) {
return this._addSyncStackTrace(fs.write, file.fd, buffer, 0, buffer.length, position)
return this.#addSyncStackTrace(fs.write, file.fd, buffer, 0, buffer.length, position)
}
_writeFile(file, data, { flags }) {
return this._addSyncStackTrace(fs.writeFile, this._getFilePath(file), data, { flag: flags })
return this.#addSyncStackTrace(fs.writeFile, this.getFilePath(file), data, { flag: flags })
}
}
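The whole LocalHandler hunk is a mechanical migration from underscore-prefixed "privates" (`this._addSyncStackTrace`) to native `#` class fields (`this.#addSyncStackTrace`), while `_getRealPath`/`_getFilePath` simply lose their underscore and become public, consistent with the fact that `#` members are invisible even to subclasses. The practical difference, in an illustrative sketch (names are not from this codebase):

```typescript
class Conventional {
  _secret = 1; // naming convention only: still reachable from outside
}

class Native {
  #secret = 1; // enforced by the language, invisible to subclasses too
  read() {
    return this.#secret;
  }
}

console.log(new Conventional()._secret); // 1 (nothing prevents this access)
console.log(new Native().read()); // 1
// Accessing new Native().#secret outside the class body is a SyntaxError.
```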

View File

@@ -34,6 +34,10 @@ const MAX_PART_SIZE = 1024 * 1024 * 1024 * 5 // 5GB
const { warn } = createLogger('xo:fs:s3')
export default class S3Handler extends RemoteHandlerAbstract {
#bucket
#dir
#s3
constructor(remote, _opts) {
super(remote)
const {
@@ -46,7 +50,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
region = guessAwsRegion(host),
} = parse(remote.url)
this._s3 = new S3Client({
this.#s3 = new S3Client({
apiVersion: '2006-03-01',
endpoint: `${protocol}://${host}`,
forcePathStyle: true,
@@ -69,27 +73,27 @@ export default class S3Handler extends RemoteHandlerAbstract {
})
// Workaround for https://github.com/aws/aws-sdk-js-v3/issues/2673
this._s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this._s3.config))
this.#s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this.#s3.config))
const parts = split(path)
this._bucket = parts.shift()
this._dir = join(...parts)
this.#bucket = parts.shift()
this.#dir = join(...parts)
}
get type() {
return 's3'
}
_makeCopySource(path) {
return join(this._bucket, this._dir, path)
#makeCopySource(path) {
return join(this.#bucket, this.#dir, path)
}
_makeKey(file) {
return join(this._dir, file)
#makeKey(file) {
return join(this.#dir, file)
}
_makePrefix(dir) {
const prefix = join(this._dir, dir, '/')
#makePrefix(dir) {
const prefix = join(this.#dir, dir, '/')
// no prefix for root
if (prefix !== './') {
@@ -97,20 +101,20 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
}
_createParams(file) {
return { Bucket: this._bucket, Key: this._makeKey(file) }
#createParams(file) {
return { Bucket: this.#bucket, Key: this.#makeKey(file) }
}
async _multipartCopy(oldPath, newPath) {
async #multipartCopy(oldPath, newPath) {
const size = await this._getSize(oldPath)
const CopySource = this._makeCopySource(oldPath)
const multipartParams = await this._s3.send(new CreateMultipartUploadCommand({ ...this._createParams(newPath) }))
const CopySource = this.#makeCopySource(oldPath)
const multipartParams = await this.#s3.send(new CreateMultipartUploadCommand({ ...this.#createParams(newPath) }))
try {
const parts = []
let start = 0
while (start < size) {
const partNumber = parts.length + 1
const upload = await this._s3.send(
const upload = await this.#s3.send(
new UploadPartCopyCommand({
...multipartParams,
CopySource,
@@ -121,31 +125,31 @@ export default class S3Handler extends RemoteHandlerAbstract {
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partNumber })
start += MAX_PART_SIZE
}
await this._s3.send(
await this.#s3.send(
new CompleteMultipartUploadCommand({
...multipartParams,
MultipartUpload: { Parts: parts },
})
)
} catch (e) {
await this._s3.send(new AbortMultipartUploadCommand(multipartParams))
await this.#s3.send(new AbortMultipartUploadCommand(multipartParams))
throw e
}
}
async _copy(oldPath, newPath) {
const CopySource = this._makeCopySource(oldPath)
const CopySource = this.#makeCopySource(oldPath)
try {
await this._s3.send(
await this.#s3.send(
new CopyObjectCommand({
...this._createParams(newPath),
...this.#createParams(newPath),
CopySource,
})
)
} catch (e) {
// object > 5GB must be copied part by part
if (e.name === 'EntityTooLarge') {
return this._multipartCopy(oldPath, newPath)
return this.#multipartCopy(oldPath, newPath)
}
// normalize this error code
if (e.name === 'NoSuchKey') {
@@ -159,20 +163,20 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
}
async _isNotEmptyDir(path) {
const result = await this._s3.send(
async #isNotEmptyDir(path) {
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
MaxKeys: 1,
Prefix: this._makePrefix(path),
Prefix: this.#makePrefix(path),
})
)
return result.Contents?.length > 0
}
async _isFile(path) {
async #isFile(path) {
try {
await this._s3.send(new HeadObjectCommand(this._createParams(path)))
await this.#s3.send(new HeadObjectCommand(this.#createParams(path)))
return true
} catch (error) {
if (error.name === 'NotFound') {
@@ -189,9 +193,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
pipeline(input, Body, () => {})
const upload = new Upload({
client: this._s3,
client: this.#s3,
params: {
...this._createParams(path),
...this.#createParams(path),
Body,
},
})
@@ -202,7 +206,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
try {
await validator.call(this, path)
} catch (error) {
await this.unlink(path)
await this.__unlink(path)
throw error
}
}
@@ -224,9 +228,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
},
})
async _writeFile(file, data, options) {
return this._s3.send(
return this.#s3.send(
new PutObjectCommand({
...this._createParams(file),
...this.#createParams(file),
Body: data,
})
)
@@ -234,7 +238,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
async _createReadStream(path, options) {
try {
return (await this._s3.send(new GetObjectCommand(this._createParams(path)))).Body
return (await this.#s3.send(new GetObjectCommand(this.#createParams(path)))).Body
} catch (e) {
if (e.name === 'NoSuchKey') {
const error = new Error(`ENOENT: no such file '${path}'`)
@@ -247,9 +251,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _unlink(path) {
await this._s3.send(new DeleteObjectCommand(this._createParams(path)))
await this.#s3.send(new DeleteObjectCommand(this.#createParams(path)))
if (await this._isNotEmptyDir(path)) {
if (await this.#isNotEmptyDir(path)) {
const error = new Error(`EISDIR: illegal operation on a directory, unlink '${path}'`)
error.code = 'EISDIR'
error.path = path
@@ -260,12 +264,12 @@ export default class S3Handler extends RemoteHandlerAbstract {
async _list(dir) {
let NextContinuationToken
const uniq = new Set()
const Prefix = this._makePrefix(dir)
const Prefix = this.#makePrefix(dir)
do {
const result = await this._s3.send(
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
Prefix,
Delimiter: '/',
// will only return paths up to the delimiter
@@ -295,7 +299,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _mkdir(path) {
if (await this._isFile(path)) {
if (await this.#isFile(path)) {
const error = new Error(`ENOTDIR: file already exists, mkdir '${path}'`)
error.code = 'ENOTDIR'
error.path = path
@@ -306,15 +310,15 @@ export default class S3Handler extends RemoteHandlerAbstract {
// s3 doesn't have a rename operation, so copy + delete source
async _rename(oldPath, newPath) {
await this.copy(oldPath, newPath)
await this._s3.send(new DeleteObjectCommand(this._createParams(oldPath)))
await this.__copy(oldPath, newPath)
await this.#s3.send(new DeleteObjectCommand(this.#createParams(oldPath)))
}
async _getSize(file) {
if (typeof file !== 'string') {
file = file.fd
}
const result = await this._s3.send(new HeadObjectCommand(this._createParams(file)))
const result = await this.#s3.send(new HeadObjectCommand(this.#createParams(file)))
return +result.ContentLength
}
@@ -322,15 +326,15 @@ export default class S3Handler extends RemoteHandlerAbstract {
if (typeof file !== 'string') {
file = file.fd
}
const params = this._createParams(file)
const params = this.#createParams(file)
params.Range = `bytes=${position}-${position + buffer.length - 1}`
try {
const result = await this._s3.send(new GetObjectCommand(params))
const result = await this.#s3.send(new GetObjectCommand(params))
const bytesRead = await copyStreamToBuffer(result.Body, buffer)
return { bytesRead, buffer }
} catch (e) {
if (e.name === 'NoSuchKey') {
if (await this._isNotEmptyDir(file)) {
if (await this.#isNotEmptyDir(file)) {
const error = new Error(`${file} is a directory`)
error.code = 'EISDIR'
error.path = file
@@ -342,7 +346,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _rmdir(path) {
if (await this._isNotEmptyDir(path)) {
if (await this.#isNotEmptyDir(path)) {
const error = new Error(`ENOTEMPTY: directory not empty, rmdir '${path}'`)
error.code = 'ENOTEMPTY'
error.path = path
@@ -356,11 +360,11 @@ export default class S3Handler extends RemoteHandlerAbstract {
// @todo : use parallel processing for unlink
async _rmtree(path) {
let NextContinuationToken
const Prefix = this._makePrefix(path)
const Prefix = this.#makePrefix(path)
do {
const result = await this._s3.send(
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
Prefix,
ContinuationToken: NextContinuationToken,
})
@@ -372,9 +376,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
async ({ Key }) => {
// _unlink will add the prefix, but Key contains everything
// also we don't need to check whether we're deleting a directory, since the list only returns files
await this._s3.send(
await this.#s3.send(
new DeleteObjectCommand({
Bucket: this._bucket,
Bucket: this.#bucket,
Key,
})
)
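S3Handler gets the same `_` to `#` treatment, with one extra subtlety: internal calls switch from the public `unlink`/`copy` to `__unlink`/`__copy`, which read like unwrapped implementations on the abstract base class (their definitions aren't shown in this excerpt). The underlying constraint is that `#` methods cannot serve as overridable hooks, so template-method extension points must stay `_`-prefixed; a minimal sketch with illustrative names:

```typescript
abstract class RemoteHandlerSketch {
  // Public wrapper: the place for path normalization, instrumentation, etc.
  async unlink(path: string): Promise<void> {
    return this._unlink(path);
  }
  // Underscore hook: must remain visible so subclasses can override it.
  protected abstract _unlink(path: string): Promise<void>;
}

class S3Sketch extends RemoteHandlerSketch {
  #bucket = "backups"; // true private state, unreachable from subclasses
  protected async _unlink(path: string): Promise<void> {
    console.log(`DELETE s3://${this.#bucket}${path}`);
  }
}

void new S3Sketch().unlink("/file"); // DELETE s3://backups/file
```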

View File

@@ -15,6 +15,8 @@
- Add a star icon near the pool master (PR [#6712](https://github.com/vatesfr/xen-orchestra/pull/6712))
- Display an error message if the data cannot be fetched (PR [#6525](https://github.com/vatesfr/xen-orchestra/pull/6525))
- Add "Under Construction" views (PR [#6673](https://github.com/vatesfr/xen-orchestra/pull/6673))
- Ability to change the state of selected VMs from the pool's list of VMs (PR [#6782](https://github.com/vatesfr/xen-orchestra/pull/6782))
- Ability to copy selected VMs from the pool's list of VMs (PR [#6847](https://github.com/vatesfr/xen-orchestra/pull/6847))
## **0.1.0**

View File

@@ -157,35 +157,6 @@ export const useFoobarStore = defineStore("foobar", () => {
});
```
#### Xen Api Collection Stores
When creating a store for a collection of Xen Api objects, use the `createXenApiCollectionStoreContext` helper.
```typescript
export const useConsoleStore = defineStore("console", () =>
createXenApiCollectionStoreContext("console")
);
```
##### Extending the base context
Here is how to extend the base context:
```typescript
import { computed } from "vue";
export const useFoobarStore = defineStore("foobar", () => {
const baseContext = createXenApiCollectionStoreContext("foobar");
const myCustomGetter = computed(() => baseContext.ids.reverse());
return {
...baseContext,
myCustomGetter,
};
});
```
### I18n
Internationalization of the app is done with [Vue-i18n](https://vue-i18n.intlify.dev/).

View File

@@ -0,0 +1,144 @@
# Stores for XenApiRecord collections
All collections of `XenApiRecord` are stored inside the `xapiCollectionStore`.
To retrieve a collection, invoke `useXapiCollectionStore().get(type)`.
## Accessing a collection
In order to use a collection, you'll need to subscribe to it.
```typescript
const consoleStore = useXapiCollectionStore().get("console");
const { records, getByUuid /* ... */ } = consoleStore.subscribe();
```
## Deferred subscription
If you wish to initialize the subscription on demand, you can pass `{ immediate: false }` as options to `subscribe()`.
```typescript
const consoleStore = useXapiCollectionStore().get("console");
const { records, start, isStarted /* ... */ } = consoleStore.subscribe({
immediate: false,
});
// Later, call start() to initialize the subscription.
```
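A hedged usage sketch of the deferred form, where `isConsoleVisible` is a hypothetical ref and `isStarted` is assumed to be a ref as the destructuring suggests (neither is confirmed by this page):

```typescript
import { watch } from "vue";

const { records, start, isStarted } = consoleStore.subscribe({
  immediate: false,
});

// isConsoleVisible: hypothetical ref tracking whether the panel is shown.
// Start the subscription only once the data is actually needed.
watch(isConsoleVisible, (visible) => {
  if (visible && !isStarted.value) {
    start();
  }
});
```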
## Create a dedicated store for a collection
To create a dedicated store for a specific `XenApiRecord`, simply return the collection from the XAPI Collection Store:
```typescript
export const useConsoleStore = defineStore("console", () =>
useXapiCollectionStore().get("console")
);
```
## Extending the base Subscription
To extend the base Subscription, you'll need to override the `subscribe` method.
For that, you can use the `createSubscribe<XenApiRecord, Extensions>((options) => { /* ... */ })` helper.
### Define the extensions
Subscription extensions are defined as `(object | [object, RequiredOptions])[]`.
When using a tuple (`[object, RequiredOptions]`), the corresponding `object` type will be added to the subscription if
the `RequiredOptions` for that tuple are present in the options passed to `subscribe`.
```typescript
// Always present extension
type DefaultExtension = {
propA: string;
propB: ComputedRef<number>;
};
// Conditional extension 1
type FirstConditionalExtension = [
{ propC: ComputedRef<string> }, // <- This signature will be added
{ optC: string } // <- if this condition is met
];
// Conditional extension 2
type SecondConditionalExtension = [
{ propD: () => void }, // <- This signature will be added
{ optD: number } // <- if this condition is met
];
// Create the extensions array
type Extensions = [
DefaultExtension,
FirstConditionalExtension,
SecondConditionalExtension
];
```
### Define the subscription
```typescript
export const useConsoleStore = defineStore("console", () => {
const consoleCollection = useXapiCollectionStore().get("console");
const subscribe = createSubscribe<XenApiConsole, Extensions>((options) => {
const originalSubscription = consoleCollection.subscribe(options);
const extendedSubscription = {
propA: "Some string",
propB: computed(() => 42),
};
const propCSubscription = options?.optC !== undefined && {
propC: computed(() => "Some other string"),
};
const propDSubscription = options?.optD !== undefined && {
propD: () => console.log("Hello"),
};
return {
...originalSubscription,
...extendedSubscription,
...propCSubscription,
...propDSubscription,
};
});
return {
...consoleCollection,
subscribe,
};
});
```
The generated `subscribe` method will then automatically have the following `options` signature:
```typescript
type Options = {
immediate?: false;
optC?: string;
optD?: number;
};
```
### Use the subscription
In each case, all the default properties (`records`, `getByUuid`, etc.) will be present.
```typescript
const store = useConsoleStore();
// No options (propA and propB will be present)
const subscription = store.subscribe();
// optC option (propA, propB and propC will be present)
const subscription = store.subscribe({ optC: "Hello" });
// optD option (propA, propB and propD will be present)
const subscription = store.subscribe({ optD: 12 });
// optC and optD options (propA, propB, propC and propD will be present)
const subscription = store.subscribe({ optC: "Hello", optD: 12 });
```

View File

@@ -19,8 +19,8 @@
"@types/d3-time-format": "^4.0.0",
"@types/lodash-es": "^4.17.6",
"@types/marked": "^4.0.8",
"@vueuse/core": "^9.5.0",
"@vueuse/math": "^9.5.0",
"@vueuse/core": "^10.1.2",
"@vueuse/math": "^10.1.2",
"complex-matcher": "^0.7.0",
"d3-time-format": "^4.1.0",
"decorator-synchronized": "^0.6.0",
@@ -34,19 +34,19 @@
"lodash-es": "^4.17.21",
"make-error": "^1.3.6",
"marked": "^4.2.12",
"pinia": "^2.0.14",
"pinia": "^2.1.2",
"placement.js": "^1.0.0-beta.5",
"vue": "^3.2.37",
"vue": "^3.3.4",
"vue-echarts": "^6.2.3",
"vue-i18n": "9",
"vue-router": "^4.0.16"
"vue-i18n": "^9.2.2",
"vue-router": "^4.2.1"
},
"devDependencies": {
"@intlify/vite-plugin-vue-i18n": "^6.0.1",
"@intlify/unplugin-vue-i18n": "^0.10.0",
"@limegrass/eslint-plugin-import-alias": "^1.0.5",
"@rushstack/eslint-patch": "^1.1.0",
"@types/node": "^16.11.41",
"@vitejs/plugin-vue": "^3.2.0",
"@vitejs/plugin-vue": "^4.2.3",
"@vue/eslint-config-prettier": "^7.0.0",
"@vue/eslint-config-typescript": "^11.0.0",
"@vue/tsconfig": "^0.1.3",
@@ -56,9 +56,9 @@
"postcss-custom-media": "^9.0.1",
"postcss-nested": "^6.0.0",
"typescript": "^4.9.3",
"vite": "^3.2.4",
"vite-plugin-pages": "^0.27.1",
"vue-tsc": "^1.0.9"
"vite": "^4.3.8",
"vite-plugin-pages": "^0.29.1",
"vue-tsc": "^1.6.5"
},
"private": true,
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/lite",

View File

@@ -1,25 +1,5 @@
<template>
<UiModal
v-if="isSslModalOpen"
:icon="faServer"
color="error"
@close="clearUnreachableHostsUrls"
>
<template #title>{{ $t("unreachable-hosts") }}</template>
<template #subtitle>{{ $t("following-hosts-unreachable") }}</template>
<p>{{ $t("allow-self-signed-ssl") }}</p>
<ul>
<li v-for="url in unreachableHostsUrls" :key="url.hostname">
<a :href="url.href" rel="noopener" target="_blank">{{ url.href }}</a>
</li>
</ul>
<template #buttons>
<UiButton color="success" @click="reload">
{{ $t("unreachable-hosts-reload-page") }}
</UiButton>
<UiButton @click="clearUnreachableHostsUrls">{{ $t("cancel") }}</UiButton>
</template>
</UiModal>
<UnreachableHostsModal />
<div v-if="!$route.meta.hasStoryNav && !xenApiStore.isConnected">
<AppLogin />
</div>
@@ -41,21 +21,14 @@ import AppHeader from "@/components/AppHeader.vue";
import AppLogin from "@/components/AppLogin.vue";
import AppNavigation from "@/components/AppNavigation.vue";
import AppTooltips from "@/components/AppTooltips.vue";
import UiButton from "@/components/ui/UiButton.vue";
import UiModal from "@/components/ui/UiModal.vue";
import UnreachableHostsModal from "@/components/UnreachableHostsModal.vue";
import { useChartTheme } from "@/composables/chart-theme.composable";
import { useHostStore } from "@/stores/host.store";
import { usePoolStore } from "@/stores/pool.store";
import { useUiStore } from "@/stores/ui.store";
import { useXenApiStore } from "@/stores/xen-api.store";
import { faServer } from "@fortawesome/free-solid-svg-icons";
import { useActiveElement, useMagicKeys, whenever } from "@vueuse/core";
import { logicAnd } from "@vueuse/math";
import { difference } from "lodash-es";
import { computed, ref, watch } from "vue";
const unreachableHostsUrls = ref<URL[]>([]);
const clearUnreachableHostsUrls = () => (unreachableHostsUrls.value = []);
import { computed } from "vue";
let link = document.querySelector(
"link[rel~='icon']"
@@ -70,7 +43,6 @@ link.href = favicon;
document.title = "XO Lite";
const xenApiStore = useXenApiStore();
const { records: hosts } = useHostStore().subscribe();
const { pool } = usePoolStore().subscribe();
useChartTheme();
const uiStore = useUiStore();
@@ -93,17 +65,6 @@ if (import.meta.env.DEV) {
);
}
watch(hosts, (hosts, previousHosts) => {
difference(hosts, previousHosts).forEach((host) => {
const url = new URL("http://localhost");
url.protocol = window.location.protocol;
url.hostname = host.address;
fetch(url, { mode: "no-cors" }).catch(() =>
unreachableHostsUrls.value.push(url)
);
});
});
whenever(
() => pool.value?.$ref,
async (poolRef) => {
@@ -112,9 +73,6 @@ whenever(
await xenApi.startWatch();
}
);
const isSslModalOpen = computed(() => unreachableHostsUrls.value.length > 0);
const reload = () => window.location.reload();
</script>
<style lang="postcss">

View File

@@ -1,15 +1,15 @@
<template>
<div v-if="!isDisabled" ref="tooltipElement" class="app-tooltip">
<span class="triangle" />
<span class="label">{{ content }}</span>
<span class="label">{{ options.content }}</span>
</div>
</template>
<script lang="ts" setup>
import { isEmpty, isFunction, isString } from "lodash-es";
import type { TooltipOptions } from "@/stores/tooltip.store";
import { isString } from "lodash-es";
import place from "placement.js";
import { computed, ref, watchEffect } from "vue";
import type { TooltipOptions } from "@/stores/tooltip.store";
const props = defineProps<{
target: HTMLElement;
@@ -18,29 +18,13 @@ const props = defineProps<{
const tooltipElement = ref<HTMLElement>();
const content = computed(() =>
isString(props.options) ? props.options : props.options.content
const isDisabled = computed(() =>
isString(props.options.content)
? props.options.content.trim() === ""
: props.options.content === false
);
const isDisabled = computed(() => {
if (isEmpty(content.value)) {
return true;
}
if (isString(props.options)) {
return false;
}
if (isFunction(props.options.disabled)) {
return props.options.disabled(props.target);
}
return props.options.disabled ?? false;
});
const placement = computed(() =>
isString(props.options) ? "top" : props.options.placement ?? "top"
);
const placement = computed(() => props.options.placement ?? "top");
watchEffect(() => {
if (tooltipElement.value) {

View File

@@ -14,7 +14,12 @@
</UiActionButton>
</UiFilterGroup>
<UiModal v-if="isOpen" :icon="faFilter" @submit.prevent="handleSubmit">
<UiModal
v-if="isOpen"
:icon="faFilter"
@submit.prevent="handleSubmit"
@close="handleCancel"
>
<div class="rows">
<CollectionFilterRow
v-for="(newFilter, index) in newFilters"

View File

@@ -17,7 +17,12 @@
</UiActionButton>
</UiFilterGroup>
<UiModal v-if="isOpen" :icon="faSort" @submit.prevent="handleSubmit">
<UiModal
v-if="isOpen"
:icon="faSort"
@submit.prevent="handleSubmit"
@close="handleCancel"
>
<div class="form-widgets">
<FormWidget :label="$t('sort-by')">
<select v-model="newSortProperty">

View File

@@ -4,7 +4,7 @@
<script lang="ts" setup>
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import type { PowerState } from "@/libs/xen-api";
import { POWER_STATE } from "@/libs/xen-api";
import {
faMoon,
faPause,
@@ -15,14 +15,14 @@ import {
import { computed } from "vue";
const props = defineProps<{
state: PowerState;
state: POWER_STATE;
}>();
const icons = {
Running: faPlay,
Paused: faPause,
Suspended: faMoon,
Halted: faStop,
[POWER_STATE.RUNNING]: faPlay,
[POWER_STATE.PAUSED]: faPause,
[POWER_STATE.SUSPENDED]: faMoon,
[POWER_STATE.HALTED]: faStop,
};
const icon = computed(() => icons[props.state] ?? faQuestion);
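This mapping relies on `POWER_STATE` being a string-valued enum whose members equal XAPI's raw `power_state` literals. The actual definition lives in `@/libs/xen-api` and isn't shown here, so the following is an assumed sketch:

```typescript
// Assumed shape: string-valued members so POWER_STATE.RUNNING === "Running",
// letting enum keys index the same icon map the raw XAPI strings used to.
enum POWER_STATE {
  RUNNING = "Running",
  PAUSED = "Paused",
  SUSPENDED = "Suspended",
  HALTED = "Halted",
}
```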

View File

@@ -4,7 +4,7 @@
<script lang="ts" setup>
import { fibonacci } from "iterable-backoff";
import { computed, onBeforeUnmount, ref, watch, watchEffect } from "vue";
import { computed, onBeforeUnmount, ref, watchEffect } from "vue";
import VncClient from "@novnc/novnc/core/rfb";
import { useXenApiStore } from "@/stores/xen-api.store";
import { promiseTimeout } from "@vueuse/shared";
@@ -87,7 +87,6 @@ const createVncConnection = async () => {
vncClient.addEventListener("connect", handleConnectionEvent);
};
watch(url, clearVncClient);
watchEffect(() => {
if (
url.value === undefined ||
@@ -98,6 +97,8 @@ watchEffect(() => {
}
nConnectionAttempts = 0;
clearVncClient();
createVncConnection();
});

View File

@@ -0,0 +1,59 @@
<template>
<UiModal
v-if="isSslModalOpen"
:icon="faServer"
color="error"
@close="clearUnreachableHostsUrls"
>
<template #title>{{ $t("unreachable-hosts") }}</template>
<div class="description">
<p>{{ $t("following-hosts-unreachable") }}</p>
<p>{{ $t("allow-self-signed-ssl") }}</p>
<ul>
<li v-for="url in unreachableHostsUrls" :key="url">
<a :href="url" class="link" rel="noopener" target="_blank">{{
url
}}</a>
</li>
</ul>
</div>
<template #buttons>
<UiButton color="success" @click="reload">
{{ $t("unreachable-hosts-reload-page") }}
</UiButton>
<UiButton @click="clearUnreachableHostsUrls">{{ $t("cancel") }}</UiButton>
</template>
</UiModal>
</template>
<script lang="ts" setup>
import { faServer } from "@fortawesome/free-solid-svg-icons";
import UiModal from "@/components/ui/UiModal.vue";
import UiButton from "@/components/ui/UiButton.vue";
import { computed, ref, watch } from "vue";
import { difference } from "lodash";
import { useHostStore } from "@/stores/host.store";
const { records: hosts } = useHostStore().subscribe();
const unreachableHostsUrls = ref<Set<string>>(new Set());
const clearUnreachableHostsUrls = () => unreachableHostsUrls.value.clear();
const isSslModalOpen = computed(() => unreachableHostsUrls.value.size > 0);
const reload = () => window.location.reload();
watch(hosts, (nextHosts, previousHosts) => {
difference(nextHosts, previousHosts).forEach((host) => {
const url = new URL("http://localhost");
url.protocol = window.location.protocol;
url.hostname = host.address;
fetch(url, { mode: "no-cors" }).catch(() =>
unreachableHostsUrls.value.add(url.toString())
);
});
});
</script>
<style lang="postcss" scoped>
.description p {
margin: 1rem 0;
}
</style>

View File

@@ -4,11 +4,11 @@
<div
v-for="item in computedData.sortedArray"
:key="item.id"
class="progress-item"
:class="{
warning: item.value > MIN_WARNING_VALUE,
error: item.value > MIN_DANGEROUS_VALUE,
}"
class="progress-item"
>
<UiProgressBar :value="item.value" color="custom" />
<UiProgressLegend
@@ -18,15 +18,15 @@
</div>
<slot :total-percent="computedData.totalPercentUsage" name="footer" />
</template>
<UiSpinner v-else class="spinner" />
<UiCardSpinner v-else />
</div>
</template>
<script lang="ts" setup>
import { computed } from "vue";
import UiProgressBar from "@/components/ui/progress/UiProgressBar.vue";
import UiProgressLegend from "@/components/ui/progress/UiProgressLegend.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import { computed } from "vue";
interface Data {
id: string;
@@ -67,14 +67,6 @@ const computedData = computed(() => {
</script>
<style lang="postcss" scoped>
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: auto;
width: 40px;
height: 40px;
}
.progress-item:nth-child(1) {
--progress-bar-color: var(--color-extra-blue-d60);
}
@@ -91,9 +83,11 @@ const computedData = computed(() => {
--progress-bar-height: 1.2rem;
--progress-bar-color: var(--color-extra-blue-l20);
--progress-bar-background-color: var(--color-blue-scale-400);
&.warning {
--progress-bar-color: var(--color-orange-world-base);
}
&.error {
--progress-bar-color: var(--color-red-vates-base);
}

View File

@@ -18,33 +18,19 @@
</component>
</template>
<script lang="ts">
export default {
name: "FormCheckbox",
inheritAttrs: false,
};
</script>
<script lang="ts" setup>
import {
type HTMLAttributes,
type InputHTMLAttributes,
computed,
inject,
ref,
} from "vue";
import { type HTMLAttributes, computed, inject, ref } from "vue";
import { faCheck, faCircle, faMinus } from "@fortawesome/free-solid-svg-icons";
import { useVModel } from "@vueuse/core";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
// Temporary workaround for https://github.com/vuejs/core/issues/4294
interface Props extends Omit<InputHTMLAttributes, ""> {
defineOptions({ inheritAttrs: false });
const props = defineProps<{
modelValue?: unknown;
disabled?: boolean;
wrapperAttrs?: HTMLAttributes;
}
const props = defineProps<Props>();
}>();
const emit = defineEmits<{
(event: "update:modelValue", value: boolean): void;

View File

@@ -44,17 +44,9 @@
</span>
</template>
<script lang="ts">
export default {
name: "FormInput",
inheritAttrs: false,
};
</script>
<script lang="ts" setup>
import {
type HTMLAttributes,
type InputHTMLAttributes,
computed,
inject,
nextTick,
@@ -67,20 +59,22 @@ import { faAngleDown } from "@fortawesome/free-solid-svg-icons";
import { useTextareaAutosize, useVModel } from "@vueuse/core";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
// Temporary workaround for https://github.com/vuejs/core/issues/4294
interface Props extends Omit<InputHTMLAttributes, ""> {
modelValue?: unknown;
color?: Color;
before?: Omit<IconDefinition, ""> | string;
after?: Omit<IconDefinition, ""> | string;
beforeWidth?: string;
afterWidth?: string;
disabled?: boolean;
right?: boolean;
wrapperAttrs?: HTMLAttributes;
}
defineOptions({ inheritAttrs: false });
const props = withDefaults(defineProps<Props>(), { color: "info" });
const props = withDefaults(
defineProps<{
modelValue?: any;
color?: Color;
before?: IconDefinition | string;
after?: IconDefinition | string;
beforeWidth?: string;
afterWidth?: string;
disabled?: boolean;
right?: boolean;
wrapperAttrs?: HTMLAttributes;
}>(),
{ color: "info" }
);
const inputElement = ref();

View File

@@ -0,0 +1,41 @@
<template>
<div class="form-input-group">
<slot />
</div>
</template>
<style lang="postcss" scoped>
.form-input-group {
display: inline-flex;
align-items: center;
:slotted(.form-input),
:slotted(.form-select) {
&:hover {
z-index: 1;
}
&:focus-within {
z-index: 2;
}
&:not(:first-child) {
margin-left: -1px;
.input,
.select {
border-top-left-radius: 0;
border-bottom-left-radius: 0;
}
}
&:not(:last-child) {
.input,
.select {
border-top-right-radius: 0;
border-bottom-right-radius: 0;
}
}
}
}
</style>

View File

@@ -1,12 +1,5 @@
<template>
<li
v-if="host !== undefined"
v-tooltip="{
content: host.name_label,
disabled: isTooltipDisabled,
}"
class="infra-host-item"
>
<li v-if="host !== undefined" class="infra-host-item">
<InfraItemLabel
:active="isCurrentHost"
:icon="faServer"
@@ -36,7 +29,6 @@ import InfraAction from "@/components/infra/InfraAction.vue";
import InfraItemLabel from "@/components/infra/InfraItemLabel.vue";
import InfraVmList from "@/components/infra/InfraVmList.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { useHostStore } from "@/stores/host.store";
import { usePoolStore } from "@/stores/pool.store";
import { useUiStore } from "@/stores/ui.store";
@@ -66,9 +58,6 @@ const isCurrentHost = computed(
() => props.hostOpaqueRef === uiStore.currentHostOpaqueRef
);
const [isExpanded, toggle] = useToggle(true);
const isTooltipDisabled = (target: HTMLElement) =>
!hasEllipsis(target.querySelector(".text"));
</script>
<style lang="postcss" scoped>

View File

@@ -7,9 +7,9 @@
class="infra-item-label"
v-bind="$attrs"
>
<a :href="href" class="link" @click="navigate">
<a :href="href" class="link" @click="navigate" v-tooltip="hasTooltip">
<UiIcon :icon="icon" class="icon" />
<div class="text">
<div ref="textElement" class="text">
<slot />
</div>
</a>
@@ -22,7 +22,10 @@
<script lang="ts" setup>
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import type { IconDefinition } from "@fortawesome/fontawesome-common-types";
import { computed, ref } from "vue";
import type { RouteLocationRaw } from "vue-router";
defineProps<{
@@ -30,6 +33,9 @@ defineProps<{
route: RouteLocationRaw;
active?: boolean;
}>();
const textElement = ref<HTMLElement>();
const hasTooltip = computed(() => hasEllipsis(textElement.value));
</script>
<style lang="postcss" scoped>

View File

@@ -1,13 +1,5 @@
<template>
<li
v-if="vm !== undefined"
ref="rootElement"
v-tooltip="{
content: vm.name_label,
disabled: isTooltipDisabled,
}"
class="infra-vm-item"
>
<li v-if="vm !== undefined" ref="rootElement" class="infra-vm-item">
<InfraItemLabel
v-if="isVisible"
:icon="faDisplay"
@@ -27,8 +19,6 @@
import InfraAction from "@/components/infra/InfraAction.vue";
import InfraItemLabel from "@/components/infra/InfraItemLabel.vue";
import PowerStateIcon from "@/components/PowerStateIcon.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { useVmStore } from "@/stores/vm.store";
import { faDisplay } from "@fortawesome/free-solid-svg-icons";
import { useIntersectionObserver } from "@vueuse/core";
@@ -49,9 +39,6 @@ const { stop } = useIntersectionObserver(rootElement, ([entry]) => {
stop();
}
});
const isTooltipDisabled = (target: HTMLElement) =>
!hasEllipsis(target.querySelector(".text"));
</script>
<style lang="postcss" scoped>

View File

@@ -1,6 +1,6 @@
<template>
<slot :is-open="isOpen" :open="open" name="trigger" />
<Teleport to="body" :disabled="!isRoot || !slots.trigger">
<Teleport to="body" :disabled="!slots.trigger">
<ul
v-if="!$slots.trigger || isOpen"
ref="menu"
@@ -24,8 +24,11 @@ const props = defineProps<{
disabled?: boolean;
placement?: Options["placement"];
}>();
const isRoot = inject("isMenuRoot", true);
provide("isMenuRoot", false);
defineOptions({
inheritAttrs: false,
});
const slots = useSlots();
const isOpen = ref(false);
const menu = ref();

View File

@@ -1,13 +1,14 @@
<template>
<UiCard>
<UiCard :color="hasError ? 'error' : undefined">
<UiCardTitle>
{{ $t("cpu-provisioning") }}
<template #right>
<template v-if="!hasError" #right>
<!-- TODO: add a tooltip for the warning icon -->
<UiStatusIcon v-if="state !== 'success'" :state="state" />
</template>
</UiCardTitle>
<div v-if="isReady" :class="state" class="progress-item">
<NoDataError v-if="hasError" />
<div v-else-if="isReady" :class="state" class="progress-item">
<UiProgressBar :max-value="maxValue" :value="value" color="custom" />
<UiProgressScale :max-value="maxValue" :steps="1" unit="%" />
<UiProgressLegend :label="$t('vcpus')" :value="`${value}%`" />
@@ -22,20 +23,22 @@
</template>
</UiCardFooter>
</div>
<UiSpinner v-else class="spinner" />
<UiCardSpinner v-else />
</UiCard>
</template>
<script lang="ts" setup>
import NoDataError from "@/components/NoDataError.vue";
import UiStatusIcon from "@/components/ui/icon/UiStatusIcon.vue";
import UiProgressBar from "@/components/ui/progress/UiProgressBar.vue";
import UiProgressLegend from "@/components/ui/progress/UiProgressLegend.vue";
import UiProgressScale from "@/components/ui/progress/UiProgressScale.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiCardFooter from "@/components/ui/UiCardFooter.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import { percent } from "@/libs/utils";
import { POWER_STATE } from "@/libs/xen-api";
import { useHostMetricsStore } from "@/stores/host-metrics.store";
import { useHostStore } from "@/stores/host.store";
import { useVmMetricsStore } from "@/stores/vm-metrics.store";
@@ -43,13 +46,21 @@ import { useVmStore } from "@/stores/vm.store";
import { logicAnd } from "@vueuse/math";
import { computed } from "vue";
const ACTIVE_STATES = new Set(["Running", "Paused"]);
const ACTIVE_STATES = new Set([POWER_STATE.RUNNING, POWER_STATE.PAUSED]);
const { isReady: isHostStoreReady, runningHosts } = useHostStore().subscribe({
const {
hasError: hostStoreHasError,
isReady: isHostStoreReady,
runningHosts,
} = useHostStore().subscribe({
hostMetricsSubscription: useHostMetricsStore().subscribe(),
});
const { records: vms, isReady: isVmStoreReady } = useVmStore().subscribe();
const {
hasError: vmStoreHasError,
isReady: isVmStoreReady,
records: vms,
} = useVmStore().subscribe();
const { getByOpaqueRef: getVmMetrics, isReady: isVmMetricsStoreReady } =
useVmMetricsStore().subscribe();
@@ -84,6 +95,9 @@ const isReady = logicAnd(
isHostStoreReady,
isVmMetricsStoreReady
);
const hasError = computed(
() => hostStoreHasError.value || vmStoreHasError.value
);
</script>
<style lang="postcss" scoped>
@@ -102,12 +116,4 @@ const isReady = logicAnd(
color: var(--footer-value-color);
}
}
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: 2.6rem auto auto auto;
width: 40px;
height: 40px;
}
</style>

View File

@@ -2,7 +2,7 @@
<UiCard :color="hasError ? 'error' : undefined">
<UiCardTitle>{{ $t("status") }}</UiCardTitle>
<NoDataError v-if="hasError" />
<UiSpinner v-else-if="!isReady" class="spinner" />
<UiCardSpinner v-else-if="!isReady" />
<template v-else>
<PoolDashboardStatusItem
:active="activeHostsCount"
@@ -23,9 +23,9 @@
import NoDataError from "@/components/NoDataError.vue";
import PoolDashboardStatusItem from "@/components/pool/dashboard/PoolDashboardStatusItem.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
import UiSeparator from "@/components/ui/UiSeparator.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import { useHostMetricsStore } from "@/stores/host-metrics.store";
import { useVmStore } from "@/stores/vm.store";
import { computed } from "vue";
@@ -57,13 +57,3 @@ const totalVmsCount = computed(() => vms.value.length);
const activeVmsCount = computed(() => runningVms.value.length);
</script>
<style lang="postcss" scoped>
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: auto;
width: 40px;
height: 40px;
}
</style>

View File

@@ -1,5 +1,5 @@
<template>
<UiTable class="tasks-table">
<UiTable class="tasks-table" :color="hasError ? 'error' : undefined">
<thead>
<tr>
<th>{{ $t("name") }}</th>
@@ -10,13 +10,25 @@
</tr>
</thead>
<tbody>
<TaskRow
v-for="task in pendingTasks"
:key="task.uuid"
:task="task"
is-pending
/>
<TaskRow v-for="task in finishedTasks" :key="task.uuid" :task="task" />
<tr v-if="hasError">
<td colspan="5">
<span class="text-error">{{ $t("error-no-data") }}</span>
</td>
</tr>
<tr v-else-if="isFetching">
<td colspan="5">
<UiSpinner class="loader" />
</td>
</tr>
<template v-else>
<TaskRow
v-for="task in pendingTasks"
:key="task.uuid"
:task="task"
is-pending
/>
<TaskRow v-for="task in finishedTasks" :key="task.uuid" :task="task" />
</template>
</tbody>
</UiTable>
</template>
@@ -24,12 +36,34 @@
<script lang="ts" setup>
import TaskRow from "@/components/tasks/TaskRow.vue";
import UiTable from "@/components/ui/UiTable.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import { useTaskStore } from "@/stores/task.store";
import type { XenApiTask } from "@/libs/xen-api";
defineProps<{
pendingTasks: XenApiTask[];
finishedTasks: XenApiTask[];
}>();
const { hasError, isFetching } = useTaskStore().subscribe();
</script>
<style lang="postcss" scoped></style>
<style lang="postcss" scoped>
td[colspan="5"] {
text-align: center;
}
.text-error {
font-weight: 700;
font-size: 16px;
line-height: 150%;
color: var(--color-red-vates-base);
}
.loader {
color: var(--color-extra-blue-base);
display: block;
font-size: 4rem;
margin: 2rem auto 0;
}
</style>

View File

@@ -16,6 +16,7 @@ defineProps<{
<style lang="postcss" scoped>
.ui-badge {
white-space: nowrap;
display: inline-flex;
align-items: center;
gap: 0.4rem;

View File

@@ -12,7 +12,6 @@ defineProps<{
<style lang="postcss" scoped>
.ui-card {
height: fit-content;
padding: 2.1rem;
border-radius: 0.8rem;
background-color: var(--background-color-primary);

View File

@@ -0,0 +1,30 @@
<template>
<UiCard class="ui-card-coming-soon">
<UiCardTitle>{{ title }}</UiCardTitle>
<div class="content">
<img alt="" src="@/assets/under-construction.svg" />
</div>
<div class="content">Coming soon</div>
</UiCard>
</template>
<script setup lang="ts">
import UiCard from "@/components/ui/UiCard.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
defineProps<{
title: string;
}>();
</script>
<style scoped lang="postcss">
.ui-card-coming-soon {
display: flex;
flex-direction: column;
}
.content {
padding: 1rem 0;
text-align: center;
}
</style>

View File

@@ -0,0 +1,28 @@
<template>
<div :class="{ vertical }" class="ui-card-group">
<slot />
</div>
</template>
<script lang="ts" setup>
import { inject, provide } from "vue";
const vertical = inject("isCardGroupVertical", false);
provide("isCardGroupVertical", !vertical);
</script>
<style lang="postcss" scoped>
.ui-card-group {
display: flex;
gap: 1rem;
flex-direction: column;
flex: 1;
}
@media (min-width: 1500px) {
.ui-card-group:not(.vertical) {
flex-direction: row;
}
}
</style>
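The direction flip works through provide/inject: each `UiCardGroup` reads its parent's orientation and provides the inverse to its own children, so nested groups alternate row/column automatically. The same pattern as a render-function sketch (illustrative, not the SFC above):

```typescript
import { defineComponent, h, inject, provide } from "vue";

// Each level inverts the flag it received, so groups nest
// horizontal -> vertical -> horizontal -> ...
export const CardGroupSketch = defineComponent({
  setup(_, { slots }) {
    const vertical = inject("isCardGroupVertical", false);
    provide("isCardGroupVertical", !vertical);
    return () =>
      h("div", { class: ["ui-card-group", { vertical }] }, slots.default?.());
  },
});
```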

View File

@@ -0,0 +1,23 @@
<template>
<div class="ui-card-spinner">
<UiSpinner class="spinner" />
</div>
</template>
<script lang="ts" setup>
import UiSpinner from "@/components/ui/UiSpinner.vue";
</script>
<style lang="postcss" scoped>
.ui-card-spinner {
display: flex;
align-items: center;
justify-content: center;
padding: 4rem 0;
}
.spinner {
color: var(--color-extra-blue-base);
font-size: 4rem;
}
</style>

View File

@@ -1,11 +1,15 @@
<template>
<table :class="{ 'vertical-border': verticalBorder }" class="ui-table">
<table
:class="{ 'vertical-border': verticalBorder, error: color === 'error' }"
class="ui-table"
>
<slot />
</table>
</template>
<script lang="ts" setup>
defineProps<{
color?: "error";
verticalBorder?: boolean;
}>();
</script>
@@ -52,4 +56,8 @@ defineProps<{
}
}
}
.error {
background-color: var(--background-color-red-vates);
}
</style>

View File

@@ -1,7 +1,13 @@
<template>
<div class="legend">
<span class="circle" />
<slot name="label">{{ label }}</slot>
<template v-if="$slots.label || label">
<span class="circle" />
<div class="label-container">
<div ref="labelElement" v-tooltip="isTooltipEnabled" class="label">
<slot name="label">{{ label }}</slot>
</div>
</div>
</template>
<UiBadge class="badge">
<slot name="value">{{ value }}</slot>
</UiBadge>
@@ -10,14 +16,23 @@
<script lang="ts" setup>
import UiBadge from "@/components/ui/UiBadge.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { computed, ref } from "vue";
defineProps<{
label?: string;
value?: string;
}>();
const labelElement = ref<HTMLElement>();
const isTooltipEnabled = computed(() =>
hasEllipsis(labelElement.value, { vertical: true })
);
</script>
<style scoped lang="postcss">
<style lang="postcss" scoped>
.badge {
font-size: 0.9em;
font-weight: 700;
@@ -25,8 +40,8 @@ defineProps<{
.circle {
display: inline-block;
width: 1rem;
height: 1rem;
min-width: 1rem;
min-height: 1rem;
border-radius: 0.5rem;
background-color: var(--progress-bar-color);
}
@@ -38,4 +53,14 @@ defineProps<{
gap: 0.5rem;
margin: 1.6em 0;
}
.label-container {
overflow: hidden;
}
.label {
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
}
</style>
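The `hasEllipsis` helper used above (here with its `{ vertical: true }` option) isn't shown in this diff; a plausible sketch of what such a check does (an assumption, not the actual `@/libs/utils` code):

```typescript
// Plausible sketch: an element is ellipsized when its content overflows
// the box it is rendered in; `vertical` matches the line-clamped label above.
function hasEllipsis(
  element: HTMLElement | undefined,
  { vertical = false }: { vertical?: boolean } = {}
): boolean {
  if (element == null) {
    return false;
  }
  return vertical
    ? element.scrollHeight > element.clientHeight
    : element.scrollWidth > element.clientWidth;
}
```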

Some files were not shown because too many files have changed in this diff.