Compare commits


130 Commits

Author SHA1 Message Date
Thierry
fb02eb3486 feat(lite/xapi-subscriptions): add an immediate option + TypeScript enhancement
- `subscribe({ immediate: false })` allows deferring the subscription
- Extracted typing to its own file
- Enhanced `subscribe` signature with overloading
- Enhanced Host/VM stores `subscribe` signature and typing
2023-05-31 15:20:32 +02:00
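A minimal sketch of what the `immediate` option can do. This is a hypothetical store, not the actual xapi-subscriptions implementation; `createSubscription`, `isStarted`, and the counter are illustrative names only:

```javascript
'use strict'
// Hypothetical sketch: with `immediate: false` the subscription does not
// fetch right away; the caller triggers it later via start().
function createSubscription(fetch) {
  let started = false
  const start = () => {
    if (!started) {
      started = true
      fetch()
    }
  }
  return {
    subscribe({ immediate = true } = {}) {
      if (immediate) start()
      // a deferred subscription exposes start() to begin fetching later
      return { start, isStarted: () => started }
    },
  }
}

let fetches = 0
const store = createSubscription(() => fetches++)
const deferred = store.subscribe({ immediate: false })
console.log(fetches) // 0: nothing fetched yet
deferred.start()
console.log(fetches) // 1
```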
Julien Fontanet
26131917e3 feat(xo-web): 5.119.1 2023-05-31 11:22:12 +02:00
Mathieu
44a0ab6d0a fix(xo-web/overview): fix isMirrorBackup is not defined (#6870) 2023-05-31 11:06:03 +02:00
Julien Fontanet
2b8b033ad7 feat: technical release 2023-05-31 09:51:53 +02:00
Julien Fontanet
3ee0b3e7df feat(xo-web): 5.119.0 2023-05-31 09:47:42 +02:00
Julien Fontanet
927a55ab30 feat(xo-server): 5.116.0 2023-05-31 09:46:41 +02:00
Julien Fontanet
b70721cb60 feat(@xen-orchestra/proxy): 0.26.25 2023-05-31 09:44:14 +02:00
Julien Fontanet
f71c820f15 feat(@xen-orchestra/backups-cli): 1.0.8 2023-05-31 09:43:59 +02:00
Julien Fontanet
74e0405a5e feat(@xen-orchestra/backups): 0.38.0 2023-05-31 09:40:48 +02:00
Julien Fontanet
79b55ba30a feat(vhd-lib): 4.5.0 2023-05-31 09:36:01 +02:00
Mathieu
ee0adaebc5 feat(xo-web/backup): UI mirror backup implementation (#6858)
See #6854
2023-05-31 09:12:46 +02:00
Julien Fontanet
83c5c976e3 feat(xo-server/rest-api): limit patches listing and RPU (#6864)
Same restriction as in the UI.
2023-05-31 08:49:32 +02:00
Julien Fontanet
18bd2c607e feat(xo-server/backupNg.checkBackup): add basic XO task 2023-05-30 16:51:43 +02:00
Julien Fontanet
e2695ce327 fix(xo-server/clearHost): explicit message on missing migration network
Fixes zammad#14882
2023-05-30 16:50:50 +02:00
Florent BEAUCHAMP
3f316fcaea fix(backups): handles task end in CR without health check (#6866) 2023-05-30 16:06:23 +02:00
Florent BEAUCHAMP
8b7b162c76 feat(backups): implement mirror backup 2023-05-30 15:21:53 +02:00
Florent BEAUCHAMP
aa36629def refactor(backup/writers): pass the vm and snapshot in transfer/run 2023-05-30 15:21:53 +02:00
Pierre Donias
ca345bd6d8 feat(xo-web/task): action to open task REST API URL (#6869) 2023-05-30 14:19:50 +02:00
Florent BEAUCHAMP
61324d10f9 fix(xo-web): VHD directory tooltip (#6865) 2023-05-30 09:27:24 +02:00
Pierre Donias
92fd92ae63 feat(xo-web): XO Tasks (#6861) 2023-05-30 09:20:51 +02:00
Julien Fontanet
e48bfa2c88 feat: technical release 2023-05-26 16:50:04 +02:00
Julien Fontanet
cd5762fa19 feat(xo-web): 5.118.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
71f7a6cd6c feat(xo-server): 5.115.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8cade8b7a feat(xo-cli): 0.19.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
696c6f13f0 feat(vhd-cli): 0.9.3 2023-05-26 16:38:38 +02:00
Julien Fontanet
b8d923d3ba feat(xo-vmdk-to-vhd): 2.5.5 2023-05-26 16:38:38 +02:00
Julien Fontanet
1a96c1bf0f feat(@xen-orchestra/proxy): 0.26.24 2023-05-26 16:38:38 +02:00
Julien Fontanet
14a01d0141 feat(@xen-orchestra/mixins): 0.10.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
74a2a4d2e5 feat(@xen-orchestra/backups-cli): 1.0.7 2023-05-26 16:38:38 +02:00
Julien Fontanet
b13b44cfd0 feat(@xen-orchestra/backups): 0.37.0 2023-05-26 16:38:38 +02:00
Julien Fontanet
50a164423a feat(@xen-orchestra/xapi): 2.2.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
a40d50a3bd feat(vhd-lib): 4.4.1 2023-05-26 16:38:38 +02:00
Julien Fontanet
529e33140a feat(@xen-orchestra/fs): 4.0.0 2023-05-26 16:38:38 +02:00
Mathieu
132b1a41db fix(xo-web/host-item): display alert in host-item for host inconsistent time (#6833)
See xoa-support#14626
Introduced by aadc1bb84c
2023-05-26 16:17:04 +02:00
Julien Fontanet
75948b2977 feat(xo-server/rest-api): endpoints to list pools/hosts missing patches 2023-05-26 16:11:11 +02:00
Gabriel Gunullu
eb84d4a7ef feat(xo-web/kubernetes): add number of cp choice (#6809)
See xoa#120
2023-05-26 16:08:11 +02:00
Julien Fontanet
1816d0240e refactor(fs): separate internal and public interfaces
Public interfaces may be decorated with behaviors (e.g. concurrency limits, path rewriting), which
makes them unsuitable for being called from inside the class or its children.

Internal interfaces are now prefixed with `__`.
2023-05-26 15:32:56 +02:00
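A hedged sketch of the pattern this refactor describes. The class and method names below are hypothetical, not the actual @xen-orchestra/fs code; a simple call counter stands in for the decorating behavior:

```javascript
'use strict'
// Sketch: the public method carries the decorated behavior (here, a call
// counter standing in for concurrency limiting or path rewriting), while
// internal code calls the undecorated __ version to avoid applying the
// behavior twice.
class Handler {
  decoratedCalls = 0

  __readFile(path) {
    return `contents of ${path}`
  }

  // public interface: the behavior is applied here only
  readFile(path) {
    this.decoratedCalls++
    return this.__readFile(path)
  }

  copyFile(from, to) {
    // internal call: goes through __readFile, bypassing the decoration
    return `${to} <- ${this.__readFile(from)}`
  }
}

const h = new Handler()
h.readFile('/a')
h.copyFile('/a', '/b')
console.log(h.decoratedCalls) // 1: copyFile never hit the decorated path
```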
Julien Fontanet
2c6d36b63e refactor(fs): use private fields where appropriate 2023-05-26 15:32:56 +02:00
Mathieu
d9776ae8ed fix(xo-web): fix various 'an error has occurred' (#6848)
See xoa-support#14631
2023-05-26 14:45:29 +02:00
Florent BEAUCHAMP
b456394663 refactor(backups): extract method forkDeltaExport 2023-05-26 13:01:15 +02:00
Florent BEAUCHAMP
94f599bdbd refactor(backups/RemoteAdapter): extract method listAllVms 2023-05-26 13:01:08 +02:00
Florent BEAUCHAMP
d466ca143a refactor(backups/runner): Vms -> VmsXapi 2023-05-26 12:48:56 +02:00
Florent BEAUCHAMP
78ed85a49f feat(backups): add ability to read only one delta instead of the full chain 2023-05-26 12:47:42 +02:00
Florent BEAUCHAMP
c24e7f9ecd refactor(backup/remoteAdapter): readDeltaVmBackup -> readIncrementalVmBackup 2023-05-26 12:24:56 +02:00
Mathieu
98caa89625 feat(xo-web/self): add default tags for self service users (#6810)
See #6812

Add default tags for Self Service users.
2023-05-26 11:45:05 +02:00
Pierre Donias
8e176eadb1 fix(xo-web): show Suse icon when distro name is opensuse (#6852)
See #6676
See #6746
See https://xcp-ng.org/forum/topic/6965
2023-05-26 09:24:30 +02:00
Julien Fontanet
444268406f fix(mixins/Tasks): update updatedAt when marking tasks as interrupted 2023-05-25 16:06:09 +02:00
Thierry Goettelmann
7e062977d0 feat(lite/component): add new Vue component UiCardSpinner (#6806)
`UiSpinner` is often used to add a spinner inside an `UiCard`, applying similar
styles. This `UiCardSpinner` component creates a homogeneous spinner to use in
these cases.
2023-05-25 14:00:23 +02:00
Mathieu
f4bf56f159 feat(xo-web/self): ability to share VMs by default (#6838)
See xoa-support#7420
2023-05-25 11:00:04 +02:00
Julien Fontanet
9f3b020361 fix(xo-server): create collection after connected to Redis
Introduced by 36b94f745

Redis is now connected in `start core` hook and should not be used before.

Some minor initialization steps (namespace and version registration) were failing silently before
this fix.
2023-05-24 17:40:20 +02:00
Julien Fontanet
ef35021a44 chore(backups,xo-server): use extractOpaqueRef from @xen-orchestra/xapi
Instead of custom implementations.
2023-05-24 12:09:42 +02:00
Julien Fontanet
b74ebd050a feat(xapi/extractOpaqueRef): expose it publicly 2023-05-24 12:07:54 +02:00
Julien Fontanet
8a16d6aa3b feat(xapi/extractOpaqueRef): add searched string to error
Helps debugging.
2023-05-24 12:07:22 +02:00
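A sketch of what an `extractOpaqueRef` helper with the searched string in its error can look like. This is a hypothetical implementation for illustration, not the actual @xen-orchestra/xapi code:

```javascript
'use strict'
// Hypothetical sketch: embedding the searched input in the error message
// turns an opaque "no ref found" failure into something actionable.
const OPAQUE_REF_RE = /OpaqueRef:[0-9a-z-]+/

function extractOpaqueRef(str) {
  const matches = OPAQUE_REF_RE.exec(str)
  if (matches === null) {
    throw new Error('no opaque ref found in ' + str)
  }
  return matches[0]
}

console.log(extractOpaqueRef('<value>OpaqueRef:deadbeef-0000</value>'))
// → OpaqueRef:deadbeef-0000
```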
Julien Fontanet
cf7393992c chore(xapi/extractOpaqueRef): named function for better stacktraces 2023-05-24 12:05:56 +02:00
Thierry Goettelmann
c576114dad feat(lite): new FormInputGroup component (#6740) 2023-05-23 16:58:39 +02:00
Julien Fontanet
deeb399046 feat(xo-server/rest-api): rolling_update pool action 2023-05-23 15:35:32 +02:00
Julien Fontanet
9cf8f8f492 chore(xo-server/rest-api): also pass xoObject to actions 2023-05-23 15:35:32 +02:00
Julien Fontanet
28b7e99ebc chore(xo-server): move RPU logic from API layer to XenServers mixin 2023-05-23 15:35:32 +02:00
rbarhtaoui
0ba729e5b9 feat(lite/pool/dashboard): display error message when data is not fetched (#6776) 2023-05-23 14:40:43 +02:00
Florent BEAUCHAMP
ac8c146cf7 refactor(backups): separate full and incremental VM runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
2ba437be31 refactor(backups): separate VMs and metadata runners 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
bd8bb73309 refactor(backups): move Runner, VmBackup, writers and specific method to a private folder 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
485c2f4669 refactor(backups/Backup.createRunner): factory
BREAKING CHANGE: Backup can no longer be instantiated directly.
2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
6fb562d92f refactor(backups/Backup): extract getAdaptersByRemote, RemoteTimeoutError and runTasks 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
85efdcf7b9 refactor(backups/_incrementalVm): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
fc1357d5d6 refactor(backups): _deltaVm → _incrementalVm 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
88b015bda4 refactor(backups/writers) : replication → xapi 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
b46f76cccf refactor(backups/writers): backup → remote 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
c3bb2185c2 refactor(backups/writers): delta → incremental 2023-05-23 09:27:47 +02:00
Florent BEAUCHAMP
a240853fe0 refactor(backups/_VmBackup): delta → incremental 2023-05-23 09:27:47 +02:00
Thierry Goettelmann
d7ce609940 chore(lite): upgrade dependencies (#6843) 2023-05-22 10:41:39 +02:00
Florent BEAUCHAMP
1b0ec9839e fix(xo-server): import OVA with broken VMDK size in metadata (#6824)
OVAs generated by Oracle virtualization servers seem to store the VMDK file size
instead of the disk size in the metadata.

This caused the transfer to fail when the import tried to write data past the size
of the VMDK: for example, a 50 GB disk may produce a 10 GB VMDK, and the import then
failed when it reached data in the 10-50 GB range.
2023-05-22 10:20:04 +02:00
Julien Fontanet
77b166bb3b chore: update dev deps 2023-05-22 10:01:54 +02:00
Julien Fontanet
76bd54d7de chore: update dev deps 2023-05-17 14:48:41 +02:00
Julien Fontanet
684282f0a4 fix(mixins/Tasks): correctly serialize errors 2023-05-17 11:29:28 +02:00
Julien Fontanet
2459f46c19 feat(xo-cli rest): accept query string in path
Example:
```
xo-cli rest post vms/<uuid>/actions/snapshot?sync
```
2023-05-17 11:27:29 +02:00
Julien Fontanet
5f0466e4d8 feat: release 5.82.2 2023-05-17 10:05:11 +02:00
Gabriel Gunullu
3738edfa83 test(@xen-orchestra/fs): from Jest to test (#6820) 2023-05-17 09:54:51 +02:00
Julien Fontanet
769e27e2cb feat: technical release 2023-05-16 16:32:33 +02:00
Julien Fontanet
8ec5461338 feat(xo-server): 5.114.2 2023-05-16 16:31:54 +02:00
Julien Fontanet
4a2843cb67 feat(@xen-orchestra/proxy): 0.26.23 2023-05-16 16:31:33 +02:00
Julien Fontanet
a0e69a79ab feat(xen-api): 1.3.1 2023-05-16 16:30:54 +02:00
Roni Väyrynen
3da94f18df docs(installation): add findmnt command to sudoers config example (#6835) 2023-05-16 15:20:47 +02:00
Mathieu
17cb59b898 feat(xo-web/host-item): display warning for when HVM disabled (#6834) 2023-05-16 14:58:14 +02:00
Mathieu
315e5c9289 feat(xo-web/proxy): make proxy address editable (#6816) 2023-05-16 12:12:31 +02:00
Julien Fontanet
01ba10fedb fix(xen-api/putResource): really fix (302) redirection with non-stream body
Replaces the incorrect fix in 87e6f7fde

Introduced by ab96c549a

Fixes zammad#13375
Fixes zammad#13952
Fixes zammad#14001
2023-05-15 16:23:18 +02:00
Mathieu
13e7594560 fix(xo-web/SortedTable): handle pending state for collapsed actions (#6831) 2023-05-15 15:27:17 +02:00
Thierry Goettelmann
f9ac2ac84d feat(lite/tooltips): enhance and simplify tooltips (#6760)
- Removed the `disabled` option.
- The tooltip is now disabled when content is an empty string or `false`.
- If content is `true` or `undefined`, it will be extracted from element's `innerText`.
- Moved `v-tooltip` from `InfraHostItem` and `InfraVmItem` to `InfraItemLabel`.
2023-05-15 11:55:43 +02:00
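The content rules listed above can be sketched as a small resolver. This is a hypothetical helper for illustration, not the actual xen-orchestra-lite `v-tooltip` code:

```javascript
'use strict'
// Hedged sketch of the tooltip content rules described in the commit:
// empty string or `false` disables the tooltip; `true` or `undefined`
// falls back to the element's innerText; anything else is used as-is.
function resolveTooltipContent(content, el) {
  if (content === '' || content === false) {
    return undefined // tooltip disabled
  }
  if (content === true || content === undefined) {
    return el.innerText
  }
  return content
}

const el = { innerText: 'Host 1' }
console.log(resolveTooltipContent(undefined, el)) // 'Host 1'
console.log(resolveTooltipContent('', el)) // undefined (disabled)
console.log(resolveTooltipContent('Custom', el)) // 'Custom'
```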
Thierry Goettelmann
09cfac1111 feat(lite): enhance Component Story skeleton generator (#6753)
- Updated form to use our own components
- Added a warning for props whose type cannot be extracted
- Fixed setting name for scopes containing a dash
- Handled cases when a prop can be multiple types
- Better guess of prop type
- Remove `.widget()` for `.model()`
- Remove `.event('update:modelValue')` for `.model()`
2023-05-15 11:23:42 +02:00
Thierry Goettelmann
008f7a30fd feat(lite): add VM tab bar (#6766) 2023-05-15 11:15:52 +02:00
Thierry Goettelmann
ff65dbcba7 feat(lite): extract and update "unreachable hosts modal" (#6745)
Extracted the unreachable hosts modal to its own component and moved the subtitle to the description.

Refer to #6744 for final design.
2023-05-15 11:11:19 +02:00
ggunullu
264a0d1678 fix(@vates/nbd-client): add custom coverage threshold to tap test
By default, Tap requires 100% coverage of all lines, branches, functions and statements.
We enforce a custom threshold to match the current state of the code and avoid regressions.

See https://github.com/vatesfr/xen-orchestra/actions/runs/4956232764/jobs/8866437368
2023-05-15 10:18:02 +02:00
ggunullu
7dcaf454ed fix(eslint): treat *.integ.js as test files
Introduced by 3f73138fc3
2023-05-15 10:18:02 +02:00
Julien Fontanet
17b2756291 feat: release 5.82.1 2023-05-12 16:47:21 +02:00
Julien Fontanet
57e48b5d34 feat: technical release 2023-05-12 15:40:38 +02:00
Julien Fontanet
57ed984e5a feat(xo-web): 5.117.1 2023-05-12 15:40:16 +02:00
Julien Fontanet
100122f388 feat(xo-server): 5.114.1 2023-05-12 15:39:36 +02:00
Julien Fontanet
12d4b3396e feat(@xen-orchestra/proxy): 0.26.22 2023-05-12 15:39:16 +02:00
Julien Fontanet
ab35c710cb feat(@xen-orchestra/backups): 0.36.1 2023-05-12 15:38:46 +02:00
Florent BEAUCHAMP
4bd5b38aeb fix(backups): fix health check task during CR (#6830)
Fixes https://xcp-ng.org/forum/post/62073

`healthCheck` is launched after `cleanVm`, therefore it should be closing the parent task, not `cleanVm`.
2023-05-12 10:45:32 +02:00
Julien Fontanet
836db1b807 fix(xo-web/new/network): correct type for vlan (#6829)
BREAKING CHANGE: API method `network.create` no longer accepts a `string` for `vlan` param.

Fixes https://xcp-ng.org/forum/post/62090

Either `number` or `undefined`, not an empty string.
2023-05-12 10:36:59 +02:00
Julien Fontanet
73d88cc5f1 fix(xo-server/vm.convertToTemplate): handle VBD_IS_EMPTY (#6808)
Fixes https://xcp-ng.org/forum/post/61653
2023-05-12 09:12:41 +02:00
Julien Fontanet
3def66d968 chore(xo-vmdk-to-vhd): move notes.md to docs/
So that it will be correctly ignored when publishing the package.
2023-05-12 09:10:00 +02:00
Gabriel Gunullu
3f73138fc3 fix(test-integration): run integration tests only in ci (#6826)
Fixes issues introduced by

- be6233f
- adc5e7d

After switching from Jest to Tap/Test, those tests were no longer executed by the test-integration script.
2023-05-11 17:47:48 +02:00
Julien Fontanet
bfe621a21d feat: technical release 2023-05-11 14:35:15 +02:00
Julien Fontanet
32fa792eeb feat(xo-web): 5.117.0 2023-05-11 14:23:02 +02:00
Julien Fontanet
a833050fc2 feat(xo-server): 5.114.0 2023-05-11 14:17:40 +02:00
Julien Fontanet
e7e6294bc3 feat(xo-vmdk-to-vhd): 2.5.4 2023-05-11 14:09:23 +02:00
Julien Fontanet
7c71884e27 feat(@vates/task): 0.1.2 2023-05-11 14:03:57 +02:00
Florent BEAUCHAMP
3e822044f2 fix(xo-vmdk-to-vhd): wait for OVA stream to be written before reading more data (#6800) 2023-05-11 12:23:06 +02:00
Julien Fontanet
d457f5fca4 chore(xo-server): use Task.run() helper 2023-05-11 11:10:00 +02:00
Julien Fontanet
1837e01719 fix(xo-server): new Task() now expects data instead of name option
Introduced by 036f3f6bd
2023-05-11 11:08:31 +02:00
Julien Fontanet
f17f5abf0f fix(xo-server/pif.reconfigureIp): accepts empty strings for dns, gateway, ip and netmask params 2023-05-11 09:08:05 +02:00
Florent BEAUCHAMP
82c229c755 fix(xo-server): better handling of importing running VM from ESXi (#6825)
Fixes https://xcp-ng.org/forum/post/59879

Fixes `Cannot read properties of undefined (reading 'stream')` error message
2023-05-10 18:25:37 +02:00
Julien Fontanet
c7e3ba3184 feat(xo-web/plugins): names can be clicked to filter out other plugins 2023-05-10 17:40:11 +02:00
Thierry Goettelmann
470c9bb6c8 fix(lite): handle escape key on CollectionFilter and CollectionSorter modals (#6822)
UiModal `@close` event was not defined on `CollectionFilter` and `CollectionSorter` modals.
2023-05-10 14:44:30 +02:00
Thierry Goettelmann
bb3ab20b2a fix(lite): typo in component name (#6821) 2023-05-10 10:11:06 +02:00
Julien Fontanet
90ce1c4d1e test(task/combineEvents): initial unit tests 2023-05-09 15:16:41 +02:00
Julien Fontanet
5c436f3870 fix(task/combineEvents): defineProperty → defineProperties
Fixes zammad#14566
2023-05-09 15:12:12 +02:00
Mathieu
159339625d feat(xo-server/vm.create): add resourceSet tags to created VM (#6812) 2023-05-09 14:33:59 +02:00
Julien Fontanet
87e6f7fded fix(xen-api/putResource): fix (302) redirection with non-stream body
Fixes zammad#13375
Fixes zammad#13952
Fixes zammad#14001
2023-05-09 14:09:33 +02:00
Pierre Donias
fd2c7c2fc3 fix(CHANGELOG): fix version number (#6805) 2023-04-28 14:52:44 +02:00
Mathieu
7fc76c1df4 feat: release 5.82 (#6804) 2023-04-28 14:32:01 +02:00
Mathieu
f2758d036d feat: technical release (#6803) 2023-04-28 13:28:15 +02:00
Pierre Donias
ac670da793 fix(xo-web/host/smart reboot): XOA Premium only (#6801)
See #6795
2023-04-28 11:15:28 +02:00
Mathieu
c0465eb4d9 feat: technical release (#6799) 2023-04-27 15:12:42 +02:00
Gabriel Gunullu
cea55b03e5 feat(xo-web/kubernetes): add high availability option (#6794)
See xoa#117
2023-04-27 13:54:58 +02:00
Julien Fontanet
d78d802066 fix(xo-server/rest-api): list tasks in the root collection
Introduced by 9e60c5375
2023-04-27 09:20:12 +02:00
Florent BEAUCHAMP
a562c74492 feat(backups/health check): support custom checks via XenStore (#6784) 2023-04-27 09:02:00 +02:00
Julien Fontanet
d1f2e0a84b fix(task): fix start event and add unit tests
Introduced by 6ea671a43
2023-04-26 17:29:35 +02:00
241 changed files with 7037 additions and 3960 deletions

View File

@@ -28,7 +28,7 @@ module.exports = {
},
},
{
-      files: ['*.{spec,test}.{,c,m}js'],
+      files: ['*.{integ,spec,test}.{,c,m}js'],
rules: {
'n/no-unpublished-require': 'off',
'n/no-unpublished-import': 'off',
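The updated glob above brace-expands to nine suffix combinations; enumerating them shows that `*.integ.js` files are now treated as test files. A small illustrative expansion (not how ESLint matches internally):

```javascript
'use strict'
// *.{integ,spec,test}.{,c,m}js expands to 3 kinds x 3 extensions = 9 suffixes
const kinds = ['integ', 'spec', 'test']
const exts = ['', 'c', 'm']
const suffixes = kinds.flatMap(kind => exts.map(ext => `.${kind}.${ext}js`))

console.log(suffixes.length) // 9
console.log(suffixes.includes('.integ.js')) // true: now covered by the rule
```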

View File

@@ -21,7 +21,7 @@
"fuse-native": "^2.2.6",
"lru-cache": "^7.14.0",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.4.0"
"vhd-lib": "^4.5.0"
},
"scripts": {
"postversion": "npm publish --access public"

View File

@@ -23,7 +23,7 @@
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"promise-toolbox": "^0.21.0",
"xen-api": "^1.3.0"
"xen-api": "^1.3.1"
},
"devDependencies": {
"tap": "^16.3.0",
@@ -31,6 +31,6 @@
},
"scripts": {
"postversion": "npm publish --access public",
"test-integration": "tap *.spec.js"
"test-integration": "tap --lines 70 --functions 36 --branches 54 --statements 69 *.integ.js"
}
}

View File

@@ -48,7 +48,7 @@ exports.makeOnProgress = function ({ onRootTaskEnd = noop, onRootTaskStart = noo
assert.notEqual(parent, undefined)
// inject a (non-enumerable) reference to the parent and the root task
-      Object.defineProperty(taskLog, { $parent: { value: parent }, $root: { value: parent.$root } })
+      Object.defineProperties(taskLog, { $parent: { value: parent }, $root: { value: parent.$root } })
;(parent.tasks ?? (parent.tasks = [])).push(taskLog)
}
} else {
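Why this one-word fix matters: `Object.defineProperty` expects `(object, propertyKey, descriptor)`, so passing a map of descriptors as the second argument never defines `$parent` or `$root`; `Object.defineProperties` takes the map directly. A minimal reproduction with hypothetical values:

```javascript
'use strict'
const parent = { $root: 'root' }
const taskLog = {}

// defineProperties accepts a map of property descriptors; the values are
// non-enumerable by default, so they stay out of JSON/Object.keys output.
Object.defineProperties(taskLog, {
  $parent: { value: parent },
  $root: { value: parent.$root },
})

console.log(taskLog.$parent === parent) // true
console.log(Object.keys(taskLog)) // []: the references are non-enumerable
```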

View File

@@ -0,0 +1,67 @@
'use strict'
const assert = require('node:assert').strict
const { describe, it } = require('test')
const { makeOnProgress } = require('./combineEvents.js')
const { Task } = require('./index.js')
describe('makeOnProgress()', function () {
it('works', async function () {
const events = []
let log
const task = new Task({
data: { name: 'task' },
onProgress: makeOnProgress({
onRootTaskStart(log_) {
assert.equal(log, undefined)
log = log_
events.push('onRootTaskStart')
},
onRootTaskEnd(log_) {
assert.equal(log_, log)
events.push('onRootTaskEnd')
},
onTaskUpdate(log_) {
assert.equal(log_.$root, log)
events.push('onTaskUpdate')
},
}),
})
assert.equal(events.length, 0)
await task.run(async () => {
assert.equal(events[0], 'onRootTaskStart')
assert.equal(events[1], 'onTaskUpdate')
assert.equal(log.name, 'task')
Task.set('progress', 0)
assert.equal(events[2], 'onTaskUpdate')
assert.equal(log.properties.progress, 0)
Task.info('foo', {})
assert.equal(events[3], 'onTaskUpdate')
assert.deepEqual(log.infos, [{ data: {}, message: 'foo' }])
await Task.run({ data: { name: 'subtask' } }, () => {
assert.equal(events[4], 'onTaskUpdate')
assert.equal(log.tasks[0].name, 'subtask')
Task.warning('bar', {})
assert.equal(events[5], 'onTaskUpdate')
assert.deepEqual(log.tasks[0].warnings, [{ data: {}, message: 'bar' }])
})
assert.equal(events[6], 'onTaskUpdate')
assert.equal(log.tasks[0].status, 'success')
Task.set('progress', 100)
assert.equal(events[7], 'onTaskUpdate')
assert.equal(log.properties.progress, 100)
})
assert.equal(events[8], 'onRootTaskEnd')
assert.equal(events[9], 'onTaskUpdate')
assert.equal(log.status, 'success')
})
})

View File

@@ -83,7 +83,7 @@ exports.Task = class Task {
return this.#status
}
-  constructor({ data = {}, onProgress }) {
+  constructor({ data = {}, onProgress } = {}) {
this.#startData = data
if (onProgress !== undefined) {
@@ -106,6 +106,8 @@ exports.Task = class Task {
const { signal } = this.#abortController
signal.addEventListener('abort', () => {
if (this.status === PENDING && !this.#running) {
+        this.#maybeStart()
const status = ABORTED
this.#status = status
this.#emit('end', { result: signal.reason, status })
@@ -118,16 +120,18 @@ exports.Task = class Task {
}
#emit(type, data) {
data.id = this.id
data.timestamp = Date.now()
data.type = type
this.#onProgress(data)
}
#maybeStart() {
const startData = this.#startData
if (startData !== undefined) {
this.#startData = undefined
this.#emit('start', startData)
}
data.id = this.id
data.timestamp = Date.now()
data.type = type
this.#onProgress(data)
}
async run(fn) {
@@ -145,6 +149,8 @@ exports.Task = class Task {
assert.equal(this.#running, false)
this.#running = true
+    this.#maybeStart()
try {
const result = await asyncStorage.run(this, fn)
this.#running = false
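The `= {}` default added to the constructor is easy to miss. A minimal reproduction of what it changes (hypothetical function names):

```javascript
'use strict'
// Without a default for the whole options object, destructuring throws
// when the caller passes no argument at all.
function withoutDefault({ data = {}, onProgress }) {
  return data
}
function withDefault({ data = {}, onProgress } = {}) {
  return data
}

let threw = false
try {
  withoutDefault() // TypeError: cannot destructure properties of undefined
} catch (error) {
  threw = true
}
console.log(threw) // true
console.log(withDefault()) // {}: calling with no arguments now works
```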

@vates/task/index.test.js (new file, 341 lines)
View File

@@ -0,0 +1,341 @@
'use strict'
const assert = require('node:assert').strict
const { describe, it } = require('test')
const { Task } = require('./index.js')
const noop = Function.prototype
function assertEvent(task, expected, eventIndex = -1) {
const logs = task.$events
const actual = logs[eventIndex < 0 ? logs.length + eventIndex : eventIndex]
assert.equal(typeof actual, 'object')
assert.equal(typeof actual.id, 'string')
assert.equal(typeof actual.timestamp, 'number')
for (const key of Object.keys(expected)) {
assert.equal(actual[key], expected[key])
}
}
// like new Task() but with a custom onProgress which adds event to task.$events
function createTask(opts) {
const events = []
const task = new Task({ ...opts, onProgress: events.push.bind(events) })
task.$events = events
return task
}
describe('Task', function () {
describe('constructor', function () {
it('data properties are passed to the start event', async function () {
const data = { foo: 0, bar: 1 }
const task = createTask({ data })
await task.run(noop)
assertEvent(task, { ...data, type: 'start' }, 0)
})
})
it('subtasks events are passed to root task', async function () {
const task = createTask()
const result = {}
await task.run(async () => {
await new Task().run(() => result)
})
assert.equal(task.$events.length, 4)
assertEvent(task, { type: 'start', parentId: task.id }, 1)
assertEvent(task, { type: 'end', status: 'success', result }, 2)
})
describe('.abortSignal', function () {
it('is undefined when run outside a task', function () {
assert.equal(Task.abortSignal, undefined)
})
it('is the current abort signal when run inside a task', async function () {
const task = createTask()
await task.run(() => {
const { abortSignal } = Task
assert.equal(abortSignal.aborted, false)
task.abort()
assert.equal(abortSignal.aborted, true)
})
})
})
describe('.abort()', function () {
it('aborts if the task fails with the abort reason', async function () {
const task = createTask()
const reason = {}
await task
.run(() => {
task.abort(reason)
Task.abortSignal.throwIfAborted()
})
.catch(noop)
assert.equal(task.status, 'aborted')
assert.equal(task.$events.length, 2)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
})
it('does not abort if the task fails without the abort reason', async function () {
const task = createTask()
const result = new Error()
await task
.run(() => {
task.abort({})
throw result
})
.catch(noop)
assert.equal(task.status, 'failure')
assert.equal(task.$events.length, 2)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'failure', result }, 1)
})
it('does not abort if the task succeeds', async function () {
const task = createTask()
const result = {}
await task
.run(() => {
task.abort({})
return result
})
.catch(noop)
assert.equal(task.status, 'success')
assert.equal(task.$events.length, 2)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'success', result }, 1)
})
it('aborts before task is running', function () {
const task = createTask()
const reason = {}
task.abort(reason)
assert.equal(task.status, 'aborted')
assert.equal(task.$events.length, 2)
assertEvent(task, { type: 'start' }, 0)
assertEvent(task, { type: 'end', status: 'aborted', result: reason }, 1)
})
})
describe('.info()', function () {
it('does nothing when run outside a task', function () {
Task.info('foo')
})
it('emits an info message when run inside a task', async function () {
const task = createTask()
await task.run(() => {
Task.info('foo')
assertEvent(task, {
data: undefined,
message: 'foo',
type: 'info',
})
})
})
})
describe('.set()', function () {
it('does nothing when run outside a task', function () {
Task.set('progress', 10)
})
it('emits a property event when run inside a task', async function () {
const task = createTask()
await task.run(() => {
Task.set('progress', 10)
assertEvent(task, {
name: 'progress',
type: 'property',
value: 10,
})
})
})
})
describe('.warning()', function () {
it('does nothing when run outside a task', function () {
Task.warning('foo')
})
it('emits a warning message when run inside a task', async function () {
const task = createTask()
await task.run(() => {
Task.warning('foo')
assertEvent(task, {
data: undefined,
message: 'foo',
type: 'warning',
})
})
})
})
describe('#id', function () {
it('can be set', function () {
const task = createTask()
task.id = 'foo'
assert.equal(task.id, 'foo')
})
it('cannot be set more than once', function () {
const task = createTask()
task.id = 'foo'
assert.throws(() => {
task.id = 'bar'
}, TypeError)
})
it('is randomly generated if not set', function () {
assert.notEqual(createTask().id, createTask().id)
})
it('cannot be set after being observed', function () {
const task = createTask()
noop(task.id)
assert.throws(() => {
task.id = 'bar'
}, TypeError)
})
})
describe('#status', function () {
it('starts as pending', function () {
assert.equal(createTask().status, 'pending')
})
it('changes to success when it finishes without error', async function () {
const task = createTask()
await task.run(noop)
assert.equal(task.status, 'success')
})
it('changes to failure when it finishes with an error', async function () {
const task = createTask()
await task
.run(() => {
throw Error()
})
.catch(noop)
assert.equal(task.status, 'failure')
})
it('changes to aborted after run is complete', async function () {
const task = createTask()
await task
.run(() => {
task.abort()
assert.equal(task.status, 'pending')
Task.abortSignal.throwIfAborted()
})
.catch(noop)
assert.equal(task.status, 'aborted')
})
it('changes to aborted if aborted when not running', async function () {
const task = createTask()
task.abort()
assert.equal(task.status, 'aborted')
})
})
function makeRunTests(run) {
it('starts the task', async function () {
const task = createTask()
await run(task, () => {
assertEvent(task, { type: 'start' })
})
})
it('finishes the task on success', async function () {
const task = createTask()
await run(task, () => 'foo')
assert.equal(task.status, 'success')
assertEvent(task, {
status: 'success',
result: 'foo',
type: 'end',
})
})
it('fails the task on error', async function () {
const task = createTask()
const e = new Error()
await run(task, () => {
throw e
}).catch(noop)
assert.equal(task.status, 'failure')
assertEvent(task, {
status: 'failure',
result: e,
type: 'end',
})
})
}
describe('.run', function () {
makeRunTests((task, fn) => task.run(fn))
})
describe('.wrap', function () {
makeRunTests((task, fn) => task.wrap(fn)())
})
function makeRunInsideTests(run) {
it('starts the task', async function () {
const task = createTask()
await run(task, () => {
assertEvent(task, { type: 'start' })
})
})
it('does not finish the task on success', async function () {
const task = createTask()
await run(task, () => 'foo')
assert.equal(task.status, 'pending')
})
it('fails the task on error', async function () {
const task = createTask()
const e = new Error()
await run(task, () => {
throw e
}).catch(noop)
assert.equal(task.status, 'failure')
assertEvent(task, {
status: 'failure',
result: e,
type: 'end',
})
})
}
describe('.runInside', function () {
makeRunInsideTests((task, fn) => task.runInside(fn))
})
describe('.wrapInside', function () {
makeRunInsideTests((task, fn) => task.wrapInside(fn)())
})
})

View File

@@ -13,12 +13,16 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.0",
"version": "0.1.2",
"engines": {
"node": ">=14"
},
"devDependencies": {
"test": "^3.3.0"
},
"scripts": {
"postversion": "npm publish --access public"
"postversion": "npm publish --access public",
"test": "node--test"
},
"exports": {
".": "./index.js",

View File

@@ -7,8 +7,8 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.35.0",
"@xen-orchestra/fs": "^3.3.4",
"@xen-orchestra/backups": "^0.38.0",
"@xen-orchestra/fs": "^4.0.0",
"filenamify": "^4.1.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
@@ -27,7 +27,7 @@
"scripts": {
"postversion": "npm publish --access public"
},
"version": "1.0.5",
"version": "1.0.8",
"license": "AGPL-3.0-or-later",
"author": {
"name": "Vates SAS",

View File

@@ -1,307 +1,19 @@
'use strict'
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const pTimeout = require('promise-toolbox/timeout')
const { compileTemplate } = require('@xen-orchestra/template')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { Metadata } = require('./_runners/Metadata.js')
const { VmsRemote } = require('./_runners/VmsRemote.js')
const { VmsXapi } = require('./_runners/VmsXapi.js')
const { extractIdsFromSimplePattern } = require('./extractIdsFromSimplePattern.js')
const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
const { Task } = require('./Task.js')
const { VmBackup } = require('./_VmBackup.js')
const { XoMetadataBackup } = require('./_XoMetadataBackup.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const noop = Function.prototype
const getAdaptersByRemote = adapters => {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}
const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
const DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
const DEFAULT_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}
exports.Backup = class Backup {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const { type } = job
const baseSettings = { ...DEFAULT_SETTINGS }
if (type === 'backup') {
Object.assign(baseSettings, DEFAULT_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
this.run = this._runVmBackup
} else if (type === 'metadataBackup') {
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
this.run = this._runMetadataBackup
} else {
  throw new Error(`No runner for the backup type ${type}`)
}
Object.assign(baseSettings, job.settings[''])
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
async _runMetadataBackup() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
async _runVmBackup() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () =>
new VmBackup({
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}).run()
)
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}

View File

@@ -3,12 +3,14 @@
const { Task } = require('./Task')
exports.HealthCheckVmBackup = class HealthCheckVmBackup {
#xapi
#restoredVm
#timeout
#xapi
constructor({ restoredVm, xapi }) {
constructor({ restoredVm, timeout = 10 * 60 * 1000, xapi }) {
this.#restoredVm = restoredVm
this.#xapi = xapi
this.#timeout = timeout
}
async run() {
@@ -23,7 +25,12 @@ exports.HealthCheckVmBackup = class HealthCheckVmBackup {
// remove vifs
await Promise.all(restoredVm.$VIFs.map(vif => xapi.callAsync('VIF.destroy', vif.$ref)))
const waitForScript = restoredVm.tags.includes('xo-backup-health-check-xenstore')
if (waitForScript) {
await restoredVm.set_xenstore_data({
'vm-data/xo-backup-health-check': 'planned',
})
}
const start = new Date()
// start Vm
@@ -34,7 +41,7 @@ exports.HealthCheckVmBackup = class HealthCheckVmBackup {
false // Skip pre-boot checks?
)
const started = new Date()
const timeout = 10 * 60 * 1000
const timeout = this.#timeout
const startDuration = started - start
let remainingTimeout = timeout - startDuration
@@ -52,12 +59,52 @@ exports.HealthCheckVmBackup = class HealthCheckVmBackup {
remainingTimeout -= running - started
if (remainingTimeout < 0) {
throw new Error(`local xapi did not get Runnig state for VM ${restoredId} after ${timeout / 1000} second`)
throw new Error(`local xapi did not get Running state for VM ${restoredId} after ${timeout / 1000} second`)
}
// wait for the guest tool version to be defined
await xapi.waitObjectState(restoredVm.guest_metrics, gm => gm?.PV_drivers_version?.major !== undefined, {
timeout: remainingTimeout,
})
const guestToolsReady = new Date()
remainingTimeout -= guestToolsReady - running
if (remainingTimeout < 0) {
throw new Error(`local xapi did not get the guest tools check ${restoredId} after ${timeout / 1000} second`)
}
if (waitForScript) {
const startedRestoredVm = await xapi.waitObjectState(
restoredVm.$ref,
vm =>
vm?.xenstore_data !== undefined &&
(vm.xenstore_data['vm-data/xo-backup-health-check'] === 'success' ||
vm.xenstore_data['vm-data/xo-backup-health-check'] === 'failure'),
{
timeout: remainingTimeout,
}
)
const scriptOk = new Date()
remainingTimeout -= scriptOk - guestToolsReady
if (remainingTimeout < 0) {
throw new Error(
`Backup health check script did not update vm-data/xo-backup-health-check of ${restoredId} after ${
timeout / 1000
} second, got ${
startedRestoredVm.xenstore_data['vm-data/xo-backup-health-check']
} instead of 'success' or 'failure'`
)
}
if (startedRestoredVm.xenstore_data['vm-data/xo-backup-health-check'] !== 'success') {
const message = startedRestoredVm.xenstore_data['vm-data/xo-backup-health-check-error']
if (message) {
throw new Error(`Backup health check script failed with message ${message} for VM ${restoredId} `)
} else {
throw new Error(`Backup health check script failed for VM ${restoredId} `)
}
}
Task.info('Backup health check script successfully executed')
}
}
)
}

View File

@@ -3,14 +3,14 @@
const assert = require('assert')
const { formatFilenameDate } = require('./_filenameDate.js')
const { importDeltaVm } = require('./_deltaVm.js')
const { importIncrementalVm } = require('./_incrementalVm.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
exports.ImportVmBackup = class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
this._adapter = adapter
this._importDeltaVmSettings = { newMacAddresses, mapVdisSrs }
this._importIncrementalVmSettings = { newMacAddresses, mapVdisSrs }
this._metadata = metadata
this._srUuid = srUuid
this._xapi = xapi
@@ -31,11 +31,11 @@ exports.ImportVmBackup = class ImportVmBackup {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(this._importDeltaVmSettings.mapVdisSrs)
Object.entries(this._importIncrementalVmSettings.mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await adapter.readDeltaVmBackup(metadata, ignoredVdis)
backup = await adapter.readIncrementalVmBackup(metadata, ignoredVdis)
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
@@ -49,8 +49,8 @@ exports.ImportVmBackup = class ImportVmBackup {
const vmRef = isFull
? await xapi.VM_import(backup, srRef)
: await importDeltaVm(backup, await xapi.getRecord('SR', srRef), {
...this._importDeltaVmSettings,
: await importIncrementalVm(backup, await xapi.getRecord('SR', srRef), {
...this._importIncrementalVmSettings,
detectBase: false,
})

View File

@@ -333,7 +333,7 @@ class RemoteAdapter {
const RE_VHDI = /^vhdi(\d+)$/
const handler = this._handler
const diskPath = handler._getFilePath('/' + diskId)
const diskPath = handler.getFilePath('/' + diskId)
const mountDir = yield getTmpDir()
await fromCallback(execFile, 'vhdimount', [diskPath, mountDir])
try {
@@ -404,20 +404,27 @@ class RemoteAdapter {
return `${baseName}.vhd`
}
async listAllVmBackups() {
async listAllVms() {
const handler = this._handler
const backups = { __proto__: null }
await asyncMap(await handler.list(BACKUP_DIR), async entry => {
const vmsUuids = []
await asyncEach(await handler.list(BACKUP_DIR), async entry => {
// ignore hidden and lock files
if (entry[0] !== '.' && !entry.endsWith('.lock')) {
const vmBackups = await this.listVmBackups(entry)
if (vmBackups.length !== 0) {
backups[entry] = vmBackups
}
vmsUuids.push(entry)
}
})
return vmsUuids
}
async listAllVmBackups() {
const vmsUuids = await this.listAllVms()
const backups = { __proto__: null }
await asyncEach(vmsUuids, async vmUuid => {
const vmBackups = await this.listVmBackups(vmUuid)
if (vmBackups.length !== 0) {
backups[vmUuid] = vmBackups
}
})
return backups
}
@@ -691,8 +698,8 @@ class RemoteAdapter {
}
// open the hierarchy of ancestors until we find a full one
async _createSyntheticStream(handler, path) {
const disposableSynthetic = await VhdSynthetic.fromVhdChain(handler, path)
async _createVhdStream(handler, path, { useChain }) {
const disposableSynthetic = useChain ? await VhdSynthetic.fromVhdChain(handler, path) : await openVhd(handler, path)
// I don't want the vhds to be disposed on return
// but only when the stream is done ( or failed )
@@ -717,7 +724,7 @@ class RemoteAdapter {
return stream
}
async readDeltaVmBackup(metadata, ignoredVdis) {
async readIncrementalVmBackup(metadata, ignoredVdis, { useChain = true } = {}) {
const handler = this._handler
const { vbds, vhds, vifs, vm, vmSnapshot } = metadata
const dir = dirname(metadata._filename)
@@ -725,7 +732,7 @@ class RemoteAdapter {
const streams = {}
await asyncMapSettled(Object.keys(vdis), async ref => {
streams[`${ref}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[ref]))
streams[`${ref}.vhd`] = await this._createVhdStream(handler, join(dir, vhds[ref]), { useChain })
})
return {

View File

@@ -1,7 +1,7 @@
'use strict'
const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
const { PATH_DB_DUMP } = require('./_PoolMetadataBackup.js')
const { PATH_DB_DUMP } = require('./_runners/_PoolMetadataBackup.js')
exports.RestoreMetadataBackup = class RestoreMetadataBackup {
constructor({ backupId, handler, xapi }) {

View File

@@ -1,515 +0,0 @@
'use strict'
const assert = require('assert')
const findLast = require('lodash/findLast.js')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const vhdStreamValidator = require('vhd-lib/vhdStreamValidator.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { pipeline } = require('node:stream')
const { DeltaBackupWriter } = require('./writers/DeltaBackupWriter.js')
const { DeltaReplicationWriter } = require('./writers/DeltaReplicationWriter.js')
const { exportDeltaVm } = require('./_deltaVm.js')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
const { FullBackupWriter } = require('./writers/FullBackupWriter.js')
const { FullReplicationWriter } = require('./writers/FullReplicationWriter.js')
const { getOldEntries } = require('./_getOldEntries.js')
const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
const { debug, warn } = createLogger('xo:backups:VmBackup')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
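The local `asyncEach` above is the sequential counterpart of `asyncMap`: each callback is awaited before the next one starts, which is what `_callWriters` relies on when `parallel` is `false` (e.g. for `writer.checkBaseVdis()`). A quick illustrative trace of the ordering guarantee:

```javascript
// Sequential iteration: each callback finishes before the next starts,
// regardless of how long the individual calls take.
const asyncEach = async (iterable, fn, thisArg = iterable) => {
  for (const item of iterable) {
    await fn.call(thisArg, item)
  }
}

const delay = ms => new Promise(resolve => setTimeout(resolve, ms))
```

Unlike `asyncMap`, a slow first item here delays all the following ones instead of running alongside them.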
const forkDeltaExport = deltaExport =>
Object.create(deltaExport, {
streams: {
value: mapValues(deltaExport.streams, forkStreamUnpipe),
},
})
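`forkDeltaExport` above relies on `Object.create`: the fork shadows only `streams`, while every other property of the delta export is still read from the original through the prototype chain. A self-contained illustration of the pattern (`mapValues` here is a hand-rolled stand-in for the lodash helper):

```javascript
// Hand-rolled stand-in for lodash's mapValues.
const mapValues = (object, fn) =>
  Object.fromEntries(Object.entries(object).map(([key, value]) => [key, fn(value)]))

// Fork an export object: `streams` is shadowed with forked copies, all
// other properties are inherited from the original, untouched.
const forkExport = (original, forkStream) =>
  Object.create(original, {
    streams: { value: mapValues(original.streams, forkStream) },
  })
```

This keeps the fork cheap: only the streams are duplicated, metadata is shared by reference.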
const noop = Function.prototype
class VmBackup {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job otherwise they
// will be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
this.vm = vm
const { tags } = this.vm
// VM (snapshot) that is really exported
this.exportedVm = undefined
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isDelta = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._isDelta
? [DeltaBackupWriter, DeltaReplicationWriter]
: [FullBackupWriter, FullReplicationWriter]
const allSettings = job.settings
Object.keys(remoteAdapters).forEach(remoteId => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(new BackupWriter({ backup: this, remoteId, settings: targetSettings }))
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(new ReplicationWriter({ backup: this, sr, settings: targetSettings }))
}
})
}
}
// calls fn for each writer, warns of any errors, and throws only if there are no writers left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { duration: step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own sub-tasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied on manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const { vm } = this
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const { vm } = this
const xapi = this._xapi
const settings = this._settings
const doSnapshot =
settings.unconditionalSnapshot ||
this._isDelta ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
if (doSnapshot) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this.exportedVm = await xapi.getRecord('VM', snapshotRef)
return this.exportedVm.uuid
})
} else {
this.exportedVm = vm
this.timestamp = Date.now()
}
}
async _copyDelta() {
const { exportedVm } = this
const baseVm = this._baseVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportDeltaVm(exportedVm, baseVm, {
fullVdisRequired,
})
// since NBD is network based, if one disk uses NBD, all the disks use it,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
sizeContainers,
timestamp,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _copyFull() {
const { compression } = this.job
const stream = this._throttleStream(
await this._xapi.VM_export(this.exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this.vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this.vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this.vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// check if current VM has tags
const { tags } = this.vm
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
return
}
await this._callWriters(writer => writer.healthCheck(this._healthCheckSr), 'writer.healthCheck()')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
if (this._isDelta) {
await this._selectBaseVm()
}
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const { vm } = this
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
await (this._isDelta ? this._copyDelta() : this._copyFull())
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
exports.VmBackup = VmBackup
decorateMethodsWith(VmBackup, {
run: defer,
})
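`VmBackup.run` is decorated with golike-defer's `defer`, which injects a `$defer` callback as the first argument; callbacks registered with it run in reverse order once the method settles, like Go's `defer` (this is how `writer.afterBackup()` above is guaranteed to run even when the backup fails). A minimal re-implementation of the idea, for illustration only:

```javascript
// Sketch of the golike-defer pattern: deferred callbacks run in reverse
// registration order when the wrapped function settles, success or not.
const withDefer = fn => async (...args) => {
  const deferred = []
  const $defer = cb => deferred.push(cb)
  try {
    return await fn($defer, ...args)
  } finally {
    for (const cb of deferred.reverse()) {
      await cb()
    }
  }
}
```

The reverse order mirrors stack unwinding: the last resource acquired is the first one released.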

View File

@@ -13,10 +13,10 @@ const { createDebounceResource } = require('@vates/disposable/debounceResource.j
const { decorateMethodsWith } = require('@vates/decorate-with')
const { deduped } = require('@vates/disposable/deduped.js')
const { getHandler } = require('@xen-orchestra/fs')
const { createRunner } = require('./Backup.js')
const { parseDuration } = require('@vates/parse-duration')
const { Xapi } = require('@xen-orchestra/xapi')
const { Backup } = require('./Backup.js')
const { RemoteAdapter } = require('./RemoteAdapter.js')
const { Task } = require('./Task.js')
@@ -48,7 +48,7 @@ class BackupWorker {
}
run() {
return new Backup({
return createRunner({
config: this.#config,
getAdapter: remoteId => this.getAdapter(this.#remotes[remoteId]),
getConnectedRecord: Disposable.factory(async function* getConnectedRecord(type, uuid) {

View File

@@ -3,7 +3,6 @@
const { beforeEach, afterEach, test, describe } = require('test')
const assert = require('assert').strict
const rimraf = require('rimraf')
const tmp = require('tmp')
const fs = require('fs-extra')
const uuid = require('uuid')
@@ -14,6 +13,7 @@ const { VHDFOOTER, VHDHEADER } = require('./tests.fixtures.js')
const { VhdFile, Constants, VhdDirectory, VhdAbstract } = require('vhd-lib')
const { checkAliases } = require('./_cleanVm')
const { dirname, basename } = require('path')
const { rimraf } = require('rimraf')
let tempDir, adapter, handler, jobId, vdiId, basePath, relativePath
const rootPath = 'xo-vm-backups/VMUUID/'

View File

@@ -33,7 +33,7 @@ const resolveUuid = async (xapi, cache, uuid, type) => {
return ref
}
exports.exportDeltaVm = async function exportDeltaVm(
exports.exportIncrementalVm = async function exportIncrementalVm(
vm,
baseVm,
{
@@ -143,18 +143,18 @@ exports.exportDeltaVm = async function exportDeltaVm(
)
}
exports.importDeltaVm = defer(async function importDeltaVm(
exports.importIncrementalVm = defer(async function importIncrementalVm(
$defer,
deltaVm,
incrementalVm,
sr,
{ cancelToken = CancelToken.none, detectBase = true, mapVdisSrs = {}, newMacAddresses = false } = {}
) {
const { version } = deltaVm
const { version } = incrementalVm
if (compareVersions(version, '1.0.0') < 0) {
throw new Error(`Unsupported delta backup version: ${version}`)
}
const vmRecord = deltaVm.vm
const vmRecord = incrementalVm.vm
const xapi = sr.$xapi
let baseVm
@@ -183,7 +183,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
baseVdis[vbd.VDI] = vbd.$VDI
}
})
const vdiRecords = deltaVm.vdis
const vdiRecords = incrementalVm.vdis
// 0. Create suspend_VDI
let suspendVdi
@@ -240,7 +240,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
await asyncMap(await xapi.getField('VM', vmRef, 'VBDs'), ref => ignoreErrors.call(xapi.call('VBD.destroy', ref)))
// 3. Create VDIs & VBDs.
const vbdRecords = deltaVm.vbds
const vbdRecords = incrementalVm.vbds
const vbds = groupBy(vbdRecords, 'VDI')
const newVdis = {}
await asyncMap(Object.keys(vdiRecords), async vdiRef => {
@@ -309,7 +309,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}
})
const { streams } = deltaVm
const { streams } = incrementalVm
await Promise.all([
// Import VDI contents.
@@ -326,7 +326,7 @@ exports.importDeltaVm = defer(async function importDeltaVm(
}),
// Create VIFs.
asyncMap(Object.values(deltaVm.vifs), vif => {
asyncMap(Object.values(incrementalVm.vifs), vif => {
let network = vif.$network$uuid && xapi.getObjectByUuid(vif.$network$uuid, undefined)
if (network === undefined) {
@@ -358,8 +358,8 @@ exports.importDeltaVm = defer(async function importDeltaVm(
])
await Promise.all([
deltaVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', deltaVm.vm.name_label),
incrementalVm.vm.ha_always_run && xapi.setField('VM', vmRef, 'ha_always_run', true),
xapi.setField('VM', vmRef, 'name_label', incrementalVm.vm.name_label),
])
return vmRef

View File

@@ -0,0 +1,134 @@
'use strict'
const { asyncMap } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
const { XoMetadataBackup } = require('./_XoMetadataBackup.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const DEFAULT_METADATA_SETTINGS = {
retentionPoolMetadata: 0,
retentionXoMetadata: 0,
}
exports.Metadata = class MetadataBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const schedule = this._schedule
const job = this._job
const remoteIds = extractIdsFromSimplePattern(job.remotes)
if (remoteIds.length === 0) {
throw new Error('metadata backup job cannot run without remotes')
}
const config = this._config
const poolIds = extractIdsFromSimplePattern(job.pools)
const isEmptyPools = poolIds.length === 0
const isXoMetadata = job.xoMetadata !== undefined
if (!isXoMetadata && isEmptyPools) {
throw new Error('no metadata mode found')
}
const settings = this._settings
const { retentionPoolMetadata, retentionXoMetadata } = settings
if (
(retentionPoolMetadata === 0 && retentionXoMetadata === 0) ||
(!isXoMetadata && retentionPoolMetadata === 0) ||
(isEmptyPools && retentionXoMetadata === 0)
) {
throw new Error('no retentions corresponding to the metadata modes found')
}
await Disposable.use(
Disposable.all(
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get pool record',
data: { type: 'pool', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(remoteIds.map(id => this._getAdapter(id))),
async (pools, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0) {
return
}
remoteAdapters = getAdaptersByRemote(remoteAdapters)
// remove pools that failed (already handled)
pools = pools.filter(_ => _ !== undefined)
const promises = []
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
id: pool.$id,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
},
},
() =>
new PoolMetadataBackup({
config,
job,
pool,
remoteAdapters,
schedule,
settings,
}).run()
)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
type: 'xo',
},
},
() =>
new XoMetadataBackup({
config,
job,
remoteAdapters,
schedule,
settings,
}).run()
)
)
}
await Promise.all(promises)
}
)
}
}
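For readability, the three-way retention guard in `run()` above can be restated as a standalone predicate (a hypothetical helper, not part of the codebase): a metadata job is runnable only when at least one enabled mode has a non-zero retention.

```javascript
// Hypothetical helper mirroring the retention guard in MetadataBackupRunner.run().
function hasUsableRetention({ isXoMetadata, isEmptyPools, retentionPoolMetadata, retentionXoMetadata }) {
  // both retentions disabled: nothing to keep anywhere
  if (retentionPoolMetadata === 0 && retentionXoMetadata === 0) return false
  // only the pool metadata mode is enabled, but its retention is 0
  if (!isXoMetadata && retentionPoolMetadata === 0) return false
  // only the XO metadata mode is enabled, but its retention is 0
  if (isEmptyPools && retentionXoMetadata === 0) return false
  return true
}
```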


@@ -0,0 +1,98 @@
'use strict'
const { asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { Task } = require('../Task.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const { FullRemote } = require('./_vmRunners/FullRemote.js')
const { IncrementalRemote } = require('./_vmRunners/IncrementalRemote.js')
const DEFAULT_REMOTE_VM_SETTINGS = {
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
timeout: 0,
validateVhdStreams: false,
vmTimeout: 0,
}
exports.VmsRemote = class RemoteVmsBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_REMOTE_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
() => this._getAdapter(job.sourceRemote),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
Disposable.all(
extractIdsFromSimplePattern(job.remotes).map(id => id !== job.sourceRemote && this._getAdapter(id))
),
async ({ adapter: sourceRemoteAdapter }, healthCheckSr, remoteAdapters) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => !!_)
if (remoteAdapters.length === 0) {
return
}
const vmsUuids = await sourceRemoteAdapter.listAllVms()
Task.info('vms', { vms: vmsUuids })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
const opts = {
baseSettings,
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vmUuid] },
sourceRemoteAdapter,
throttleStream,
vmUuid,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalRemote(opts)
} else if (job.mode === 'full') {
vmBackup = new FullRemote(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented for mirror backup`)
}
return runTask(taskStart, () => vmBackup.run())
}
const { concurrency } = settings
await asyncMapSettled(vmsUuids, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}
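The last line of `run()` above wraps `handleVm` with `limit-concurrency-decorator` only when a limit is set. As an illustration of that dispatch, here is a simplified promise-based limiter (the real package is more featureful; all names below are assumed for the sketch):

```javascript
// Simplified stand-in for limit-concurrency-decorator: at most `limit`
// invocations of fn run at once, the rest wait in a FIFO queue.
function limitConcurrency(limit) {
  return fn => {
    let running = 0
    const queue = []
    const next = () => {
      if (running < limit && queue.length > 0) {
        running++
        const { args, resolve, reject } = queue.shift()
        Promise.resolve(fn(...args)).then(
          value => { running--; next(); resolve(value) },
          error => { running--; next(); reject(error) }
        )
      }
    }
    return (...args) =>
      new Promise((resolve, reject) => {
        queue.push({ args, resolve, reject })
        next()
      })
  }
}

// Mirror of the dispatch used by the runner: no limit means the bare handler.
const makeHandler = (concurrency, handleVm) =>
  concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm)
```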


@@ -0,0 +1,138 @@
'use strict'
const { asyncMapSettled } = require('@xen-orchestra/async-map')
const Disposable = require('promise-toolbox/Disposable')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { extractIdsFromSimplePattern } = require('../extractIdsFromSimplePattern.js')
const { Task } = require('../Task.js')
const createStreamThrottle = require('./_createStreamThrottle.js')
const { DEFAULT_SETTINGS, Abstract } = require('./_Abstract.js')
const { runTask } = require('./_runTask.js')
const { getAdaptersByRemote } = require('./_getAdaptersByRemote.js')
const { IncrementalXapi } = require('./_vmRunners/IncrementalXapi.js')
const { FullXapi } = require('./_vmRunners/FullXapi.js')
const DEFAULT_XAPI_VM_SETTINGS = {
bypassVdiChainsCheck: false,
checkpointSnapshot: false,
concurrency: 2,
copyRetention: 0,
deleteFirst: false,
exportRetention: 0,
fullInterval: 0,
healthCheckSr: undefined,
healthCheckVmsWithTags: [],
maxExportRate: 0,
maxMergedDeltasPerRun: Infinity,
offlineBackup: false,
offlineSnapshot: false,
snapshotRetention: 0,
timeout: 0,
useNbd: false,
unconditionalSnapshot: false,
validateVhdStreams: false,
vmTimeout: 0,
}
exports.VmsXapi = class VmsXapiBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
Object.assign(baseSettings, DEFAULT_XAPI_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
Object.assign(baseSettings, job.settings[''])
return baseSettings
}
async run() {
const job = this._job
// FIXME: proper SimpleIdPattern handling
const getSnapshotNameLabel = this._getSnapshotNameLabel
const schedule = this._schedule
const settings = this._settings
const throttleStream = createStreamThrottle(settings.maxExportRate)
const config = this._config
await Disposable.use(
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
{
name: 'get SR record',
data: { type: 'SR', id },
},
() => Promise.reject(error)
)
})
)
),
Disposable.all(extractIdsFromSimplePattern(job.remotes).map(id => this._getAdapter(id))),
() => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
async (srs, remoteAdapters, healthCheckSr) => {
// remove adapters that failed (already handled)
remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)
// remove srs that failed (already handled)
srs = srs.filter(_ => _ !== undefined)
if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
return
}
const vmIds = extractIdsFromSimplePattern(job.vms)
Task.info('vms', { vms: vmIds })
remoteAdapters = getAdaptersByRemote(remoteAdapters)
const allSettings = this._job.settings
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () => {
const opts = {
baseSettings,
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
schedule,
settings: { ...settings, ...allSettings[vm.uuid] },
srs,
throttleStream,
vm,
}
let vmBackup
if (job.mode === 'delta') {
vmBackup = new IncrementalXapi(opts)
} else if (job.mode === 'full') {
vmBackup = new FullXapi(opts)
} else {
throw new Error(`Job mode ${job.mode} not implemented`)
}
return vmBackup.run()
})
}),
error =>
runTask(taskStart, () => {
throw error
})
)
}
const { concurrency } = settings
await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
}
)
}
}


@@ -0,0 +1,51 @@
'use strict'
const Disposable = require('promise-toolbox/Disposable')
const pTimeout = require('promise-toolbox/timeout')
const { compileTemplate } = require('@xen-orchestra/template')
const { runTask } = require('./_runTask.js')
const { RemoteTimeoutError } = require('./_RemoteTimeoutError.js')
exports.DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
}
exports.Abstract = class AbstractRunner {
constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
this._config = config
this._getRecord = getConnectedRecord
this._job = job
this._schedule = schedule
this._getSnapshotNameLabel = compileTemplate(config.snapshotNameLabelTpl, {
'{job.name}': job.name,
'{vm.name_label}': vm => vm.name_label,
})
const baseSettings = this._computeBaseSettings(config, job)
this._baseSettings = baseSettings
this._settings = { ...baseSettings, ...job.settings[schedule.id] }
const { getRemoteTimeout } = this._settings
this._getAdapter = async function (remoteId) {
try {
const disposable = await pTimeout.call(getAdapter(remoteId), getRemoteTimeout, new RemoteTimeoutError(remoteId))
return new Disposable(() => disposable.dispose(), {
adapter: disposable.value,
remoteId,
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
},
() => Promise.reject(error)
)
}
}
}
}
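The `getRemoteTimeout` handling above relies on `promise-toolbox/timeout`. As an illustration only, the same race can be sketched with plain promises (names assumed, not the library's API):

```javascript
class RemoteTimeoutError extends Error {
  constructor(remoteId) {
    super('timeout while getting the remote ' + remoteId)
    this.remoteId = remoteId
  }
}

// Race `promise` against a timer; reject with RemoteTimeoutError on expiry
// and always clear the timer so the event loop is not kept alive.
function withRemoteTimeout(promise, ms, remoteId) {
  let timer
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new RemoteTimeoutError(remoteId)), ms)
  })
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer))
}
```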


@@ -2,10 +2,10 @@
 const { asyncMap } = require('@xen-orchestra/async-map')
-const { DIR_XO_POOL_METADATA_BACKUPS } = require('./RemoteAdapter.js')
+const { DIR_XO_POOL_METADATA_BACKUPS } = require('../RemoteAdapter.js')
 const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js')
-const { formatFilenameDate } = require('./_filenameDate.js')
-const { Task } = require('./Task.js')
+const { formatFilenameDate } = require('../_filenameDate.js')
+const { Task } = require('../Task.js')
 const PATH_DB_DUMP = '/pool/xmldbdump'
 exports.PATH_DB_DUMP = PATH_DB_DUMP


@@ -0,0 +1,8 @@
'use strict'
class RemoteTimeoutError extends Error {
constructor(remoteId) {
super('timeout while getting the remote ' + remoteId)
this.remoteId = remoteId
}
}
exports.RemoteTimeoutError = RemoteTimeoutError


@@ -2,9 +2,9 @@
 const { asyncMap } = require('@xen-orchestra/async-map')
-const { DIR_XO_CONFIG_BACKUPS } = require('./RemoteAdapter.js')
-const { formatFilenameDate } = require('./_filenameDate.js')
-const { Task } = require('./Task.js')
+const { DIR_XO_CONFIG_BACKUPS } = require('../RemoteAdapter.js')
+const { formatFilenameDate } = require('../_filenameDate.js')
+const { Task } = require('../Task.js')
 exports.XoMetadataBackup = class XoMetadataBackup {
 constructor({ config, job, remoteAdapters, schedule, settings }) {


@@ -0,0 +1,9 @@
'use strict'
const getAdaptersByRemote = adapters => {
const adaptersByRemote = {}
adapters.forEach(({ adapter, remoteId }) => {
adaptersByRemote[remoteId] = adapter
})
return adaptersByRemote
}
exports.getAdaptersByRemote = getAdaptersByRemote


@@ -0,0 +1,6 @@
'use strict'
const { Task } = require('../Task.js')
const noop = Function.prototype
const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
exports.runTask = runTask


@@ -0,0 +1,53 @@
'use strict'
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { AbstractRemote } = require('./_AbstractRemote')
const { FullRemoteWriter } = require('../_writers/FullRemoteWriter')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe')
const { watchStreamSize } = require('../../_watchStreamSize')
const { Task } = require('../../Task')
class FullRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return FullRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'full')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
const stream = await this._sourceRemoteAdapter.readFullVmBackup(metadata)
const sizeContainer = watchStreamSize(stream)
// @todo shouldn't transfer backup if it will be deleted by retention policy (higher retention on source than destination)
await this._callWriters(
writer =>
writer.run({
stream: forkStreamUnpipe(stream),
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
sizeContainer,
}),
'writer.run()'
)
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
exports.FullRemote = FullRemoteVmBackupRunner
decorateMethodsWith(FullRemoteVmBackupRunner, {
_run: defer,
})


@@ -0,0 +1,65 @@
'use strict'
const { createLogger } = require('@xen-orchestra/log')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe.js')
const { FullRemoteWriter } = require('../_writers/FullRemoteWriter.js')
const { FullXapiWriter } = require('../_writers/FullXapiWriter.js')
const { watchStreamSize } = require('../../_watchStreamSize.js')
const { AbstractXapi } = require('./_AbstractXapi.js')
const { debug } = createLogger('xo:backups:FullXapiVmBackup')
exports.FullXapi = class FullXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [FullRemoteWriter, FullXapiWriter]
}
_mustDoSnapshot() {
const vm = this._vm
const settings = this._settings
return (
settings.unconditionalSnapshot ||
(!settings.offlineBackup && vm.power_state === 'Running') ||
settings.snapshotRetention !== 0
)
}
_selectBaseVm() {}
async _copy() {
const { compression } = this.job
const vm = this._vm
const exportedVm = this._exportedVm
const stream = this._throttleStream(
await this._xapi.VM_export(exportedVm.$ref, {
compress: Boolean(compression) && (compression === 'native' ? 'gzip' : 'zstd'),
useSnapshot: false,
})
)
const sizeContainer = watchStreamSize(stream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.run({
sizeContainer,
stream: forkStreamUnpipe(stream),
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.run()'
)
const { size } = sizeContainer
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
}
}


@@ -0,0 +1,67 @@
'use strict'
const assert = require('node:assert')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { mapValues } = require('lodash')
const { Task } = require('../../Task')
const { AbstractRemote } = require('./_AbstractRemote')
const { IncrementalRemoteWriter } = require('../_writers/IncrementalRemoteWriter')
const { forkDeltaExport } = require('./_forkDeltaExport')
const isVhdDifferencingDisk = require('vhd-lib/isVhdDifferencingDisk')
const { asyncEach } = require('@vates/async-each')
class IncrementalRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {
return IncrementalRemoteWriter
}
async _run($defer) {
const transferList = await this._computeTransferList(({ mode }) => mode === 'delta')
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
if (transferList.length > 0) {
for (const metadata of transferList) {
assert.strictEqual(metadata.mode, 'delta')
await this._callWriters(writer => writer.prepare({ isBase: metadata.isBase }), 'writer.prepare()')
const incrementalExport = await this._sourceRemoteAdapter.readIncrementalVmBackup(metadata, undefined, {
useChain: false,
})
const differentialVhds = {}
await asyncEach(Object.entries(incrementalExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
incrementalExport.streams = mapValues(incrementalExport.streams, this._throttleStream)
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(incrementalExport),
differentialVhds,
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,
}),
'writer.transfer()'
)
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
// for healthcheck
this._tags = metadata.vm.tags
}
} else {
Task.info('No new data to upload for this VM')
}
}
}
exports.IncrementalRemote = IncrementalRemoteVmBackupRunner
decorateMethodsWith(IncrementalRemoteVmBackupRunner, {
_run: defer,
})


@@ -0,0 +1,175 @@
'use strict'
const findLast = require('lodash/findLast.js')
const keyBy = require('lodash/keyBy.js')
const mapValues = require('lodash/mapValues.js')
const vhdStreamValidator = require('vhd-lib/vhdStreamValidator.js')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { pipeline } = require('node:stream')
const { IncrementalRemoteWriter } = require('../_writers/IncrementalRemoteWriter.js')
const { IncrementalXapiWriter } = require('../_writers/IncrementalXapiWriter.js')
const { exportIncrementalVm } = require('../../_incrementalVm.js')
const { Task } = require('../../Task.js')
const { watchStreamSize } = require('../../_watchStreamSize.js')
const { AbstractXapi } = require('./_AbstractXapi.js')
const { forkDeltaExport } = require('./_forkDeltaExport.js')
const isVhdDifferencingDisk = require('vhd-lib/isVhdDifferencingDisk')
const { asyncEach } = require('@vates/async-each')
const { debug } = createLogger('xo:backups:IncrementalXapiVmBackup')
const noop = Function.prototype
exports.IncrementalXapi = class IncrementalXapiVmBackupRunner extends AbstractXapi {
_getWriters() {
return [IncrementalRemoteWriter, IncrementalXapiWriter]
}
_mustDoSnapshot() {
return true
}
async _copy() {
const baseVm = this._baseVm
const vm = this._vm
const exportedVm = this._exportedVm
const fullVdisRequired = this._fullVdisRequired
const isFull = fullVdisRequired === undefined || fullVdisRequired.size !== 0
await this._callWriters(writer => writer.prepare({ isFull }), 'writer.prepare()')
const deltaExport = await exportIncrementalVm(exportedVm, baseVm, {
fullVdisRequired,
})
// since NBD is network based, if one disk uses NBD, all the disks use it,
// except the suspended VDI
if (Object.values(deltaExport.streams).some(({ _nbd }) => _nbd)) {
Task.info('Transfer data using NBD')
}
const differentialVhds = {}
// since isVhdDifferencingDisk reads and unshifts data from the stream,
// it must be done BEFORE any other stream transform
await asyncEach(Object.entries(deltaExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
})
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
if (this._settings.validateVhdStreams) {
deltaExport.streams = mapValues(deltaExport.streams, stream => pipeline(stream, vhdStreamValidator, noop))
}
deltaExport.streams = mapValues(deltaExport.streams, this._throttleStream)
const timestamp = Date.now()
await this._callWriters(
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
differentialVhds,
sizeContainers,
timestamp,
vm,
vmSnapshot: exportedVm,
}),
'writer.transfer()'
)
this._baseVm = exportedVm
if (baseVm !== undefined) {
await exportedVm.update_other_config(
'xo:backup:deltaChainLength',
String(+(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1)
)
}
// not the case if offlineBackup
if (exportedVm.is_a_snapshot) {
await exportedVm.update_other_config('xo:backup:exported', 'true')
}
const size = Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0)
const end = Date.now()
const duration = end - timestamp
debug('transfer complete', {
duration,
speed: duration !== 0 ? (size * 1e3) / 1024 / 1024 / duration : 0,
size,
})
await this._callWriters(writer => writer.cleanup(), 'writer.cleanup()')
}
async _selectBaseVm() {
const xapi = this._xapi
let baseVm = findLast(this._jobSnapshots, _ => 'xo:backup:exported' in _.other_config)
if (baseVm === undefined) {
debug('no base VM found')
return
}
const fullInterval = this._settings.fullInterval
const deltaChainLength = +(baseVm.other_config['xo:backup:deltaChainLength'] ?? 0) + 1
if (!(fullInterval === 0 || fullInterval > deltaChainLength)) {
debug('not using base VM because fullInterval reached')
return
}
const srcVdis = keyBy(await xapi.getRecords('VDI', await this._vm.$getDisks()), '$ref')
// resolve full record
baseVm = await xapi.getRecord('VM', baseVm.$ref)
const baseUuidToSrcVdi = new Map()
await asyncMap(await baseVm.$getDisks(), async baseRef => {
const [baseUuid, snapshotOf] = await Promise.all([
xapi.getField('VDI', baseRef, 'uuid'),
xapi.getField('VDI', baseRef, 'snapshot_of'),
])
const srcVdi = srcVdis[snapshotOf]
if (srcVdi !== undefined) {
baseUuidToSrcVdi.set(baseUuid, srcVdi)
} else {
debug('ignore snapshot VDI because no longer present on VM', {
vdi: baseUuid,
})
}
})
const presentBaseVdis = new Map(baseUuidToSrcVdi)
await this._callWriters(
writer => presentBaseVdis.size !== 0 && writer.checkBaseVdis(presentBaseVdis, baseVm),
'writer.checkBaseVdis()',
false
)
if (presentBaseVdis.size === 0) {
debug('no base VM found')
return
}
const fullVdisRequired = new Set()
baseUuidToSrcVdi.forEach((srcVdi, baseUuid) => {
if (presentBaseVdis.has(baseUuid)) {
debug('found base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
} else {
debug('missing base VDI', {
base: baseUuid,
vdi: srcVdi.uuid,
})
fullVdisRequired.add(srcVdi.uuid)
}
})
this._baseVm = baseVm
this._fullVdisRequired = fullVdisRequired
}
}


@@ -0,0 +1,95 @@
'use strict'
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { Task } = require('../../Task.js')
const { debug, warn } = createLogger('xo:backups:AbstractVmRunner')
class AggregateError extends Error {
constructor(errors, message) {
super(message)
this.errors = errors
}
}
const asyncEach = async (iterable, fn, thisArg = iterable) => {
for (const item of iterable) {
await fn.call(thisArg, item)
}
}
exports.Abstract = class AbstractVmBackupRunner {
// calls fn for each writer, warns of any errors, and throws only if no writers are left
async _callWriters(fn, step, parallel = true) {
const writers = this._writers
const n = writers.size
if (n === 0) {
return
}
async function callWriter(writer) {
const { name } = writer.constructor
try {
debug('writer step starting', { step, writer: name })
await fn(writer)
debug('writer step succeeded', { step, writer: name })
} catch (error) {
writers.delete(writer)
warn('writer step failed', { error, step, writer: name })
// these two steps are the only ones that are not already in their own subtasks
if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
Task.warning(
`the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
)
}
throw error
}
}
if (n === 1) {
const [writer] = writers
return callWriter(writer)
}
const errors = []
await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
try {
await callWriter(writer)
} catch (error) {
errors.push(error)
}
})
if (writers.size === 0) {
throw new AggregateError(errors, 'all targets have failed, step: ' + step)
}
}
async _healthCheck() {
const settings = this._settings
if (this._healthCheckSr === undefined) {
return
}
// check if current VM has tags
const tags = this._tags
const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
// create a task to have an info in the logs and reports
return Task.run(
{
name: 'health check',
},
() => {
Task.info(`This VM doesn't match the health check's tags for this schedule`)
}
)
}
await this._callWriters(writer => writer.healthCheck(), 'writer.healthCheck()')
}
}
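A minimal sketch of the `_callWriters` contract (illustrative names; the real method above also logs and emits task warnings, and parallelizes when asked): each writer runs the step, writers that fail are dropped, and the job aborts only once no writer remains.

```javascript
// Run `fn` on every writer; a failing writer is removed from the set so the
// remaining ones keep working. Throw an aggregate error only when all failed.
async function callWriters(writers, fn, step) {
  const errors = []
  for (const writer of [...writers]) {
    try {
      await fn(writer)
    } catch (error) {
      writers.delete(writer)
      errors.push(error)
    }
  }
  if (writers.size === 0 && errors.length > 0) {
    const aggregate = new Error('all targets have failed, step: ' + step)
    aggregate.errors = errors
    throw aggregate
  }
}
```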


@@ -0,0 +1,86 @@
'use strict'
const { Abstract } = require('./_Abstract')
const { getVmBackupDir } = require('../../_getVmBackupDir')
const { asyncEach } = require('@vates/async-each')
const { Disposable } = require('promise-toolbox')
exports.AbstractRemote = class AbstractRemoteVmBackupRunner extends Abstract {
constructor({
config,
job,
healthCheckSr,
remoteAdapters,
schedule,
settings,
sourceRemoteAdapter,
throttleStream,
vmUuid,
}) {
super()
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
this._healthCheckSr = healthCheckSr
this._sourceRemoteAdapter = sourceRemoteAdapter
this._throttleStream = throttleStream
this._vmUuid = vmUuid
const allSettings = job.settings
const writers = new Set()
this._writers = writers
const RemoteWriter = this._getRemoteWriter()
Object.entries(remoteAdapters).forEach(([remoteId, adapter]) => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
writers.add(new RemoteWriter({ adapter, config, healthCheckSr, job, vmUuid, remoteId, settings: targetSettings }))
})
}
async _computeTransferList(predicate) {
const vmBackups = await this._sourceRemoteAdapter.listVmBackups(this._vmUuid, predicate)
const localMetadata = new Map()
Object.values(vmBackups).forEach(metadata => {
const timestamp = metadata.timestamp
localMetadata.set(timestamp, metadata)
})
const nbRemotes = Object.keys(this.remoteAdapters).length
const remoteMetadatas = {}
await asyncEach(Object.values(this.remoteAdapters), async remoteAdapter => {
const remoteMetadata = await remoteAdapter.listVmBackups(this._vmUuid, predicate)
remoteMetadata.forEach(metadata => {
const timestamp = metadata.timestamp
remoteMetadatas[timestamp] = (remoteMetadatas[timestamp] ?? 0) + 1
})
})
let chain = []
const timestamps = [...localMetadata.keys()]
timestamps.sort()
for (const timestamp of timestamps) {
if (remoteMetadatas[timestamp] !== nbRemotes) {
// this backup is not present on all the remotes
// and should be retransferred if not found later
chain.push(localMetadata.get(timestamp))
} else {
// backup is present locally and on every remote: the chain has already been transferred
chain = []
}
}
return chain
}
async run() {
const handler = this._sourceRemoteAdapter._handler
await Disposable.use(await handler.lock(getVmBackupDir(this._vmUuid)), async () => {
await this._run()
await this._healthCheck()
})
}
}
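The loop in `_computeTransferList` can be isolated as a pure function to make the chain semantics testable (a hypothetical refactor, not the actual code): local backups are walked in timestamp order, a backup already present on every remote resets the pending chain, and anything after it is queued for transfer.

```javascript
// Illustrative re-implementation of the chain selection in _computeTransferList.
// localByTimestamp: Map of timestamp -> backup metadata on the source remote.
// remoteCounts: how many destination remotes already hold each timestamp.
function selectTransferChain(localByTimestamp, remoteCounts, nbRemotes) {
  let chain = []
  const timestamps = [...localByTimestamp.keys()].sort((a, b) => a - b)
  for (const timestamp of timestamps) {
    if (remoteCounts[timestamp] !== nbRemotes) {
      // missing on at least one remote: queue it for transfer
      chain.push(localByTimestamp.get(timestamp))
    } else {
      // fully mirrored: everything before it has already been transferred
      chain = []
    }
  }
  return chain
}
```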


@@ -0,0 +1,257 @@
'use strict'
const assert = require('assert')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap } = require('@xen-orchestra/async-map')
const { decorateMethodsWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { Abstract } = require('./_Abstract.js')
class AbstractXapiVmBackupRunner extends Abstract {
constructor({
config,
getSnapshotNameLabel,
healthCheckSr,
job,
remoteAdapters,
remotes,
schedule,
settings,
srs,
throttleStream,
vm,
}) {
super()
if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
// don't match replicated VMs created by this very job otherwise they
// will be replicated again and again
throw new Error('cannot backup a VM created by this very job')
}
this.config = config
this.job = job
this.remoteAdapters = remoteAdapters
this.scheduleId = schedule.id
this.timestamp = undefined
// VM currently backed up
const tags = (this._tags = vm.tags)
// VM (snapshot) that is really exported
this._exportedVm = undefined
this._vm = vm
this._fullVdisRequired = undefined
this._getSnapshotNameLabel = getSnapshotNameLabel
this._isIncremental = job.mode === 'delta'
this._healthCheckSr = healthCheckSr
this._jobId = job.id
this._jobSnapshots = undefined
this._throttleStream = throttleStream
this._xapi = vm.$xapi
// Base VM for the export
this._baseVm = undefined
// Settings for this specific run (job, schedule, VM)
if (tags.includes('xo-memory-backup')) {
settings.checkpointSnapshot = true
}
if (tags.includes('xo-offline-backup')) {
settings.offlineSnapshot = true
}
this._settings = settings
// Create writers
{
const writers = new Set()
this._writers = writers
const [BackupWriter, ReplicationWriter] = this._getWriters()
const allSettings = job.settings
Object.entries(remoteAdapters).forEach(([remoteId, adapter]) => {
const targetSettings = {
...settings,
...allSettings[remoteId],
}
if (targetSettings.exportRetention !== 0) {
writers.add(new BackupWriter({ adapter, config, healthCheckSr, job, vmUuid: vm.uuid, remoteId, settings: targetSettings }))
}
})
srs.forEach(sr => {
const targetSettings = {
...settings,
...allSettings[sr.uuid],
}
if (targetSettings.copyRetention !== 0) {
writers.add(new ReplicationWriter({ config, healthCheckSr, job, vmUuid: vm.uuid, sr, settings: targetSettings }))
}
})
}
}
// ensure the VM itself does not have any backup metadata which would be
// copied on manual snapshots and interfere with the backup jobs
async _cleanMetadata() {
const vm = this._vm
if ('xo:backup:job' in vm.other_config) {
await vm.update_other_config({
'xo:backup:datetime': null,
'xo:backup:deltaChainLength': null,
'xo:backup:exported': null,
'xo:backup:job': null,
'xo:backup:schedule': null,
'xo:backup:vm': null,
})
}
}
async _snapshot() {
const vm = this._vm
const xapi = this._xapi
const settings = this._settings
if (this._mustDoSnapshot()) {
await Task.run({ name: 'snapshot' }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}
const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
ignoreNobakVdis: true,
name_label: this._getSnapshotNameLabel(vm),
unplugVusbs: true,
})
this.timestamp = Date.now()
await xapi.setFieldEntries('VM', snapshotRef, 'other_config', {
'xo:backup:datetime': formatDateTime(this.timestamp),
'xo:backup:job': this._jobId,
'xo:backup:schedule': this.scheduleId,
'xo:backup:vm': vm.uuid,
})
this._exportedVm = await xapi.getRecord('VM', snapshotRef)
return this._exportedVm.uuid
})
} else {
this._exportedVm = vm
this.timestamp = Date.now()
}
}
async _fetchJobSnapshots() {
const jobId = this._jobId
const vmRef = this._vm.$ref
const xapi = this._xapi
const snapshotsRef = await xapi.getField('VM', vmRef, 'snapshots')
const snapshotsOtherConfig = await asyncMap(snapshotsRef, ref => xapi.getField('VM', ref, 'other_config'))
const snapshots = []
snapshotsOtherConfig.forEach((other_config, i) => {
if (other_config['xo:backup:job'] === jobId) {
snapshots.push({ other_config, $ref: snapshotsRef[i] })
}
})
snapshots.sort((a, b) => (a.other_config['xo:backup:datetime'] < b.other_config['xo:backup:datetime'] ? -1 : 1))
this._jobSnapshots = snapshots
}
async _removeUnusedSnapshots() {
const allSettings = this.job.settings
const baseSettings = this._baseSettings
const baseVmRef = this._baseVm?.$ref
const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
const xapi = this._xapi
await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
const settings = {
...baseSettings,
...allSettings[scheduleId],
...allSettings[this._vm.uuid],
}
return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
if ($ref !== baseVmRef) {
return xapi.VM_destroy($ref)
}
})
})
}
async copy() {
throw new Error('Not implemented')
}
_getWriters() {
throw new Error('Not implemented')
}
_mustDoSnapshot() {
throw new Error('Not implemented')
}
async _selectBaseVm() {
throw new Error('Not implemented')
}
async run($defer) {
const settings = this._settings
assert(
!settings.offlineBackup || settings.snapshotRetention === 0,
'offlineBackup is not compatible with snapshotRetention'
)
await this._callWriters(async writer => {
await writer.beforeBackup()
$defer(async () => {
await writer.afterBackup()
})
}, 'writer.beforeBackup()')
await this._fetchJobSnapshots()
await this._selectBaseVm()
await this._cleanMetadata()
await this._removeUnusedSnapshots()
const vm = this._vm
const isRunning = vm.power_state === 'Running'
const startAfter = isRunning && (settings.offlineBackup ? 'backup' : settings.offlineSnapshot && 'snapshot')
if (startAfter) {
await vm.$callAsync('clean_shutdown')
}
try {
await this._snapshot()
if (startAfter === 'snapshot') {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
if (this._writers.size !== 0) {
await this._copy()
}
} finally {
if (startAfter) {
ignoreErrors.call(vm.$callAsync('start', false, false))
}
await this._fetchJobSnapshots()
await this._removeUnusedSnapshots()
}
await this._healthCheck()
}
}
exports.AbstractXapi = AbstractXapiVmBackupRunner
decorateMethodsWith(AbstractXapiVmBackupRunner, {
run: defer,
})
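`run` is decorated with golike-defer's `defer`: callbacks registered through `$defer` (like the `writer.afterBackup()` cleanup above) execute in reverse order once the method settles, even on error. A minimal self-contained sketch of that idea, without the library:

```javascript
'use strict'

// Minimal re-implementation of the golike-defer idea behind `run($defer)`:
// callbacks registered with $defer during the method run in reverse order
// once the method settles, even if it threw, mirroring Go's defer.
function withDefer(fn) {
  return async function (...args) {
    const deferred = []
    const $defer = cb => deferred.push(cb)
    try {
      return await fn.call(this, $defer, ...args)
    } finally {
      for (const cb of deferred.reverse()) {
        await cb()
      }
    }
  }
}

const order = []
const run = withDefer(async $defer => {
  $defer(() => order.push('afterBackup')) // mirrors writer.afterBackup()
  order.push('backup')
})

run().then(() => console.log(order.join(' -> '))) // backup -> afterBackup
```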

View File

@@ -0,0 +1,12 @@
'use strict'
const { mapValues } = require('lodash')
const { forkStreamUnpipe } = require('../_forkStreamUnpipe')
exports.forkDeltaExport = function forkDeltaExport(deltaExport) {
return Object.create(deltaExport, {
streams: {
value: mapValues(deltaExport.streams, forkStreamUnpipe),
},
})
}
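`forkDeltaExport` relies on prototype delegation: the fork shares every property of the original export through its prototype, while `streams` is shadowed with forked copies so each consumer gets its own streams. A runnable sketch with plain objects standing in for streams (the `forked` flag is purely illustrative; the real code uses lodash's `mapValues` and `forkStreamUnpipe`):

```javascript
'use strict'

// hypothetical mapValues stand-in (lodash's mapValues in the real code)
const mapValues = (obj, fn) => Object.fromEntries(Object.entries(obj).map(([k, v]) => [k, fn(v)]))

const deltaExport = {
  vm: { uuid: 'b61a5c92' }, // shared metadata (illustrative values)
  streams: { 'xo.vhd': { id: 1 } },
}

// same shape as forkDeltaExport above: the fork delegates every property to
// the original via its prototype, while `streams` is an own property holding
// forked copies, so each writer consumes its own stream objects
function forkDeltaExport(deltaExport) {
  return Object.create(deltaExport, {
    streams: {
      value: mapValues(deltaExport.streams, s => ({ ...s, forked: true })),
    },
  })
}

const fork = forkDeltaExport(deltaExport)
console.log(fork.vm === deltaExport.vm) // true: delegated, not copied
console.log(fork.streams['xo.vhd'].forked) // true: own, forked copy
```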

View File

@@ -1,13 +1,13 @@
'use strict'
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { MixinRemoteWriter } = require('./_MixinRemoteWriter.js')
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(AbstractFullWriter) {
exports.FullRemoteWriter = class FullRemoteWriter extends MixinRemoteWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -26,15 +26,17 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
)
}
async _run({ timestamp, sizeContainer, stream }) {
const backup = this._backup
async _run({ timestamp, sizeContainer, stream, vm, vmSnapshot }) {
const settings = this._settings
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const adapter = this._adapter
// TODO: clean VM backup directory
let metadata = await this._isAlreadyTransferred(timestamp)
if (metadata !== undefined) {
// TODO: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const oldBackups = getOldEntries(
settings.exportRetention - 1,
@@ -47,14 +49,14 @@ exports.FullBackupWriter = class FullBackupWriter extends MixinBackupWriter(Abst
const dataBasename = basename + '.xva'
const dataFilename = this._vmBackupDir + '/' + dataBasename
const metadata = {
metadata = {
jobId: job.id,
mode: job.mode,
scheduleId,
timestamp,
version: '2.0.0',
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
xva: './' + dataBasename,
}
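`getOldEntries(settings.exportRetention - 1, …)` selects the backups to delete. A minimal sketch of the contract assumed here (the real helper lives in `_getOldEntries.js`): given a retention count and entries sorted oldest first, return everything outside the retention window.

```javascript
'use strict'

// Sketch of the assumed getOldEntries contract: keep the newest `retention`
// entries, return the older ones (candidates for deletion).
function getOldEntries(retention, entries) {
  return retention > 0 ? entries.slice(0, -retention) : [...entries]
}

const backups = ['20230101', '20230201', '20230301', '20230401'] // oldest first
console.log(getOldEntries(2, backups)) // the two oldest entries
console.log(getOldEntries(0, backups).length) // 4: nothing is retained
```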

View File

@@ -4,15 +4,15 @@ const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { AbstractFullWriter } = require('./_AbstractFullWriter.js')
const { MixinReplicationWriter } = require('./_MixinReplicationWriter.js')
const { MixinXapiWriter } = require('./_MixinXapiWriter.js')
const { listReplicatedVms } = require('./_listReplicatedVms.js')
exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplicationWriter(AbstractFullWriter) {
exports.FullXapiWriter = class FullXapiWriter extends MixinXapiWriter(AbstractFullWriter) {
constructor(props) {
super(props)
@@ -32,10 +32,11 @@ exports.FullReplicationWriter = class FullReplicationWriter extends MixinReplica
)
}
async _run({ timestamp, sizeContainer, stream }) {
async _run({ timestamp, sizeContainer, stream, vm }) {
const sr = this._sr
const settings = this._settings
const { job, scheduleId, vm } = this._backup
const job = this._job
const scheduleId = this.scheduleId
const { uuid: srUuid, $xapi: xapi } = sr

View File

@@ -11,25 +11,24 @@ const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { dirname } = require('path')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { Task } = require('../../Task.js')
const { MixinBackupWriter } = require('./_MixinBackupWriter.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { MixinRemoteWriter } = require('./_MixinRemoteWriter.js')
const { AbstractIncrementalWriter } = require('./_AbstractIncrementalWriter.js')
const { checkVhd } = require('./_checkVhd.js')
const { packUuid } = require('./_packUuid.js')
const { Disposable } = require('promise-toolbox')
const { warn } = createLogger('xo:backups:DeltaBackupWriter')
class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi) {
const { handler } = this._adapter
const backup = this._backup
const adapter = this._adapter
const vdisDir = `${this._vmBackupDir}/vdis/${backup.job.id}`
const vdisDir = `${this._vmBackupDir}/vdis/${this._job.id}`
await asyncMap(baseUuidToSrcVdi, async ([baseUuid, srcVdi]) => {
let found = false
@@ -91,11 +90,12 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
async _prepare() {
const adapter = this._adapter
const settings = this._settings
const { scheduleId, vm } = this._backup
const scheduleId = this._scheduleId
const vmUuid = this._vmUuid
const oldEntries = getOldEntries(
settings.exportRetention - 1,
await adapter.listVmBackups(vm.uuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
await adapter.listVmBackups(vmUuid, _ => _.mode === 'delta' && _.scheduleId === scheduleId)
)
this._oldEntries = oldEntries
@@ -134,16 +134,19 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}
}
async _transfer($defer, { timestamp, deltaExport }) {
async _transfer($defer, { differentialVhds, timestamp, deltaExport, vm, vmSnapshot }) {
const adapter = this._adapter
const backup = this._backup
const { job, scheduleId, vm } = backup
const job = this._job
const scheduleId = this._scheduleId
const jobId = job.id
const handler = adapter.handler
// TODO: clean VM backup directory
let metadataContent = await this._isAlreadyTransferred(timestamp)
if (metadataContent !== undefined) {
// TODO: should skip the backup while taking care not to stall the forked stream
Task.info('This backup has already been transferred')
}
const basename = formatFilenameDate(timestamp)
const vhds = mapValues(
@@ -158,7 +161,7 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
}/${adapter.getVhdFileName(basename)}`
)
const metadataContent = {
metadataContent = {
jobId,
mode: job.mode,
scheduleId,
@@ -169,16 +172,15 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
vifs: deltaExport.vifs,
vhds,
vm,
vmSnapshot: this._backup.exportedVm,
vmSnapshot,
}
const { size } = await Task.run({ name: 'transfer' }, async () => {
let transferSize = 0
await Promise.all(
map(deltaExport.vdis, async (vdi, id) => {
const path = `${this._vmBackupDir}/${vhds[id]}`
const isDelta = vdi.other_config['xo:base_delta'] !== undefined
const isDelta = differentialVhds[`${id}.vhd`]
let parentPath
if (isDelta) {
const vdiDir = dirname(path)
@@ -191,7 +193,11 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
.sort()
.pop()
assert.notStrictEqual(parentPath, undefined, `missing parent of ${id}`)
assert.notStrictEqual(
parentPath,
undefined,
`missing parent of ${id} in ${dirname(path)}, looking for ${vdi.other_config['xo:base_delta']}`
)
parentPath = parentPath.slice(1) // remove leading slash
@@ -204,7 +210,7 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// merges and chainings
checksum: false,
validator: tmpPath => checkVhd(handler, tmpPath),
writeBlockConcurrency: this._backup.config.writeBlockConcurrency,
writeBlockConcurrency: this._config.writeBlockConcurrency,
})
if (isDelta) {
@@ -227,6 +233,6 @@ class DeltaBackupWriter extends MixinBackupWriter(AbstractDeltaWriter) {
// TODO: run cleanup?
}
}
exports.DeltaBackupWriter = decorateClass(DeltaBackupWriter, {
exports.IncrementalRemoteWriter = decorateClass(IncrementalRemoteWriter, {
_transfer: defer,
})

View File

@@ -4,19 +4,19 @@ const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { formatDateTime } = require('@xen-orchestra/xapi')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getOldEntries } = require('../_getOldEntries.js')
const { importDeltaVm, TAG_COPY_SRC } = require('../_deltaVm.js')
const { Task } = require('../Task.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getOldEntries } = require('../../_getOldEntries.js')
const { importIncrementalVm, TAG_COPY_SRC } = require('../../_incrementalVm.js')
const { Task } = require('../../Task.js')
const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { MixinReplicationWriter } = require('./_MixinReplicationWriter.js')
const { AbstractIncrementalWriter } = require('./_AbstractIncrementalWriter.js')
const { MixinXapiWriter } = require('./_MixinXapiWriter.js')
const { listReplicatedVms } = require('./_listReplicatedVms.js')
exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinReplicationWriter(AbstractDeltaWriter) {
exports.IncrementalXapiWriter = class IncrementalXapiWriter extends MixinXapiWriter(AbstractIncrementalWriter) {
async checkBaseVdis(baseUuidToSrcVdi, baseVm) {
const sr = this._sr
const replicatedVm = listReplicatedVms(sr.$xapi, this._backup.job.id, sr.uuid, this._backup.vm.uuid).find(
const replicatedVm = listReplicatedVms(sr.$xapi, this._job.id, sr.uuid, this._vmUuid).find(
vm => vm.other_config[TAG_COPY_SRC] === baseVm.uuid
)
if (replicatedVm === undefined) {
@@ -49,8 +49,10 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
type: 'SR',
},
})
const hasHealthCheckSr = this._healthCheckSr !== undefined
this.transfer = task.wrapFn(this.transfer)
this.cleanup = task.wrapFn(this.cleanup, true)
this.cleanup = task.wrapFn(this.cleanup, !hasHealthCheckSr)
this.healthCheck = task.wrapFn(this.healthCheck, hasHealthCheckSr)
return task.run(() => this._prepare())
}
@@ -58,12 +60,13 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
async _prepare() {
const settings = this._settings
const { uuid: srUuid, $xapi: xapi } = this._sr
const { scheduleId, vm } = this._backup
const vmUuid = this._vmUuid
const scheduleId = this._scheduleId
// delete previous interrupted copies
ignoreErrors.call(asyncMapSettled(listReplicatedVms(xapi, scheduleId, undefined, vm.uuid), vm => vm.$destroy))
ignoreErrors.call(asyncMapSettled(listReplicatedVms(xapi, scheduleId, undefined, vmUuid), vm => vm.$destroy))
this._oldEntries = getOldEntries(settings.copyRetention - 1, listReplicatedVms(xapi, scheduleId, srUuid, vm.uuid))
this._oldEntries = getOldEntries(settings.copyRetention - 1, listReplicatedVms(xapi, scheduleId, srUuid, vmUuid))
if (settings.deleteFirst) {
await this._deleteOldEntries()
@@ -80,16 +83,17 @@ exports.DeltaReplicationWriter = class DeltaReplicationWriter extends MixinRepli
return asyncMapSettled(this._oldEntries, vm => vm.$destroy())
}
async _transfer({ timestamp, deltaExport, sizeContainers }) {
async _transfer({ timestamp, deltaExport, sizeContainers, vm }) {
const { _warmMigration } = this._settings
const sr = this._sr
const { job, scheduleId, vm } = this._backup
const job = this._job
const scheduleId = this._scheduleId
const { uuid: srUuid, $xapi: xapi } = sr
let targetVmRef
await Task.run({ name: 'transfer' }, async () => {
targetVmRef = await importDeltaVm(
targetVmRef = await importIncrementalVm(
{
__proto__: deltaExport,
vm: {

View File

@@ -3,9 +3,9 @@
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractFullWriter = class AbstractFullWriter extends AbstractWriter {
async run({ timestamp, sizeContainer, stream }) {
async run({ timestamp, sizeContainer, stream, vm, vmSnapshot }) {
try {
return await this._run({ timestamp, sizeContainer, stream })
return await this._run({ timestamp, sizeContainer, stream, vm, vmSnapshot })
} finally {
// ensure stream is properly closed
stream.destroy()

View File

@@ -2,7 +2,7 @@
const { AbstractWriter } = require('./_AbstractWriter.js')
exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
exports.AbstractIncrementalWriter = class AbstractIncrementalWriter extends AbstractWriter {
checkBaseVdis(baseUuidToSrcVdi, baseVm) {
throw new Error('Not implemented')
}
@@ -15,9 +15,9 @@ exports.AbstractDeltaWriter = class AbstractDeltaWriter extends AbstractWriter {
throw new Error('Not implemented')
}
async transfer({ timestamp, deltaExport, sizeContainers }) {
async transfer({ deltaExport, ...other }) {
try {
return await this._transfer({ timestamp, deltaExport, sizeContainers })
return await this._transfer({ deltaExport, ...other })
} finally {
// ensure all streams are properly closed
for (const stream of Object.values(deltaExport.streams)) {

View File

@@ -0,0 +1,31 @@
'use strict'
const { formatFilenameDate } = require('../../_filenameDate')
const { getVmBackupDir } = require('../../_getVmBackupDir')
exports.AbstractWriter = class AbstractWriter {
constructor({ config, healthCheckSr, job, vmUuid, scheduleId, settings }) {
this._config = config
this._healthCheckSr = healthCheckSr
this._job = job
this._scheduleId = scheduleId
this._settings = settings
this._vmUuid = vmUuid
}
beforeBackup() {}
afterBackup() {}
healthCheck(sr) {}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
const actualMetadata = JSON.parse(await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`))
return actualMetadata
} catch (error) {
// no metadata file: nothing has been transferred for this timestamp yet
}
}
}
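The writer classes are assembled with the subclass-factory mixin pattern seen throughout this change (e.g. `MixinRemoteWriter(AbstractFullWriter)`). A self-contained sketch of that pattern, with an illustrative `MixinLocking` behavior:

```javascript
'use strict'

// Sketch of the subclass-factory mixin pattern used by the writers above
// (e.g. MixinRemoteWriter(AbstractFullWriter)): a mixin is a function that
// takes a base class and returns a subclass, so behaviors can be stacked.
class BaseWriter {
  beforeBackup() {
    this.log = []
  }
}

const MixinLocking = (BaseClass = Object) =>
  class extends BaseClass {
    beforeBackup() {
      super.beforeBackup()
      this.log.push('lock taken') // illustrative behavior added by the mixin
    }
  }

class FullWriter extends MixinLocking(BaseWriter) {}

const w = new FullWriter()
w.beforeBackup()
console.log(w.log.join(', ')) // lock taken
```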

View File

@@ -4,26 +4,26 @@ const { createLogger } = require('@xen-orchestra/log')
const { join } = require('path')
const assert = require('assert')
const { formatFilenameDate } = require('../_filenameDate.js')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../ImportVmBackup.js')
const { Task } = require('../Task.js')
const MergeWorker = require('../merge-worker/index.js')
const { formatFilenameDate } = require('../../_filenameDate.js')
const { getVmBackupDir } = require('../../_getVmBackupDir.js')
const { HealthCheckVmBackup } = require('../../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../../ImportVmBackup.js')
const { Task } = require('../../Task.js')
const MergeWorker = require('../../merge-worker/index.js')
const { info, warn } = createLogger('xo:backups:MixinBackupWriter')
exports.MixinBackupWriter = (BaseClass = Object) =>
class MixinBackupWriter extends BaseClass {
exports.MixinRemoteWriter = (BaseClass = Object) =>
class MixinRemoteWriter extends BaseClass {
#lock
constructor({ remoteId, ...rest }) {
constructor({ remoteId, adapter, ...rest }) {
super(rest)
this._adapter = rest.backup.remoteAdapters[remoteId]
this._adapter = adapter
this._remoteId = remoteId
this._vmBackupDir = getVmBackupDir(this._backup.vm.uuid)
this._vmBackupDir = getVmBackupDir(rest.vmUuid)
}
async _cleanVm(options) {
@@ -38,7 +38,7 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
Task.warning(message, data)
},
lock: false,
mergeBlockConcurrency: this._backup.config.mergeBlockConcurrency,
mergeBlockConcurrency: this._config.mergeBlockConcurrency,
})
})
} catch (error) {
@@ -55,10 +55,10 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
async afterBackup() {
const { disableMergeWorker } = this._backup.config
const { disableMergeWorker } = this._config
// merge worker only compatible with local remotes
const { handler } = this._adapter
const willMergeInWorker = !disableMergeWorker && typeof handler._getRealPath === 'function'
const willMergeInWorker = !disableMergeWorker && typeof handler.getRealPath === 'function'
const { merge } = await this._cleanVm({ remove: true, merge: !willMergeInWorker })
await this.#lock.dispose()
@@ -71,16 +71,18 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
Math.random().toString(36).slice(2)
await handler.outputFile(taskFile, this._backup.vm.uuid)
const remotePath = handler._getRealPath()
const remotePath = handler.getRealPath()
await MergeWorker.run(remotePath)
}
}
healthCheck(sr) {
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notStrictEqual(
this._metadataFileName,
undefined,
'Metadata file name should be defined before making a healthcheck'
'Metadata file name should be defined before making a health check'
)
return Task.run(
{
@@ -109,4 +111,16 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
}
)
}
async _isAlreadyTransferred(timestamp) {
const vmUuid = this._vmUuid
const adapter = this._adapter
const backupDir = getVmBackupDir(vmUuid)
try {
const actualMetadata = JSON.parse(
await adapter._handler.readFile(`${backupDir}/${formatFilenameDate(timestamp)}.json`)
)
return actualMetadata
} catch (error) {
// no metadata file: nothing has been transferred for this timestamp yet
}
}
}

View File

@@ -1,26 +1,22 @@
'use strict'
const { Task } = require('../Task')
const assert = require('node:assert/strict')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup')
const { extractOpaqueRef } = require('@xen-orchestra/xapi')
function extractOpaqueRef(str) {
const OPAQUE_REF_RE = /OpaqueRef:[0-9a-z-]+/
const matches = OPAQUE_REF_RE.exec(str)
if (!matches) {
throw new Error('no opaque ref found')
}
return matches[0]
}
exports.MixinReplicationWriter = (BaseClass = Object) =>
class MixinReplicationWriter extends BaseClass {
const { Task } = require('../../Task')
const assert = require('node:assert/strict')
const { HealthCheckVmBackup } = require('../../HealthCheckVmBackup')
exports.MixinXapiWriter = (BaseClass = Object) =>
class MixinXapiWriter extends BaseClass {
constructor({ sr, ...rest }) {
super(rest)
this._sr = sr
}
healthCheck(sr) {
healthCheck() {
const sr = this._healthCheckSr
assert.notStrictEqual(sr, undefined, 'SR should be defined before making a health check')
assert.notEqual(this._targetVmRef, undefined, 'A VM should have been transferred to be health checked')
// copy VM
return Task.run(

View File

@@ -228,7 +228,7 @@ Settings are described in [`@xen-orchestra/backups/Backup.js`](https://github.com
- `prepare({ isFull })`
- `transfer({ timestamp, deltaExport, sizeContainers })`
- `cleanup()`
- `healthCheck(sr)`
- `healthCheck()` // not executed if there is no health check SR or if the tag doesn't match
- **Full**
- `run({ timestamp, sizeContainer, stream })`
- `afterBackup()`
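A custom writer honoring the hooks listed above could be skeletoned as follows (hypothetical class and placeholder bodies; method names follow the documented interface):

```javascript
'use strict'

// Hypothetical skeleton of an incremental ("delta") writer; bodies are
// placeholders showing where each responsibility lives.
class SketchIncrementalWriter {
  async beforeBackup() {} // take locks, open the target
  async checkBaseVdis(baseUuidToSrcVdi, baseVm) {} // invalidate unusable bases
  async prepare({ isFull }) {} // list old entries, delete first if configured
  async transfer({ timestamp, deltaExport, sizeContainers }) {} // persist streams
  async cleanup() {} // merge or remove data
  async healthCheck() {} // skipped when no health check SR is configured
  async afterBackup() {} // release locks, trigger the merge worker
}

const w = new SketchIncrementalWriter()
const run = async () => {
  await w.beforeBackup()
  await w.prepare({ isFull: false })
  await w.transfer({ timestamp: Date.now(), deltaExport: {}, sizeContainers: {} })
  await w.cleanup()
  await w.afterBackup()
}
run().then(() => console.log('ok'))
```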

View File

@@ -0,0 +1,35 @@
#!/bin/sh
# This script must be executed when the machine starts.
#
# It must run as root to be able to use xenstore-read and xenstore-write
# fail in case of error or undefined variable
set -eu
# stop there if a health check is not in progress
if [ "$(xenstore-read vm-data/xo-backup-health-check 2>&1)" != planned ]
then
exit
fi
# not strictly necessary, but it informs XO that this script has started, which helps diagnose issues
xenstore-write vm-data/xo-backup-health-check running
# put your test here
#
# in this example, the command `sqlite3` is used to validate the health of a database
# and its output is captured and passed to XO via the XenStore in case of error
if output=$(sqlite3 ~/my-database.sqlite3 .table 2>&1)
then
# inform XO everything is ok
xenstore-write vm-data/xo-backup-health-check success
else
# inform XO there is an issue
xenstore-write vm-data/xo-backup-health-check failure
# more info about the issue can be written to `vm-data/health-check-error`
#
# it will be shown in XO
xenstore-write vm-data/xo-backup-health-check-error "$output"
fi
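The script above walks the XenStore key through `planned` → `running` → `success`/`failure`. The transitions can be traced offline with a small stub, where `xs_read`/`xs_write` are hypothetical stand-ins for `xenstore-read`/`xenstore-write` backed by a plain file (no Xen guest needed):

```shell
#!/bin/sh
set -eu
# Offline sketch of the health-check protocol above: xs_read/xs_write are
# hypothetical stand-ins for xenstore-read/xenstore-write, backed by a file,
# so the key transitions (planned -> running -> success) can be traced.
store=$(mktemp)
xs_write() { printf %s "$2" > "$store"; } # the key path ($1) is ignored here
xs_read() { cat "$store"; }

xs_write vm-data/xo-backup-health-check planned     # XO plans the check
if [ "$(xs_read vm-data/xo-backup-health-check)" = planned ]; then
  xs_write vm-data/xo-backup-health-check running   # guest script has started
  xs_write vm-data/xo-backup-health-check success   # the local test passed
fi
xs_read vm-data/xo-backup-health-check              # prints: success
```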

View File

@@ -8,13 +8,13 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.35.0",
"version": "0.38.0",
"engines": {
"node": ">=14.6"
},
"scripts": {
"postversion": "npm publish --access public",
"test": "node--test"
"test-integration": "node--test *.integ.js"
},
"dependencies": {
"@kldzj/stream-throttle": "^1.1.1",
@@ -27,7 +27,7 @@
"@vates/nbd-client": "^1.2.0",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^3.3.4",
"@xen-orchestra/fs": "^4.0.0",
"@xen-orchestra/log": "^0.6.0",
"@xen-orchestra/template": "^0.1.0",
"compare-versions": "^5.0.1",
@@ -42,17 +42,17 @@
"promise-toolbox": "^0.21.0",
"proper-lockfile": "^4.1.2",
"uuid": "^9.0.0",
"vhd-lib": "^4.4.0",
"vhd-lib": "^4.5.0",
"yazl": "^2.5.1"
},
"devDependencies": {
"rimraf": "^4.1.1",
"rimraf": "^5.0.1",
"sinon": "^15.0.1",
"test": "^3.2.1",
"tmp": "^0.2.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^2.2.0"
"@xen-orchestra/xapi": "^2.2.1"
},
"license": "AGPL-3.0-or-later",
"author": {

View File

@@ -1,14 +0,0 @@
'use strict'
exports.AbstractWriter = class AbstractWriter {
constructor({ backup, settings }) {
this._backup = backup
this._settings = settings
}
beforeBackup() {}
afterBackup() {}
healthCheck(sr) {}
}

View File

@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^1.3.0"
"xen-api": "^1.3.1"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "3.3.4",
"version": "4.0.0",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",
@@ -53,7 +53,9 @@
"@babel/preset-env": "^7.8.0",
"cross-env": "^7.0.2",
"dotenv": "^16.0.0",
"rimraf": "^4.1.1",
"rimraf": "^5.0.1",
"sinon": "^15.0.4",
"test": "^3.3.0",
"tmp": "^0.2.1"
},
"scripts": {
@@ -63,7 +65,9 @@
"prebuild": "yarn run clean",
"predev": "yarn run clean",
"prepublishOnly": "yarn run build",
"postversion": "npm publish"
"pretest": "yarn run build",
"postversion": "npm publish",
"test": "node--test ./dist/"
},
"author": {
"name": "Vates SAS",

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'readable-stream'
import copyStreamToBuffer from './_copyStreamToBuffer.js'
@@ -16,6 +17,6 @@ describe('copyStreamToBuffer', () => {
await copyStreamToBuffer(stream, buffer)
expect(buffer.toString()).toBe('hel')
assert.equal(buffer.toString(), 'hel')
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'readable-stream'
import createBufferFromStream from './_createBufferFromStream.js'
@@ -14,6 +15,6 @@ describe('createBufferFromStream', () => {
const buffer = await createBufferFromStream(stream)
expect(buffer.toString()).toBe('hello')
assert.equal(buffer.toString(), 'hello')
})
})

View File

@@ -1,4 +1,6 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import { Readable } from 'node:stream'
import { _getEncryptor } from './_encryptor'
import crypto from 'crypto'
@@ -25,13 +27,13 @@ algorithms.forEach(algorithm => {
it('handle buffer', () => {
const encrypted = encryptor.encryptData(buffer)
if (algorithm !== 'none') {
expect(encrypted.equals(buffer)).toEqual(false) // encrypted should be different
assert.equal(encrypted.equals(buffer), false) // encrypted should be different
// ivlength, auth tag, padding
expect(encrypted.length).not.toEqual(buffer.length)
assert.notEqual(encrypted.length, buffer.length)
}
const decrypted = encryptor.decryptData(encrypted)
expect(decrypted.equals(buffer)).toEqual(true)
assert.equal(decrypted.equals(buffer), true)
})
it('handle stream', async () => {
@@ -39,12 +41,12 @@ algorithms.forEach(algorithm => {
stream.length = buffer.length
const encrypted = encryptor.encryptStream(stream)
if (algorithm !== 'none') {
expect(encrypted.length).toEqual(undefined)
assert.equal(encrypted.length, undefined)
}
const decrypted = encryptor.decryptStream(encrypted)
const decryptedBuffer = await streamToBuffer(decrypted)
expect(decryptedBuffer.equals(buffer)).toEqual(true)
assert.equal(decryptedBuffer.equals(buffer), true)
})
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { describe, it } from 'test'
import { strict as assert } from 'assert'
import guessAwsRegion from './_guessAwsRegion.js'
@@ -6,12 +7,12 @@ describe('guessAwsRegion', () => {
it('should return region from AWS URL', async () => {
const region = guessAwsRegion('s3.test-region.amazonaws.com')
expect(region).toBe('test-region')
assert.equal(region, 'test-region')
})
it('should return default region if none is found is AWS URL', async () => {
const region = guessAwsRegion('s3.amazonaws.com')
expect(region).toBe('us-east-1')
assert.equal(region, 'us-east-1')
})
})

View File

@@ -9,28 +9,32 @@ import LocalHandler from './local'
const sudoExeca = (command, args, opts) => execa('sudo', [command, ...args], opts)
export default class MountHandler extends LocalHandler {
#execa
#keeper
#params
#realPath
constructor(remote, { mountsDir = join(tmpdir(), 'xo-fs-mounts'), useSudo = false, ...opts } = {}, params) {
super(remote, opts)
this._execa = useSudo ? sudoExeca : execa
this._keeper = undefined
this._params = {
this.#execa = useSudo ? sudoExeca : execa
this.#params = {
...params,
options: [params.options, remote.options ?? params.defaultOptions].filter(_ => _ !== undefined).join(','),
}
this._realPath = join(mountsDir, remote.id || Math.random().toString(36).slice(2))
this.#realPath = join(mountsDir, remote.id || Math.random().toString(36).slice(2))
}
async _forget() {
const keeper = this._keeper
const keeper = this.#keeper
if (keeper === undefined) {
return
}
this._keeper = undefined
this.#keeper = undefined
await fs.close(keeper)
await ignoreErrors.call(
this._execa('umount', [this._getRealPath()], {
this.#execa('umount', [this.getRealPath()], {
env: {
LANG: 'C',
},
@@ -38,30 +42,30 @@ export default class MountHandler extends LocalHandler {
)
}
_getRealPath() {
return this._realPath
getRealPath() {
return this.#realPath
}
async _sync() {
// in case of multiple `sync`s, ensure we properly close previous keeper
{
const keeper = this._keeper
const keeper = this.#keeper
if (keeper !== undefined) {
this._keeper = undefined
this.#keeper = undefined
ignoreErrors.call(fs.close(keeper))
}
}
const realPath = this._getRealPath()
const realPath = this.getRealPath()
await fs.ensureDir(realPath)
try {
const { type, device, options, env } = this._params
const { type, device, options, env } = this.#params
// Linux mount is more flexible in which order the mount arguments appear.
// But FreeBSD requires this order of the arguments.
await this._execa('mount', ['-o', options, '-t', type, device, realPath], {
await this.#execa('mount', ['-o', options, '-t', type, device, realPath], {
env: {
LANG: 'C',
...env,
@@ -71,7 +75,7 @@ export default class MountHandler extends LocalHandler {
try {
// the failure may mean it's already mounted, use `findmnt` to check
// that's the case
await this._execa('findmnt', [realPath], {
await this.#execa('findmnt', [realPath], {
stdio: 'ignore',
})
} catch (_) {
@@ -82,7 +86,7 @@ export default class MountHandler extends LocalHandler {
// keep an open file on the mount to prevent it from being unmounted if used
// by another handler/process
const keeperPath = `${realPath}/.keeper_${Math.random().toString(36).slice(2)}`
this._keeper = await fs.open(keeperPath, 'w')
this.#keeper = await fs.open(keeperPath, 'w')
ignoreErrors.call(fs.unlink(keeperPath))
}
}

View File

@@ -37,8 +37,13 @@ const ignoreEnoent = error => {
const noop = Function.prototype
class PrefixWrapper {
#prefix
constructor(handler, prefix) {
this._prefix = prefix
this.#prefix = prefix
// cannot be a private field because used by methods dynamically added
// outside of the class
this._handler = handler
}
@@ -50,7 +55,7 @@ class PrefixWrapper {
async list(dir, opts) {
const entries = await this._handler.list(this._resolve(dir), opts)
if (opts != null && opts.prependDir) {
const n = this._prefix.length
const n = this.#prefix.length
entries.forEach((entry, i, entries) => {
entries[i] = entry.slice(n)
})
@@ -62,19 +67,21 @@ class PrefixWrapper {
return this._handler.rename(this._resolve(oldPath), this._resolve(newPath))
}
// cannot be a private method because used by methods dynamically added
// outside of the class
_resolve(path) {
return this._prefix + normalizePath(path)
return this.#prefix + normalizePath(path)
}
}
export default class RemoteHandlerAbstract {
#encryptor
#rawEncryptor
get _encryptor() {
if (this.#encryptor === undefined) {
get #encryptor() {
if (this.#rawEncryptor === undefined) {
throw new Error(`Can't access to encryptor before remote synchronization`)
}
return this.#encryptor
return this.#rawEncryptor
}
constructor(remote, options = {}) {
@@ -111,6 +118,10 @@ export default class RemoteHandlerAbstract {
}
// Public members
//
// Should not be called directly because:
// - some concurrency limits may be applied which may lead to deadlocks
// - some preprocessing may be applied on parameters that should not be done multiple times (e.g. prefixing paths)
get type() {
throw new Error('Not implemented')
@@ -121,10 +132,6 @@ export default class RemoteHandlerAbstract {
return prefix === '/' ? this : new PrefixWrapper(this, prefix)
}
async closeFile(fd) {
await this.__closeFile(fd)
}
async createReadStream(file, { checksum = false, ignoreMissingChecksum = false, ...options } = {}) {
if (options.end !== undefined || options.start !== undefined) {
assert.strictEqual(this.isEncrypted, false, `Can't read part of a file when encryption is active ${file}`)
@@ -157,7 +164,7 @@ export default class RemoteHandlerAbstract {
}
if (this.isEncrypted) {
stream = this._encryptor.decryptStream(stream)
stream = this.#encryptor.decryptStream(stream)
} else {
// try to add the length prop if missing and not a range stream
if (stream.length === undefined && options.end === undefined && options.start === undefined) {
@@ -186,7 +193,7 @@ export default class RemoteHandlerAbstract {
path = normalizePath(path)
let checksumStream
input = this._encryptor.encryptStream(input)
input = this.#encryptor.encryptStream(input)
if (checksum) {
checksumStream = createChecksumStream()
pipeline(input, checksumStream, noop)
@@ -224,10 +231,10 @@ export default class RemoteHandlerAbstract {
assert.strictEqual(this.isEncrypted, false, `Can't compute size of an encrypted file ${file}`)
const size = await timeout.call(this._getSize(typeof file === 'string' ? normalizePath(file) : file), this._timeout)
return size - this._encryptor.ivLength
return size - this.#encryptor.ivLength
}
async list(dir, { filter, ignoreMissing = false, prependDir = false } = {}) {
async __list(dir, { filter, ignoreMissing = false, prependDir = false } = {}) {
try {
const virtualDir = normalizePath(dir)
dir = normalizePath(dir)
@@ -257,20 +264,12 @@ export default class RemoteHandlerAbstract {
return { dispose: await this._lock(path) }
}
async mkdir(dir, { mode } = {}) {
await this.__mkdir(normalizePath(dir), { mode })
}
async mktree(dir, { mode } = {}) {
await this._mktree(normalizePath(dir), { mode })
}
openFile(path, flags) {
return this.__openFile(path, flags)
}
async outputFile(file, data, { dirMode, flags = 'wx' } = {}) {
const encryptedData = this._encryptor.encryptData(data)
const encryptedData = this.#encryptor.encryptData(data)
await this._outputFile(normalizePath(file), encryptedData, { dirMode, flags })
}
@@ -279,9 +278,9 @@ export default class RemoteHandlerAbstract {
return this._read(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
}
async readFile(file, { flags = 'r' } = {}) {
async __readFile(file, { flags = 'r' } = {}) {
const data = await this._readFile(normalizePath(file), { flags })
return this._encryptor.decryptData(data)
return this.#encryptor.decryptData(data)
}
async #rename(oldPath, newPath, { checksum }, createTree = true) {
@@ -301,11 +300,11 @@ export default class RemoteHandlerAbstract {
}
}
rename(oldPath, newPath, { checksum = false } = {}) {
__rename(oldPath, newPath, { checksum = false } = {}) {
return this.#rename(normalizePath(oldPath), normalizePath(newPath), { checksum })
}
async copy(oldPath, newPath, { checksum = false } = {}) {
async __copy(oldPath, newPath, { checksum = false } = {}) {
oldPath = normalizePath(oldPath)
newPath = normalizePath(newPath)
@@ -332,33 +331,33 @@ export default class RemoteHandlerAbstract {
async sync() {
await this._sync()
try {
await this._checkMetadata()
await this.#checkMetadata()
} catch (error) {
await this._forget()
throw error
}
}
async _canWriteMetadata() {
const list = await this.list('/', {
async #canWriteMetadata() {
const list = await this.__list('/', {
filter: e => !e.startsWith('.') && e !== ENCRYPTION_DESC_FILENAME && e !== ENCRYPTION_METADATA_FILENAME,
})
return list.length === 0
}
async _createMetadata() {
async #createMetadata() {
const encryptionAlgorithm = this._remote.encryptionKey === undefined ? 'none' : DEFAULT_ENCRYPTION_ALGORITHM
this.#encryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
this.#rawEncryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
await Promise.all([
this._writeFile(normalizePath(ENCRYPTION_DESC_FILENAME), JSON.stringify({ algorithm: encryptionAlgorithm }), {
flags: 'w',
}), // not encrypted
this.writeFile(ENCRYPTION_METADATA_FILENAME, `{"random":"${randomUUID()}"}`, { flags: 'w' }), // encrypted
this.__writeFile(ENCRYPTION_METADATA_FILENAME, `{"random":"${randomUUID()}"}`, { flags: 'w' }), // encrypted
])
}
async _checkMetadata() {
async #checkMetadata() {
let encryptionAlgorithm = 'none'
let data
try {
@@ -374,18 +373,18 @@ export default class RemoteHandlerAbstract {
}
try {
this.#encryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
this.#rawEncryptor = _getEncryptor(encryptionAlgorithm, this._remote.encryptionKey)
// this file is encrypted
const data = await this.readFile(ENCRYPTION_METADATA_FILENAME, 'utf-8')
const data = await this.__readFile(ENCRYPTION_METADATA_FILENAME, 'utf-8')
JSON.parse(data)
} catch (error) {
// can be ENOENT, a bad algorithm, or broken JSON (bad key or algorithm)
if (encryptionAlgorithm !== 'none') {
if (await this._canWriteMetadata()) {
if (await this.#canWriteMetadata()) {
// any other error, but on an empty remote => update with the remote settings
info('will update metadata of this remote')
return this._createMetadata()
return this.#createMetadata()
} else {
warn(
`The encryptionKey setting of this remote does not match the key used to create it. You won't be able to read any data from this remote`,
@@ -438,7 +437,7 @@ export default class RemoteHandlerAbstract {
await this._truncate(file, len)
}
async unlink(file, { checksum = true } = {}) {
async __unlink(file, { checksum = true } = {}) {
file = normalizePath(file)
if (checksum) {
@@ -453,8 +452,8 @@ export default class RemoteHandlerAbstract {
await this._write(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
}
async writeFile(file, data, { flags = 'wx' } = {}) {
const encryptedData = this._encryptor.encryptData(data)
async __writeFile(file, data, { flags = 'wx' } = {}) {
const encryptedData = this.#encryptor.encryptData(data)
await this._writeFile(normalizePath(file), encryptedData, { flags })
}
@@ -465,6 +464,8 @@ export default class RemoteHandlerAbstract {
}
async __mkdir(dir, { mode } = {}) {
dir = normalizePath(dir)
try {
await this._mkdir(dir, { mode })
} catch (error) {
@@ -586,9 +587,9 @@ export default class RemoteHandlerAbstract {
if (validator !== undefined) {
await validator.call(this, tmpPath)
}
await this.rename(tmpPath, path)
await this.__rename(tmpPath, path)
} catch (error) {
await this.unlink(tmpPath)
await this.__unlink(tmpPath)
throw error
}
}
@@ -665,7 +666,22 @@ export default class RemoteHandlerAbstract {
}
get isEncrypted() {
return this._encryptor.id !== 'NULL_ENCRYPTOR'
return this.#encryptor.id !== 'NULL_ENCRYPTOR'
}
}
// from implementation methods, whose names start with `__`, create public
// accessors on which external behaviors can be added (e.g. concurrency limits, path rewriting)
{
const proto = RemoteHandlerAbstract.prototype
for (const method of Object.getOwnPropertyNames(proto)) {
if (method.startsWith('__')) {
const publicName = method.slice(2)
assert(!Object.hasOwn(proto, publicName))
Object.defineProperty(proto, publicName, Object.getOwnPropertyDescriptor(proto, method))
}
}
}

View File

@@ -1,11 +1,13 @@
/* eslint-env jest */
import { after, beforeEach, describe, it } from 'test'
import { strict as assert } from 'assert'
import sinon from 'sinon'
import { DEFAULT_ENCRYPTION_ALGORITHM, _getEncryptor } from './_encryptor'
import { Disposable, pFromCallback, TimeoutError } from 'promise-toolbox'
import { getSyncedHandler } from '.'
import { rimraf } from 'rimraf'
import AbstractHandler from './abstract'
import fs from 'fs-extra'
import rimraf from 'rimraf'
import tmp from 'tmp'
const TIMEOUT = 10e3
@@ -24,7 +26,7 @@ class TestHandler extends AbstractHandler {
const noop = Function.prototype
jest.useFakeTimers()
const clock = sinon.useFakeTimers()
describe('closeFile()', () => {
it(`throws in case of timeout`, async () => {
@@ -33,8 +35,8 @@ describe('closeFile()', () => {
})
const promise = testHandler.closeFile({ fd: undefined, path: '' })
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -45,8 +47,8 @@ describe('getInfo()', () => {
})
const promise = testHandler.getInfo()
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -57,8 +59,8 @@ describe('getSize()', () => {
})
const promise = testHandler.getSize('')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -69,8 +71,8 @@ describe('list()', () => {
})
const promise = testHandler.list('.')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -81,8 +83,8 @@ describe('openFile()', () => {
})
const promise = testHandler.openFile('path')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -93,8 +95,8 @@ describe('rename()', () => {
})
const promise = testHandler.rename('oldPath', 'newPath')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -105,8 +107,8 @@ describe('rmdir()', () => {
})
const promise = testHandler.rmdir('dir')
jest.advanceTimersByTime(TIMEOUT)
await expect(promise).rejects.toThrowError(TimeoutError)
clock.tick(TIMEOUT)
await assert.rejects(promise, TimeoutError)
})
})
@@ -115,14 +117,14 @@ describe('encryption', () => {
beforeEach(async () => {
dir = await pFromCallback(cb => tmp.dir(cb))
})
afterAll(async () => {
after(async () => {
await rimraf(dir)
})
it('sync should NOT create metadata if missing (not encrypted)', async () => {
await Disposable.use(getSyncedHandler({ url: `file://${dir}` }), noop)
expect(await fs.readdir(dir)).toEqual([])
assert.deepEqual(await fs.readdir(dir), [])
})
it('sync should create metadata if missing (encrypted)', async () => {
@@ -131,12 +133,12 @@ describe('encryption', () => {
noop
)
expect(await fs.readdir(dir)).toEqual(['encryption.json', 'metadata.json'])
assert.deepEqual(await fs.readdir(dir), ['encryption.json', 'metadata.json'])
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
// encrypted, should not be parsable
expect(async () => JSON.parse(await fs.readFile(`${dir}/metadata.json`))).rejects.toThrowError()
assert.rejects(async () => JSON.parse(await fs.readFile(`${dir}/metadata.json`)))
})
it('sync should not modify existing metadata', async () => {
@@ -146,9 +148,9 @@ describe('encryption', () => {
await Disposable.use(await getSyncedHandler({ url: `file://${dir}` }), noop)
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual('none')
assert.equal(encryption.algorithm, 'none')
const metadata = JSON.parse(await fs.readFile(`${dir}/metadata.json`, 'utf-8'))
expect(metadata.random).toEqual('NOTSORANDOM')
assert.equal(metadata.random, 'NOTSORANDOM')
})
it('should modify metadata if empty', async () => {
@@ -160,11 +162,11 @@ describe('encryption', () => {
noop
)
let encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
await Disposable.use(getSyncedHandler({ url: `file://${dir}` }), noop)
encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual('none')
assert.equal(encryption.algorithm, 'none')
})
it(
@@ -178,9 +180,9 @@ describe('encryption', () => {
const handler = yield getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd91"` })
const encryption = JSON.parse(await fs.readFile(`${dir}/encryption.json`, 'utf-8'))
expect(encryption.algorithm).toEqual(DEFAULT_ENCRYPTION_ALGORITHM)
assert.equal(encryption.algorithm, DEFAULT_ENCRYPTION_ALGORITHM)
const metadata = JSON.parse(await handler.readFile(`./metadata.json`))
expect(metadata.random).toEqual('NOTSORANDOM')
assert.equal(metadata.random, 'NOTSORANDOM')
})
)
@@ -198,9 +200,9 @@ describe('encryption', () => {
// remote is now non-empty: can't modify the key anymore
await fs.writeFile(`${dir}/nonempty.json`, 'content')
await expect(
await assert.rejects(
Disposable.use(getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd10"` }), noop)
).rejects.toThrowError()
)
})
it('sync should fail when changing algorithm', async () => {
@@ -213,8 +215,8 @@ describe('encryption', () => {
// remote is now non-empty: can't modify the key anymore
await fs.writeFile(`${dir}/nonempty.json`, 'content')
await expect(
await assert.rejects(
Disposable.use(getSyncedHandler({ url: `file://${dir}?encryptionKey="73c1838d7d8a6088ca2317fb5f29cd91"` }), noop)
).rejects.toThrowError()
)
})
})

View File

@@ -1,4 +1,5 @@
/* eslint-env jest */
import { after, afterEach, before, beforeEach, describe, it } from 'test'
import { strict as assert } from 'assert'
import 'dotenv/config'
import { forOwn, random } from 'lodash'
@@ -53,11 +54,11 @@ handlers.forEach(url => {
})
}
beforeAll(async () => {
before(async () => {
handler = getHandler({ url }).addPrefix(`xo-fs-tests-${Date.now()}`)
await handler.sync()
})
afterAll(async () => {
after(async () => {
await handler.forget()
handler = undefined
})
@@ -72,67 +73,63 @@ handlers.forEach(url => {
describe('#type', () => {
it('returns the type of the remote', () => {
expect(typeof handler.type).toBe('string')
assert.equal(typeof handler.type, 'string')
})
})
describe('#getInfo()', () => {
let info
beforeAll(async () => {
before(async () => {
info = await handler.getInfo()
})
it('should return an object with info', async () => {
expect(typeof info).toBe('object')
assert.equal(typeof info, 'object')
})
it('should return correct type of attribute', async () => {
if (info.size !== undefined) {
expect(typeof info.size).toBe('number')
assert.equal(typeof info.size, 'number')
}
if (info.used !== undefined) {
expect(typeof info.used).toBe('number')
assert.equal(typeof info.used, 'number')
}
})
})
describe('#getSize()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
before(() => handler.outputFile('file', TEST_DATA))
testWithFileDescriptor('file', 'r', async () => {
expect(await handler.getSize('file')).toEqual(TEST_DATA_LEN)
assert.equal(await handler.getSize('file'), TEST_DATA_LEN)
})
})
describe('#list()', () => {
it(`should list the content of folder`, async () => {
await handler.outputFile('file', TEST_DATA)
await expect(await handler.list('.')).toEqual(['file'])
assert.deepEqual(await handler.list('.'), ['file'])
})
it('can prepend the directory to entries', async () => {
await handler.outputFile('dir/file', '')
expect(await handler.list('dir', { prependDir: true })).toEqual(['/dir/file'])
})
it('can prepend the directory to entries', async () => {
await handler.outputFile('dir/file', '')
expect(await handler.list('dir', { prependDir: true })).toEqual(['/dir/file'])
assert.deepEqual(await handler.list('dir', { prependDir: true }), ['/dir/file'])
})
it('throws ENOENT if no such directory', async () => {
expect((await rejectionOf(handler.list('dir'))).code).toBe('ENOENT')
await handler.rmtree('dir')
assert.equal((await rejectionOf(handler.list('dir'))).code, 'ENOENT')
})
it('returns empty for missing directory', async () => {
expect(await handler.list('dir', { ignoreMissing: true })).toEqual([])
assert.deepEqual(await handler.list('dir', { ignoreMissing: true }), [])
})
})
describe('#mkdir()', () => {
it('creates a directory', async () => {
await handler.mkdir('dir')
await expect(await handler.list('.')).toEqual(['dir'])
assert.deepEqual(await handler.list('.'), ['dir'])
})
it('does not throw on existing directory', async () => {
@@ -143,15 +140,15 @@ handlers.forEach(url => {
it('throws ENOTDIR on existing file', async () => {
await handler.outputFile('file', '')
const error = await rejectionOf(handler.mkdir('file'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
})
describe('#mktree()', () => {
it('creates a tree of directories', async () => {
await handler.mktree('dir/dir')
await expect(await handler.list('.')).toEqual(['dir'])
await expect(await handler.list('dir')).toEqual(['dir'])
assert.deepEqual(await handler.list('.'), ['dir'])
assert.deepEqual(await handler.list('dir'), ['dir'])
})
it('does not throw on existing directory', async () => {
@@ -162,26 +159,27 @@ handlers.forEach(url => {
it('throws ENOTDIR on existing file', async () => {
await handler.outputFile('dir/file', '')
const error = await rejectionOf(handler.mktree('dir/file'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
it('throws ENOTDIR on existing file in path', async () => {
await handler.outputFile('file', '')
const error = await rejectionOf(handler.mktree('file/dir'))
expect(error.code).toBe('ENOTDIR')
assert.equal(error.code, 'ENOTDIR')
})
})
describe('#outputFile()', () => {
it('writes data to a file', async () => {
await handler.outputFile('file', TEST_DATA)
expect(await handler.readFile('file')).toEqual(TEST_DATA)
assert.deepEqual(await handler.readFile('file'), TEST_DATA)
})
it('throws on existing files', async () => {
await handler.unlink('file')
await handler.outputFile('file', '')
const error = await rejectionOf(handler.outputFile('file', ''))
expect(error.code).toBe('EEXIST')
assert.equal(error.code, 'EEXIST')
})
it("shouldn't time out when the parallel execution restriction is respected", async () => {
@@ -192,7 +190,7 @@ handlers.forEach(url => {
})
describe('#read()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
before(() => handler.outputFile('file', TEST_DATA))
const start = random(TEST_DATA_LEN)
const size = random(TEST_DATA_LEN)
@@ -200,8 +198,8 @@ handlers.forEach(url => {
testWithFileDescriptor('file', 'r', async ({ file }) => {
const buffer = Buffer.alloc(size)
const result = await handler.read(file, buffer, start)
expect(result.buffer).toBe(buffer)
expect(result).toEqual({
assert.deepEqual(result.buffer, buffer)
assert.deepEqual(result, {
buffer,
bytesRead: Math.min(size, TEST_DATA_LEN - start),
})
@@ -211,12 +209,13 @@ handlers.forEach(url => {
describe('#readFile', () => {
it('returns a buffer containing the contents of the file', async () => {
await handler.outputFile('file', TEST_DATA)
expect(await handler.readFile('file')).toEqual(TEST_DATA)
assert.deepEqual(await handler.readFile('file'), TEST_DATA)
})
it('throws on missing file', async () => {
await handler.unlink('file')
const error = await rejectionOf(handler.readFile('file'))
expect(error.code).toBe('ENOENT')
assert.equal(error.code, 'ENOENT')
})
})
@@ -225,19 +224,19 @@ handlers.forEach(url => {
await handler.outputFile('file', TEST_DATA)
await handler.rename('file', `file2`)
expect(await handler.list('.')).toEqual(['file2'])
expect(await handler.readFile(`file2`)).toEqual(TEST_DATA)
assert.deepEqual(await handler.list('.'), ['file2'])
assert.deepEqual(await handler.readFile(`file2`), TEST_DATA)
})
it(`should rename the file and create dest directory`, async () => {
await handler.outputFile('file', TEST_DATA)
await handler.rename('file', `sub/file2`)
expect(await handler.list('sub')).toEqual(['file2'])
expect(await handler.readFile(`sub/file2`)).toEqual(TEST_DATA)
assert.deepEqual(await handler.list('sub'), ['file2'])
assert.deepEqual(await handler.readFile(`sub/file2`), TEST_DATA)
})
it(`should fail with enoent if source file is missing`, async () => {
const error = await rejectionOf(handler.rename('file', `sub/file2`))
expect(error.code).toBe('ENOENT')
assert.equal(error.code, 'ENOENT')
})
})
@@ -245,14 +244,15 @@ handlers.forEach(url => {
it('should remove an empty directory', async () => {
await handler.mkdir('dir')
await handler.rmdir('dir')
expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
it(`should throw on non-empty directory`, async () => {
await handler.outputFile('dir/file', '')
const error = await rejectionOf(handler.rmdir('.'))
await expect(error.code).toEqual('ENOTEMPTY')
assert.equal(error.code, 'ENOTEMPTY')
await handler.unlink('dir/file')
})
it('does not throw on missing directory', async () => {
@@ -265,7 +265,7 @@ handlers.forEach(url => {
await handler.outputFile('dir/file', '')
await handler.rmtree('dir')
expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
})
@@ -273,9 +273,9 @@ handlers.forEach(url => {
it('tests the remote appears to be working', async () => {
const answer = await handler.test()
expect(answer.success).toBe(true)
expect(typeof answer.writeRate).toBe('number')
expect(typeof answer.readRate).toBe('number')
assert.equal(answer.success, true)
assert.equal(typeof answer.writeRate, 'number')
assert.equal(typeof answer.readRate, 'number')
})
})
@@ -284,7 +284,7 @@ handlers.forEach(url => {
await handler.outputFile('file', TEST_DATA)
await handler.unlink('file')
await expect(await handler.list('.')).toEqual([])
assert.deepEqual(await handler.list('.'), [])
})
it('does not throw on missing file', async () => {
@@ -294,6 +294,7 @@ handlers.forEach(url => {
describe('#write()', () => {
beforeEach(() => handler.outputFile('file', TEST_DATA))
afterEach(() => handler.unlink('file'))
const PATCH_DATA_LEN = Math.ceil(TEST_DATA_LEN / 2)
const PATCH_DATA = unsecureRandomBytes(PATCH_DATA_LEN)
@@ -322,7 +323,7 @@ handlers.forEach(url => {
describe(title, () => {
testWithFileDescriptor('file', 'r+', async ({ file }) => {
await handler.write(file, PATCH_DATA, offset)
await expect(await handler.readFile('file')).toEqual(expected)
assert.deepEqual(await handler.readFile('file'), expected)
})
})
}
@@ -330,6 +331,7 @@ handlers.forEach(url => {
})
describe('#truncate()', () => {
afterEach(() => handler.unlink('file'))
forOwn(
{
'shrinks file': (() => {
@@ -348,7 +350,7 @@ handlers.forEach(url => {
it(title, async () => {
await handler.outputFile('file', TEST_DATA)
await handler.truncate('file', length)
await expect(await handler.readFile('file')).toEqual(expected)
assert.deepEqual(await handler.readFile('file'), expected)
})
}
)

View File

@@ -34,11 +34,14 @@ function dontAddSyncStackTrace(fn, ...args) {
}
export default class LocalHandler extends RemoteHandlerAbstract {
#addSyncStackTrace
#retriesOnEagain
constructor(remote, opts = {}) {
super(remote)
this._addSyncStackTrace = opts.syncStackTraces ?? true ? addSyncStackTrace : dontAddSyncStackTrace
this._retriesOnEagain = {
this.#addSyncStackTrace = opts.syncStackTraces ?? true ? addSyncStackTrace : dontAddSyncStackTrace
this.#retriesOnEagain = {
delay: 1e3,
retries: 9,
...opts.retriesOnEagain,
@@ -51,26 +54,26 @@ export default class LocalHandler extends RemoteHandlerAbstract {
return 'file'
}
_getRealPath() {
getRealPath() {
return this._remote.path
}
_getFilePath(file) {
return this._getRealPath() + file
getFilePath(file) {
return this.getRealPath() + file
}
async _closeFile(fd) {
return this._addSyncStackTrace(fs.close, fd)
return this.#addSyncStackTrace(fs.close, fd)
}
async _copy(oldPath, newPath) {
return this._addSyncStackTrace(fs.copy, this._getFilePath(oldPath), this._getFilePath(newPath))
return this.#addSyncStackTrace(fs.copy, this.getFilePath(oldPath), this.getFilePath(newPath))
}
async _createReadStream(file, options) {
if (typeof file === 'string') {
const stream = fs.createReadStream(this._getFilePath(file), options)
await this._addSyncStackTrace(fromEvent, stream, 'open')
const stream = fs.createReadStream(this.getFilePath(file), options)
await this.#addSyncStackTrace(fromEvent, stream, 'open')
return stream
}
return fs.createReadStream('', {
@@ -82,8 +85,8 @@ export default class LocalHandler extends RemoteHandlerAbstract {
async _createWriteStream(file, options) {
if (typeof file === 'string') {
const stream = fs.createWriteStream(this._getFilePath(file), options)
await this._addSyncStackTrace(fromEvent, stream, 'open')
const stream = fs.createWriteStream(this.getFilePath(file), options)
await this.#addSyncStackTrace(fromEvent, stream, 'open')
return stream
}
return fs.createWriteStream('', {
@@ -98,7 +101,7 @@ export default class LocalHandler extends RemoteHandlerAbstract {
// filesystem, type, size, used, available, capacity and mountpoint.
// size, used, available and capacity may be `NaN` so we remove any `NaN`
// value from the object.
const info = await df.file(this._getFilePath('/'))
const info = await df.file(this.getFilePath('/'))
Object.keys(info).forEach(key => {
if (Number.isNaN(info[key])) {
delete info[key]
@@ -109,16 +112,16 @@ export default class LocalHandler extends RemoteHandlerAbstract {
}
async _getSize(file) {
const stats = await this._addSyncStackTrace(fs.stat, this._getFilePath(typeof file === 'string' ? file : file.path))
const stats = await this.#addSyncStackTrace(fs.stat, this.getFilePath(typeof file === 'string' ? file : file.path))
return stats.size
}
async _list(dir) {
return this._addSyncStackTrace(fs.readdir, this._getFilePath(dir))
return this.#addSyncStackTrace(fs.readdir, this.getFilePath(dir))
}
async _lock(path) {
const acquire = lockfile.lock.bind(undefined, this._getFilePath(path), {
const acquire = lockfile.lock.bind(undefined, this.getFilePath(path), {
async onCompromised(error) {
warn('lock compromised', { error })
try {
@@ -130,11 +133,11 @@ export default class LocalHandler extends RemoteHandlerAbstract {
},
})
let release = await this._addSyncStackTrace(acquire)
let release = await this.#addSyncStackTrace(acquire)
return async () => {
try {
await this._addSyncStackTrace(release)
await this.#addSyncStackTrace(release)
} catch (error) {
warn('lock could not be released', { error })
}
@@ -142,18 +145,18 @@ export default class LocalHandler extends RemoteHandlerAbstract {
}
_mkdir(dir, { mode }) {
return this._addSyncStackTrace(fs.mkdir, this._getFilePath(dir), { mode })
return this.#addSyncStackTrace(fs.mkdir, this.getFilePath(dir), { mode })
}
async _openFile(path, flags) {
return this._addSyncStackTrace(fs.open, this._getFilePath(path), flags)
return this.#addSyncStackTrace(fs.open, this.getFilePath(path), flags)
}
async _read(file, buffer, position) {
const needsClose = typeof file === 'string'
file = needsClose ? await this._addSyncStackTrace(fs.open, this._getFilePath(file), 'r') : file.fd
file = needsClose ? await this.#addSyncStackTrace(fs.open, this.getFilePath(file), 'r') : file.fd
try {
return await this._addSyncStackTrace(
return await this.#addSyncStackTrace(
fs.read,
file,
buffer,
@@ -163,44 +166,44 @@ export default class LocalHandler extends RemoteHandlerAbstract {
)
} finally {
if (needsClose) {
await this._addSyncStackTrace(fs.close, file)
await this.#addSyncStackTrace(fs.close, file)
}
}
}
async _readFile(file, options) {
const filePath = this._getFilePath(file)
return await this._addSyncStackTrace(retry, () => fs.readFile(filePath, options), this._retriesOnEagain)
const filePath = this.getFilePath(file)
return await this.#addSyncStackTrace(retry, () => fs.readFile(filePath, options), this.#retriesOnEagain)
}
async _rename(oldPath, newPath) {
return this._addSyncStackTrace(fs.rename, this._getFilePath(oldPath), this._getFilePath(newPath))
return this.#addSyncStackTrace(fs.rename, this.getFilePath(oldPath), this.getFilePath(newPath))
}
async _rmdir(dir) {
return this._addSyncStackTrace(fs.rmdir, this._getFilePath(dir))
return this.#addSyncStackTrace(fs.rmdir, this.getFilePath(dir))
}
async _sync() {
const path = this._getRealPath('/')
await this._addSyncStackTrace(fs.ensureDir, path)
await this._addSyncStackTrace(fs.access, path, fs.R_OK | fs.W_OK)
const path = this.getRealPath('/')
await this.#addSyncStackTrace(fs.ensureDir, path)
await this.#addSyncStackTrace(fs.access, path, fs.R_OK | fs.W_OK)
}
_truncate(file, len) {
return this._addSyncStackTrace(fs.truncate, this._getFilePath(file), len)
return this.#addSyncStackTrace(fs.truncate, this.getFilePath(file), len)
}
async _unlink(file) {
const filePath = this._getFilePath(file)
return await this._addSyncStackTrace(retry, () => fs.unlink(filePath), this._retriesOnEagain)
const filePath = this.getFilePath(file)
return await this.#addSyncStackTrace(retry, () => fs.unlink(filePath), this.#retriesOnEagain)
}
_writeFd(file, buffer, position) {
return this._addSyncStackTrace(fs.write, file.fd, buffer, 0, buffer.length, position)
return this.#addSyncStackTrace(fs.write, file.fd, buffer, 0, buffer.length, position)
}
_writeFile(file, data, { flags }) {
return this._addSyncStackTrace(fs.writeFile, this._getFilePath(file), data, { flag: flags })
return this.#addSyncStackTrace(fs.writeFile, this.getFilePath(file), data, { flag: flags })
}
}

View File

@@ -34,6 +34,10 @@ const MAX_PART_SIZE = 1024 * 1024 * 1024 * 5 // 5GB
const { warn } = createLogger('xo:fs:s3')
export default class S3Handler extends RemoteHandlerAbstract {
#bucket
#dir
#s3
constructor(remote, _opts) {
super(remote)
const {
@@ -46,7 +50,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
region = guessAwsRegion(host),
} = parse(remote.url)
this._s3 = new S3Client({
this.#s3 = new S3Client({
apiVersion: '2006-03-01',
endpoint: `${protocol}://${host}`,
forcePathStyle: true,
@@ -69,27 +73,27 @@ export default class S3Handler extends RemoteHandlerAbstract {
})
// Workaround for https://github.com/aws/aws-sdk-js-v3/issues/2673
this._s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this._s3.config))
this.#s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this.#s3.config))
const parts = split(path)
this._bucket = parts.shift()
this._dir = join(...parts)
this.#bucket = parts.shift()
this.#dir = join(...parts)
}
get type() {
return 's3'
}
_makeCopySource(path) {
return join(this._bucket, this._dir, path)
#makeCopySource(path) {
return join(this.#bucket, this.#dir, path)
}
_makeKey(file) {
return join(this._dir, file)
#makeKey(file) {
return join(this.#dir, file)
}
_makePrefix(dir) {
const prefix = join(this._dir, dir, '/')
#makePrefix(dir) {
const prefix = join(this.#dir, dir, '/')
// no prefix for root
if (prefix !== './') {
@@ -97,20 +101,20 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
}
_createParams(file) {
return { Bucket: this._bucket, Key: this._makeKey(file) }
#createParams(file) {
return { Bucket: this.#bucket, Key: this.#makeKey(file) }
}
async _multipartCopy(oldPath, newPath) {
async #multipartCopy(oldPath, newPath) {
const size = await this._getSize(oldPath)
const CopySource = this._makeCopySource(oldPath)
const multipartParams = await this._s3.send(new CreateMultipartUploadCommand({ ...this._createParams(newPath) }))
const CopySource = this.#makeCopySource(oldPath)
const multipartParams = await this.#s3.send(new CreateMultipartUploadCommand({ ...this.#createParams(newPath) }))
try {
const parts = []
let start = 0
while (start < size) {
const partNumber = parts.length + 1
const upload = await this._s3.send(
const upload = await this.#s3.send(
new UploadPartCopyCommand({
...multipartParams,
CopySource,
@@ -121,31 +125,31 @@ export default class S3Handler extends RemoteHandlerAbstract {
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partNumber })
start += MAX_PART_SIZE
}
await this._s3.send(
await this.#s3.send(
new CompleteMultipartUploadCommand({
...multipartParams,
MultipartUpload: { Parts: parts },
})
)
} catch (e) {
await this._s3.send(new AbortMultipartUploadCommand(multipartParams))
await this.#s3.send(new AbortMultipartUploadCommand(multipartParams))
throw e
}
}
async _copy(oldPath, newPath) {
const CopySource = this._makeCopySource(oldPath)
const CopySource = this.#makeCopySource(oldPath)
try {
await this._s3.send(
await this.#s3.send(
new CopyObjectCommand({
...this._createParams(newPath),
...this.#createParams(newPath),
CopySource,
})
)
} catch (e) {
// objects > 5GB must be copied part by part
if (e.name === 'EntityTooLarge') {
return this._multipartCopy(oldPath, newPath)
return this.#multipartCopy(oldPath, newPath)
}
// normalize this error code
if (e.name === 'NoSuchKey') {
@@ -159,20 +163,20 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
}
async _isNotEmptyDir(path) {
const result = await this._s3.send(
async #isNotEmptyDir(path) {
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
MaxKeys: 1,
Prefix: this._makePrefix(path),
Prefix: this.#makePrefix(path),
})
)
return result.Contents?.length > 0
}
async _isFile(path) {
async #isFile(path) {
try {
await this._s3.send(new HeadObjectCommand(this._createParams(path)))
await this.#s3.send(new HeadObjectCommand(this.#createParams(path)))
return true
} catch (error) {
if (error.name === 'NotFound') {
@@ -189,9 +193,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
pipeline(input, Body, () => {})
const upload = new Upload({
client: this._s3,
client: this.#s3,
params: {
...this._createParams(path),
...this.#createParams(path),
Body,
},
})
@@ -202,7 +206,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
try {
await validator.call(this, path)
} catch (error) {
await this.unlink(path)
await this.__unlink(path)
throw error
}
}
@@ -224,9 +228,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
},
})
async _writeFile(file, data, options) {
return this._s3.send(
return this.#s3.send(
new PutObjectCommand({
...this._createParams(file),
...this.#createParams(file),
Body: data,
})
)
@@ -234,7 +238,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
async _createReadStream(path, options) {
try {
return (await this._s3.send(new GetObjectCommand(this._createParams(path)))).Body
return (await this.#s3.send(new GetObjectCommand(this.#createParams(path)))).Body
} catch (e) {
if (e.name === 'NoSuchKey') {
const error = new Error(`ENOENT: no such file '${path}'`)
@@ -247,9 +251,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _unlink(path) {
await this._s3.send(new DeleteObjectCommand(this._createParams(path)))
await this.#s3.send(new DeleteObjectCommand(this.#createParams(path)))
if (await this._isNotEmptyDir(path)) {
if (await this.#isNotEmptyDir(path)) {
const error = new Error(`EISDIR: illegal operation on a directory, unlink '${path}'`)
error.code = 'EISDIR'
error.path = path
@@ -260,12 +264,12 @@ export default class S3Handler extends RemoteHandlerAbstract {
async _list(dir) {
let NextContinuationToken
const uniq = new Set()
const Prefix = this._makePrefix(dir)
const Prefix = this.#makePrefix(dir)
do {
const result = await this._s3.send(
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
Prefix,
Delimiter: '/',
// will only return path until delimiters
@@ -295,7 +299,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _mkdir(path) {
if (await this._isFile(path)) {
if (await this.#isFile(path)) {
const error = new Error(`ENOTDIR: file already exists, mkdir '${path}'`)
error.code = 'ENOTDIR'
error.path = path
@@ -306,15 +310,15 @@ export default class S3Handler extends RemoteHandlerAbstract {
// s3 doesn't have a rename operation, so copy + delete source
async _rename(oldPath, newPath) {
await this.copy(oldPath, newPath)
await this._s3.send(new DeleteObjectCommand(this._createParams(oldPath)))
await this.__copy(oldPath, newPath)
await this.#s3.send(new DeleteObjectCommand(this.#createParams(oldPath)))
}
async _getSize(file) {
if (typeof file !== 'string') {
file = file.fd
}
const result = await this._s3.send(new HeadObjectCommand(this._createParams(file)))
const result = await this.#s3.send(new HeadObjectCommand(this.#createParams(file)))
return +result.ContentLength
}
@@ -322,15 +326,15 @@ export default class S3Handler extends RemoteHandlerAbstract {
if (typeof file !== 'string') {
file = file.fd
}
const params = this._createParams(file)
const params = this.#createParams(file)
params.Range = `bytes=${position}-${position + buffer.length - 1}`
try {
const result = await this._s3.send(new GetObjectCommand(params))
const result = await this.#s3.send(new GetObjectCommand(params))
const bytesRead = await copyStreamToBuffer(result.Body, buffer)
return { bytesRead, buffer }
} catch (e) {
if (e.name === 'NoSuchKey') {
if (await this._isNotEmptyDir(file)) {
if (await this.#isNotEmptyDir(file)) {
const error = new Error(`${file} is a directory`)
error.code = 'EISDIR'
error.path = file
@@ -342,7 +346,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _rmdir(path) {
if (await this._isNotEmptyDir(path)) {
if (await this.#isNotEmptyDir(path)) {
const error = new Error(`ENOTEMPTY: directory not empty, rmdir '${path}'`)
error.code = 'ENOTEMPTY'
error.path = path
@@ -356,11 +360,11 @@ export default class S3Handler extends RemoteHandlerAbstract {
// @todo : use parallel processing for unlink
async _rmtree(path) {
let NextContinuationToken
const Prefix = this._makePrefix(path)
const Prefix = this.#makePrefix(path)
do {
const result = await this._s3.send(
const result = await this.#s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Bucket: this.#bucket,
Prefix,
ContinuationToken: NextContinuationToken,
})
@@ -372,9 +376,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
async ({ Key }) => {
// _unlink will add the prefix, but Key contains everything
// also we don't need to check if we delete a directory, since the list only return files
await this._s3.send(
await this.#s3.send(
new DeleteObjectCommand({
Bucket: this._bucket,
Bucket: this.#bucket,
Key,
})
)
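Beyond the renames, this commit switches the handler from `_`-prefixed pseudo-private members to native `#` private fields. A minimal sketch (using a hypothetical `Handler` class, not the real S3 handler) of what the `#` syntax changes at runtime:

```typescript
// `_name` is only a convention: the property stays enumerable and
// reachable from outside. `#name` is enforced by the engine itself.
class Handler {
  #bucket: string

  constructor(bucket: string) {
    this.#bucket = bucket
  }

  describe(): string {
    return `bucket=${this.#bucket}`
  }
}

const handler = new Handler('backups')
console.log(handler.describe()) // bucket=backups
// #bucket is not an own property and cannot be read from outside
console.log(Object.keys(handler).length) // 0
```

Note that `#` fields are also invisible to subclasses, so only members never touched outside the class itself can be converted this way.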


@@ -19,8 +19,8 @@
"@types/d3-time-format": "^4.0.0",
"@types/lodash-es": "^4.17.6",
"@types/marked": "^4.0.8",
"@vueuse/core": "^9.5.0",
"@vueuse/math": "^9.5.0",
"@vueuse/core": "^10.1.2",
"@vueuse/math": "^10.1.2",
"complex-matcher": "^0.7.0",
"d3-time-format": "^4.1.0",
"decorator-synchronized": "^0.6.0",
@@ -34,19 +34,19 @@
"lodash-es": "^4.17.21",
"make-error": "^1.3.6",
"marked": "^4.2.12",
"pinia": "^2.0.14",
"pinia": "^2.1.2",
"placement.js": "^1.0.0-beta.5",
"vue": "^3.2.37",
"vue": "^3.3.4",
"vue-echarts": "^6.2.3",
"vue-i18n": "9",
"vue-router": "^4.0.16"
"vue-i18n": "^9.2.2",
"vue-router": "^4.2.1"
},
"devDependencies": {
"@intlify/vite-plugin-vue-i18n": "^6.0.1",
"@intlify/unplugin-vue-i18n": "^0.10.0",
"@limegrass/eslint-plugin-import-alias": "^1.0.5",
"@rushstack/eslint-patch": "^1.1.0",
"@types/node": "^16.11.41",
"@vitejs/plugin-vue": "^3.2.0",
"@vitejs/plugin-vue": "^4.2.3",
"@vue/eslint-config-prettier": "^7.0.0",
"@vue/eslint-config-typescript": "^11.0.0",
"@vue/tsconfig": "^0.1.3",
@@ -56,9 +56,9 @@
"postcss-custom-media": "^9.0.1",
"postcss-nested": "^6.0.0",
"typescript": "^4.9.3",
"vite": "^3.2.4",
"vite-plugin-pages": "^0.27.1",
"vue-tsc": "^1.0.9"
"vite": "^4.3.8",
"vite-plugin-pages": "^0.29.1",
"vue-tsc": "^1.6.5"
},
"private": true,
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/lite",


@@ -1,25 +1,5 @@
<template>
<UiModal
v-if="isSslModalOpen"
:icon="faServer"
color="error"
@close="clearUnreachableHostsUrls"
>
<template #title>{{ $t("unreachable-hosts") }}</template>
<template #subtitle>{{ $t("following-hosts-unreachable") }}</template>
<p>{{ $t("allow-self-signed-ssl") }}</p>
<ul>
<li v-for="url in unreachableHostsUrls" :key="url.hostname">
<a :href="url.href" rel="noopener" target="_blank">{{ url.href }}</a>
</li>
</ul>
<template #buttons>
<UiButton color="success" @click="reload">
{{ $t("unreachable-hosts-reload-page") }}
</UiButton>
<UiButton @click="clearUnreachableHostsUrls">{{ $t("cancel") }}</UiButton>
</template>
</UiModal>
<UnreachableHostsModal />
<div v-if="!$route.meta.hasStoryNav && !xenApiStore.isConnected">
<AppLogin />
</div>
@@ -41,21 +21,14 @@ import AppHeader from "@/components/AppHeader.vue";
import AppLogin from "@/components/AppLogin.vue";
import AppNavigation from "@/components/AppNavigation.vue";
import AppTooltips from "@/components/AppTooltips.vue";
import UiButton from "@/components/ui/UiButton.vue";
import UiModal from "@/components/ui/UiModal.vue";
import UnreachableHostsModal from "@/components/UnreachableHostsModal.vue";
import { useChartTheme } from "@/composables/chart-theme.composable";
import { useHostStore } from "@/stores/host.store";
import { usePoolStore } from "@/stores/pool.store";
import { useUiStore } from "@/stores/ui.store";
import { useXenApiStore } from "@/stores/xen-api.store";
import { faServer } from "@fortawesome/free-solid-svg-icons";
import { useActiveElement, useMagicKeys, whenever } from "@vueuse/core";
import { logicAnd } from "@vueuse/math";
import { difference } from "lodash-es";
import { computed, ref, watch } from "vue";
const unreachableHostsUrls = ref<URL[]>([]);
const clearUnreachableHostsUrls = () => (unreachableHostsUrls.value = []);
import { computed } from "vue";
let link = document.querySelector(
"link[rel~='icon']"
@@ -70,7 +43,6 @@ link.href = favicon;
document.title = "XO Lite";
const xenApiStore = useXenApiStore();
const { records: hosts } = useHostStore().subscribe();
const { pool } = usePoolStore().subscribe();
useChartTheme();
const uiStore = useUiStore();
@@ -93,17 +65,6 @@ if (import.meta.env.DEV) {
);
}
watch(hosts, (hosts, previousHosts) => {
difference(hosts, previousHosts).forEach((host) => {
const url = new URL("http://localhost");
url.protocol = window.location.protocol;
url.hostname = host.address;
fetch(url, { mode: "no-cors" }).catch(() =>
unreachableHostsUrls.value.push(url)
);
});
});
whenever(
() => pool.value?.$ref,
async (poolRef) => {
@@ -112,9 +73,6 @@ whenever(
await xenApi.startWatch();
}
);
const isSslModalOpen = computed(() => unreachableHostsUrls.value.length > 0);
const reload = () => window.location.reload();
</script>
<style lang="postcss">


@@ -1,15 +1,15 @@
<template>
<div v-if="!isDisabled" ref="tooltipElement" class="app-tooltip">
<span class="triangle" />
<span class="label">{{ content }}</span>
<span class="label">{{ options.content }}</span>
</div>
</template>
<script lang="ts" setup>
import { isEmpty, isFunction, isString } from "lodash-es";
import type { TooltipOptions } from "@/stores/tooltip.store";
import { isString } from "lodash-es";
import place from "placement.js";
import { computed, ref, watchEffect } from "vue";
import type { TooltipOptions } from "@/stores/tooltip.store";
const props = defineProps<{
target: HTMLElement;
@@ -18,29 +18,13 @@ const props = defineProps<{
const tooltipElement = ref<HTMLElement>();
const content = computed(() =>
isString(props.options) ? props.options : props.options.content
const isDisabled = computed(() =>
isString(props.options.content)
? props.options.content.trim() === ""
: props.options.content === false
);
const isDisabled = computed(() => {
if (isEmpty(content.value)) {
return true;
}
if (isString(props.options)) {
return false;
}
if (isFunction(props.options.disabled)) {
return props.options.disabled(props.target);
}
return props.options.disabled ?? false;
});
const placement = computed(() =>
isString(props.options) ? "top" : props.options.placement ?? "top"
);
const placement = computed(() => props.options.placement ?? "top");
watchEffect(() => {
if (tooltipElement.value) {


@@ -14,7 +14,12 @@
</UiActionButton>
</UiFilterGroup>
<UiModal v-if="isOpen" :icon="faFilter" @submit.prevent="handleSubmit">
<UiModal
v-if="isOpen"
:icon="faFilter"
@submit.prevent="handleSubmit"
@close="handleCancel"
>
<div class="rows">
<CollectionFilterRow
v-for="(newFilter, index) in newFilters"


@@ -17,7 +17,12 @@
</UiActionButton>
</UiFilterGroup>
<UiModal v-if="isOpen" :icon="faSort" @submit.prevent="handleSubmit">
<UiModal
v-if="isOpen"
:icon="faSort"
@submit.prevent="handleSubmit"
@close="handleCancel"
>
<div class="form-widgets">
<FormWidget :label="$t('sort-by')">
<select v-model="newSortProperty">


@@ -0,0 +1,59 @@
<template>
<UiModal
v-if="isSslModalOpen"
:icon="faServer"
color="error"
@close="clearUnreachableHostsUrls"
>
<template #title>{{ $t("unreachable-hosts") }}</template>
<div class="description">
<p>{{ $t("following-hosts-unreachable") }}</p>
<p>{{ $t("allow-self-signed-ssl") }}</p>
<ul>
<li v-for="url in unreachableHostsUrls" :key="url">
<a :href="url" class="link" rel="noopener" target="_blank">{{
url
}}</a>
</li>
</ul>
</div>
<template #buttons>
<UiButton color="success" @click="reload">
{{ $t("unreachable-hosts-reload-page") }}
</UiButton>
<UiButton @click="clearUnreachableHostsUrls">{{ $t("cancel") }}</UiButton>
</template>
</UiModal>
</template>
<script lang="ts" setup>
import { faServer } from "@fortawesome/free-solid-svg-icons";
import UiModal from "@/components/ui/UiModal.vue";
import UiButton from "@/components/ui/UiButton.vue";
import { computed, ref, watch } from "vue";
import { difference } from "lodash-es";
import { useHostStore } from "@/stores/host.store";
const { records: hosts } = useHostStore().subscribe();
const unreachableHostsUrls = ref<Set<string>>(new Set());
const clearUnreachableHostsUrls = () => unreachableHostsUrls.value.clear();
const isSslModalOpen = computed(() => unreachableHostsUrls.value.size > 0);
const reload = () => window.location.reload();
watch(hosts, (nextHosts, previousHosts) => {
difference(nextHosts, previousHosts).forEach((host) => {
const url = new URL("http://localhost");
url.protocol = window.location.protocol;
url.hostname = host.address;
fetch(url, { mode: "no-cors" }).catch(() =>
unreachableHostsUrls.value.add(url.toString())
);
});
});
</script>
<style lang="postcss" scoped>
.description p {
margin: 1rem 0;
}
</style>
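Compared with the previous `ref<URL[]>` in `App.vue`, the component above collects unreachable hosts in a `Set<string>`, so a host that keeps failing is only listed once. A small sketch of that dedup behavior (with a hypothetical `markUnreachable` helper):

```typescript
const unreachableHostsUrls = new Set<string>();

const markUnreachable = (hostname: string) => {
  const url = new URL("http://localhost");
  url.protocol = "https:";
  url.hostname = hostname;
  // Set.add() is a no-op for an already-present string, so repeated
  // failed health checks never duplicate an entry
  unreachableHostsUrls.add(url.toString());
};

markUnreachable("host1.example");
markUnreachable("host1.example");
markUnreachable("host2.example");
console.log(unreachableHostsUrls.size); // 2
```

`URL` instances would not dedupe this way (two equal URLs are still distinct objects), which is why the component stores `url.toString()` rather than the `URL` itself.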


@@ -4,11 +4,11 @@
<div
v-for="item in computedData.sortedArray"
:key="item.id"
class="progress-item"
:class="{
warning: item.value > MIN_WARNING_VALUE,
error: item.value > MIN_DANGEROUS_VALUE,
}"
class="progress-item"
>
<UiProgressBar :value="item.value" color="custom" />
<UiProgressLegend
@@ -18,15 +18,15 @@
</div>
<slot :total-percent="computedData.totalPercentUsage" name="footer" />
</template>
<UiSpinner v-else class="spinner" />
<UiCardSpinner v-else />
</div>
</template>
<script lang="ts" setup>
import { computed } from "vue";
import UiProgressBar from "@/components/ui/progress/UiProgressBar.vue";
import UiProgressLegend from "@/components/ui/progress/UiProgressLegend.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import { computed } from "vue";
interface Data {
id: string;
@@ -67,14 +67,6 @@ const computedData = computed(() => {
</script>
<style lang="postcss" scoped>
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: auto;
width: 40px;
height: 40px;
}
.progress-item:nth-child(1) {
--progress-bar-color: var(--color-extra-blue-d60);
}
@@ -91,9 +83,11 @@ const computedData = computed(() => {
--progress-bar-height: 1.2rem;
--progress-bar-color: var(--color-extra-blue-l20);
--progress-bar-background-color: var(--color-blue-scale-400);
&.warning {
--progress-bar-color: var(--color-orange-world-base);
}
&.error {
--progress-bar-color: var(--color-red-vates-base);
}


@@ -18,33 +18,19 @@
</component>
</template>
<script lang="ts">
export default {
name: "FormCheckbox",
inheritAttrs: false,
};
</script>
<script lang="ts" setup>
import {
type HTMLAttributes,
type InputHTMLAttributes,
computed,
inject,
ref,
} from "vue";
import { type HTMLAttributes, computed, inject, ref } from "vue";
import { faCheck, faCircle, faMinus } from "@fortawesome/free-solid-svg-icons";
import { useVModel } from "@vueuse/core";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
// Temporary workaround for https://github.com/vuejs/core/issues/4294
interface Props extends Omit<InputHTMLAttributes, ""> {
defineOptions({ inheritAttrs: false });
const props = defineProps<{
modelValue?: unknown;
disabled?: boolean;
wrapperAttrs?: HTMLAttributes;
}
const props = defineProps<Props>();
}>();
const emit = defineEmits<{
(event: "update:modelValue", value: boolean): void;


@@ -44,17 +44,9 @@
</span>
</template>
<script lang="ts">
export default {
name: "FormInput",
inheritAttrs: false,
};
</script>
<script lang="ts" setup>
import {
type HTMLAttributes,
type InputHTMLAttributes,
computed,
inject,
nextTick,
@@ -67,20 +59,22 @@ import { faAngleDown } from "@fortawesome/free-solid-svg-icons";
import { useTextareaAutosize, useVModel } from "@vueuse/core";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
// Temporary workaround for https://github.com/vuejs/core/issues/4294
interface Props extends Omit<InputHTMLAttributes, ""> {
modelValue?: unknown;
color?: Color;
before?: Omit<IconDefinition, ""> | string;
after?: Omit<IconDefinition, ""> | string;
beforeWidth?: string;
afterWidth?: string;
disabled?: boolean;
right?: boolean;
wrapperAttrs?: HTMLAttributes;
}
defineOptions({ inheritAttrs: false });
const props = withDefaults(defineProps<Props>(), { color: "info" });
const props = withDefaults(
defineProps<{
modelValue?: any;
color?: Color;
before?: IconDefinition | string;
after?: IconDefinition | string;
beforeWidth?: string;
afterWidth?: string;
disabled?: boolean;
right?: boolean;
wrapperAttrs?: HTMLAttributes;
}>(),
{ color: "info" }
);
const inputElement = ref();


@@ -0,0 +1,41 @@
<template>
<div class="form-input-group">
<slot />
</div>
</template>
<style lang="postcss" scoped>
.form-input-group {
display: inline-flex;
align-items: center;
:slotted(.form-input),
:slotted(.form-select) {
&:hover {
z-index: 1;
}
&:focus-within {
z-index: 2;
}
&:not(:first-child) {
margin-left: -1px;
.input,
.select {
border-top-left-radius: 0;
border-bottom-left-radius: 0;
}
}
&:not(:last-child) {
.input,
.select {
border-top-right-radius: 0;
border-bottom-right-radius: 0;
}
}
}
}
</style>


@@ -1,12 +1,5 @@
<template>
<li
v-if="host !== undefined"
v-tooltip="{
content: host.name_label,
disabled: isTooltipDisabled,
}"
class="infra-host-item"
>
<li v-if="host !== undefined" class="infra-host-item">
<InfraItemLabel
:active="isCurrentHost"
:icon="faServer"
@@ -36,7 +29,6 @@ import InfraAction from "@/components/infra/InfraAction.vue";
import InfraItemLabel from "@/components/infra/InfraItemLabel.vue";
import InfraVmList from "@/components/infra/InfraVmList.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { useHostStore } from "@/stores/host.store";
import { usePoolStore } from "@/stores/pool.store";
import { useUiStore } from "@/stores/ui.store";
@@ -66,9 +58,6 @@ const isCurrentHost = computed(
() => props.hostOpaqueRef === uiStore.currentHostOpaqueRef
);
const [isExpanded, toggle] = useToggle(true);
const isTooltipDisabled = (target: HTMLElement) =>
!hasEllipsis(target.querySelector(".text"));
</script>
<style lang="postcss" scoped>


@@ -7,9 +7,9 @@
class="infra-item-label"
v-bind="$attrs"
>
<a :href="href" class="link" @click="navigate">
<a :href="href" class="link" @click="navigate" v-tooltip="hasTooltip">
<UiIcon :icon="icon" class="icon" />
<div class="text">
<div ref="textElement" class="text">
<slot />
</div>
</a>
@@ -22,7 +22,10 @@
<script lang="ts" setup>
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import type { IconDefinition } from "@fortawesome/fontawesome-common-types";
import { computed, ref } from "vue";
import type { RouteLocationRaw } from "vue-router";
defineProps<{
@@ -30,6 +33,9 @@ defineProps<{
route: RouteLocationRaw;
active?: boolean;
}>();
const textElement = ref<HTMLElement>();
const hasTooltip = computed(() => hasEllipsis(textElement.value));
</script>
<style lang="postcss" scoped>


@@ -1,13 +1,5 @@
<template>
<li
v-if="vm !== undefined"
ref="rootElement"
v-tooltip="{
content: vm.name_label,
disabled: isTooltipDisabled,
}"
class="infra-vm-item"
>
<li v-if="vm !== undefined" ref="rootElement" class="infra-vm-item">
<InfraItemLabel
v-if="isVisible"
:icon="faDisplay"
@@ -27,8 +19,6 @@
import InfraAction from "@/components/infra/InfraAction.vue";
import InfraItemLabel from "@/components/infra/InfraItemLabel.vue";
import PowerStateIcon from "@/components/PowerStateIcon.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { useVmStore } from "@/stores/vm.store";
import { faDisplay } from "@fortawesome/free-solid-svg-icons";
import { useIntersectionObserver } from "@vueuse/core";
@@ -49,9 +39,6 @@ const { stop } = useIntersectionObserver(rootElement, ([entry]) => {
stop();
}
});
const isTooltipDisabled = (target: HTMLElement) =>
!hasEllipsis(target.querySelector(".text"));
</script>
<style lang="postcss" scoped>


@@ -1,13 +1,14 @@
<template>
<UiCard>
<UiCard :color="hasError ? 'error' : undefined">
<UiCardTitle>
{{ $t("cpu-provisioning") }}
<template #right>
<template v-if="!hasError" #right>
<!-- TODO: add a tooltip for the warning icon -->
<UiStatusIcon v-if="state !== 'success'" :state="state" />
</template>
</UiCardTitle>
<div v-if="isReady" :class="state" class="progress-item">
<NoDataError v-if="hasError" />
<div v-else-if="isReady" :class="state" class="progress-item">
<UiProgressBar :max-value="maxValue" :value="value" color="custom" />
<UiProgressScale :max-value="maxValue" :steps="1" unit="%" />
<UiProgressLegend :label="$t('vcpus')" :value="`${value}%`" />
@@ -22,19 +23,20 @@
</template>
</UiCardFooter>
</div>
<UiSpinner v-else class="spinner" />
<UiCardSpinner v-else />
</UiCard>
</template>
<script lang="ts" setup>
import NoDataError from "@/components/NoDataError.vue";
import UiStatusIcon from "@/components/ui/icon/UiStatusIcon.vue";
import UiProgressBar from "@/components/ui/progress/UiProgressBar.vue";
import UiProgressLegend from "@/components/ui/progress/UiProgressLegend.vue";
import UiProgressScale from "@/components/ui/progress/UiProgressScale.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiCardFooter from "@/components/ui/UiCardFooter.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import { percent } from "@/libs/utils";
import { useHostMetricsStore } from "@/stores/host-metrics.store";
import { useHostStore } from "@/stores/host.store";
@@ -45,11 +47,19 @@ import { computed } from "vue";
const ACTIVE_STATES = new Set(["Running", "Paused"]);
const { isReady: isHostStoreReady, runningHosts } = useHostStore().subscribe({
const {
hasError: hostStoreHasError,
isReady: isHostStoreReady,
runningHosts,
} = useHostStore().subscribe({
hostMetricsSubscription: useHostMetricsStore().subscribe(),
});
const { records: vms, isReady: isVmStoreReady } = useVmStore().subscribe();
const {
hasError: vmStoreHasError,
isReady: isVmStoreReady,
records: vms,
} = useVmStore().subscribe();
const { getByOpaqueRef: getVmMetrics, isReady: isVmMetricsStoreReady } =
useVmMetricsStore().subscribe();
@@ -84,6 +94,9 @@ const isReady = logicAnd(
isHostStoreReady,
isVmMetricsStoreReady
);
const hasError = computed(
() => hostStoreHasError.value || vmStoreHasError.value
);
</script>
<style lang="postcss" scoped>
@@ -102,12 +115,4 @@ const isReady = logicAnd(
color: var(--footer-value-color);
}
}
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: 2.6rem auto auto auto;
width: 40px;
height: 40px;
}
</style>


@@ -2,7 +2,7 @@
<UiCard :color="hasError ? 'error' : undefined">
<UiCardTitle>{{ $t("status") }}</UiCardTitle>
<NoDataError v-if="hasError" />
<UiSpinner v-else-if="!isReady" class="spinner" />
<UiCardSpinner v-else-if="!isReady" />
<template v-else>
<PoolDashboardStatusItem
:active="activeHostsCount"
@@ -23,9 +23,9 @@
import NoDataError from "@/components/NoDataError.vue";
import PoolDashboardStatusItem from "@/components/pool/dashboard/PoolDashboardStatusItem.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
import UiSeparator from "@/components/ui/UiSeparator.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import { useHostMetricsStore } from "@/stores/host-metrics.store";
import { useVmStore } from "@/stores/vm.store";
import { computed } from "vue";
@@ -57,13 +57,3 @@ const totalVmsCount = computed(() => vms.value.length);
const activeVmsCount = computed(() => runningVms.value.length);
</script>
<style lang="postcss" scoped>
.spinner {
color: var(--color-extra-blue-base);
display: flex;
margin: auto;
width: 40px;
height: 40px;
}
</style>

View File

@@ -16,6 +16,7 @@ defineProps<{
<style lang="postcss" scoped>
.ui-badge {
white-space: nowrap;
display: inline-flex;
align-items: center;
gap: 0.4rem;


@@ -0,0 +1,23 @@
<template>
<div class="ui-card-spinner">
<UiSpinner class="spinner" />
</div>
</template>
<script lang="ts" setup>
import UiSpinner from "@/components/ui/UiSpinner.vue";
</script>
<style lang="postcss" scoped>
.ui-card-spinner {
display: flex;
align-items: center;
justify-content: center;
padding: 4rem 0;
}
.spinner {
color: var(--color-extra-blue-base);
font-size: 4rem;
}
</style>

View File

@@ -1,7 +1,13 @@
<template>
<div class="legend">
<span class="circle" />
<slot name="label">{{ label }}</slot>
<template v-if="$slots.label || label">
<span class="circle" />
<div class="label-container">
<div ref="labelElement" v-tooltip="isTooltipEnabled" class="label">
<slot name="label">{{ label }}</slot>
</div>
</div>
</template>
<UiBadge class="badge">
<slot name="value">{{ value }}</slot>
</UiBadge>
@@ -10,14 +16,23 @@
<script lang="ts" setup>
import UiBadge from "@/components/ui/UiBadge.vue";
import { vTooltip } from "@/directives/tooltip.directive";
import { hasEllipsis } from "@/libs/utils";
import { computed, ref } from "vue";
defineProps<{
label?: string;
value?: string;
}>();
const labelElement = ref<HTMLElement>();
const isTooltipEnabled = computed(() =>
hasEllipsis(labelElement.value, { vertical: true })
);
</script>
<style scoped lang="postcss">
<style lang="postcss" scoped>
.badge {
font-size: 0.9em;
font-weight: 700;
@@ -25,8 +40,8 @@ defineProps<{
.circle {
display: inline-block;
width: 1rem;
height: 1rem;
min-width: 1rem;
min-height: 1rem;
border-radius: 0.5rem;
background-color: var(--progress-bar-color);
}
@@ -38,4 +53,14 @@ defineProps<{
gap: 0.5rem;
margin: 1.6em 0;
}
.label-container {
overflow: hidden;
}
.label {
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
}
</style>


@@ -0,0 +1,37 @@
<template>
<UiTabBar>
<RouterTab :to="{ name: 'vm.dashboard', params: { uuid } }">
{{ $t("dashboard") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.console', params: { uuid } }">
{{ $t("console") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.alarms', params: { uuid } }">
{{ $t("alarms") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.stats', params: { uuid } }">
{{ $t("stats") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.system', params: { uuid } }">
{{ $t("system") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.network', params: { uuid } }">
{{ $t("network") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.storage', params: { uuid } }">
{{ $t("storage") }}
</RouterTab>
<RouterTab :to="{ name: 'vm.tasks', params: { uuid } }">
{{ $t("tasks") }}
</RouterTab>
</UiTabBar>
</template>
<script lang="ts" setup>
import RouterTab from "@/components/RouterTab.vue";
import UiTabBar from "@/components/ui/UiTabBar.vue";
defineProps<{
uuid: string;
}>();
</script>


@@ -1,36 +1,71 @@
# Tooltip Directive
By default, tooltip will appear centered above the target element.
By default, the tooltip will appear centered above the target element.
## Directive argument
The directive argument can be either:
- The tooltip content
- An object containing the tooltip content and/or placement: `{ content: "...", placement: "..." }` (both optional)
## Tooltip content
The tooltip content can be either:
- `false` or an empty string to disable the tooltip
- `true` or `undefined` to enable the tooltip and extract its content from the element's `innerText`
- A non-empty string to enable the tooltip and use that string as its content
## Tooltip placement
Tooltip can be placed on the following positions:
- `top`
- `top-start`
- `top-end`
- `bottom`
- `bottom-start`
- `bottom-end`
- `left`
- `left-start`
- `left-end`
- `right`
- `right-start`
- `right-end`
## Usage
```vue
<template>
<!-- Static -->
<!-- Boolean / Undefined -->
<span v-tooltip="true"
>This content will be ellipsized by CSS but displayed entirely in the
tooltip</span
>
<span v-tooltip
>This content will be ellipsized by CSS but displayed entirely in the
tooltip</span
>
<!-- String -->
<span v-tooltip="'Tooltip content'">Item</span>
<!-- Dynamic -->
<span v-tooltip="myTooltipContent">Item</span>
<!-- Placement -->
<!-- Object -->
<span v-tooltip="{ content: 'Foobar', placement: 'left-end' }">Item</span>
<!-- Disabling (variable) -->
<span v-tooltip="{ content: 'Foobar', disabled: isDisabled }">Item</span>
<!-- Dynamic -->
<span v-tooltip="myTooltip">Item</span>
<!-- Disabling (function) -->
<span v-tooltip="{ content: 'Foobar', disabled: isDisabledFn }">Item</span>
<!-- Conditional -->
<span v-tooltip="isTooltipEnabled && 'Foobar'">Item</span>
</template>
<script setup>
import { ref } from "vue";
import { vTooltip } from "@/directives/tooltip.directive";
const myTooltipContent = ref("Content");
const isDisabled = ref(true);
const isDisabledFn = (target: Element) => {
// return boolean;
};
const myTooltip = ref("Content"); // or ref({ content: "Content", placement: "left-end" })
const isTooltipEnabled = ref(true);
</script>
```


@@ -1,8 +1,36 @@
import type { Directive } from "vue";
import type { TooltipEvents, TooltipOptions } from "@/stores/tooltip.store";
import { useTooltipStore } from "@/stores/tooltip.store";
import { isObject } from "lodash-es";
import type { Options } from "placement.js";
import type { Directive } from "vue";
export const vTooltip: Directive<HTMLElement, TooltipOptions> = {
type TooltipDirectiveContent = undefined | boolean | string;
type TooltipDirectiveOptions =
| TooltipDirectiveContent
| {
content?: TooltipDirectiveContent;
placement?: Options["placement"];
};
const parseOptions = (
options: TooltipDirectiveOptions,
target: HTMLElement
): TooltipOptions => {
const { placement, content } = isObject(options)
? options
: { placement: undefined, content: options };
return {
placement,
content:
content === true || content === undefined
? target.innerText.trim()
: content,
};
};
export const vTooltip: Directive<HTMLElement, TooltipDirectiveOptions> = {
mounted(target, binding) {
const store = useTooltipStore();
@@ -10,11 +38,11 @@ export const vTooltip: Directive<HTMLElement, TooltipOptions> = {
? { on: "focusin", off: "focusout" }
: { on: "mouseenter", off: "mouseleave" };
store.register(target, binding.value, events);
store.register(target, parseOptions(binding.value, target), events);
},
updated(target, binding) {
const store = useTooltipStore();
store.updateOptions(target, binding.value);
store.updateOptions(target, parseOptions(binding.value, target));
},
beforeUnmount(target) {
const store = useTooltipStore();
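The normalization in `parseOptions` above is easy to exercise on its own. A standalone sketch, with lodash-es's `isObject` inlined, `placement` widened to a plain `string`, and the DOM target narrowed to the single field the directive reads (so it runs without a browser):

```typescript
type TooltipDirectiveContent = undefined | boolean | string;

type TooltipDirectiveOptions =
  | TooltipDirectiveContent
  | { content?: TooltipDirectiveContent; placement?: string };

const parseOptions = (
  options: TooltipDirectiveOptions,
  target: { innerText: string }
) => {
  // only plain option objects reach the directive, so a typeof
  // check stands in for lodash-es's isObject here
  const { placement, content } =
    typeof options === "object" && options !== null
      ? options
      : { placement: undefined, content: options };

  return {
    placement,
    // `true` and `undefined` mean "reuse the element's own text",
    // which is what makes a bare `v-tooltip` work on ellipsized labels
    content:
      content === true || content === undefined
        ? target.innerText.trim()
        : content,
  };
};

console.log(parseOptions("Foobar", { innerText: "Item" }).content); // Foobar
console.log(parseOptions(undefined, { innerText: "  Item  " }).content); // Item
```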


@@ -1,6 +1,5 @@
import { createI18n } from "vue-i18n";
import en from "@/locales/en.json";
import fr from "@/locales/fr.json";
import messages from "@intlify/unplugin-vue-i18n/messages";
interface Locales {
[key: string]: {
@@ -20,13 +19,10 @@ export const locales: Locales = {
},
};
export default createI18n<[typeof en], "en" | "fr">({
export default createI18n({
locale: localStorage.getItem("lang") ?? "en",
fallbackLocale: "en",
messages: {
en,
fr,
},
messages,
datetimeFormats: {
en: {
date_short: {


@@ -1,13 +1,12 @@
import type {
RawObjectType,
RawXenApiRecord,
XenApiHost,
XenApiHostMetrics,
XenApiRecord,
XenApiVm,
} from "@/libs/xen-api";
import type { CollectionSubscription } from "@/stores/xapi-collection.store";
import type { Filter } from "@/types/filter";
import type { CollectionSubscription } from "@/types/xapi-collection";
import { faSquareCheck } from "@fortawesome/free-regular-svg-icons";
import { faFont, faHashtag, faList } from "@fortawesome/free-solid-svg-icons";
import { utcParse } from "d3-time-format";
@@ -71,8 +70,20 @@ export function parseDateTime(dateTime: string) {
return date.getTime();
}
export const hasEllipsis = (target: Element | undefined | null) =>
target != undefined && target.clientWidth < target.scrollWidth;
export const hasEllipsis = (
target: Element | undefined | null,
{ vertical = false }: { vertical?: boolean } = {}
) => {
if (target == null) {
return false;
}
if (vertical) {
return target.clientHeight < target.scrollHeight;
}
return target.clientWidth < target.scrollWidth;
};
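`hasEllipsis` now checks either axis; the vertical mode exists for the two-line `-webkit-line-clamp` label added in `UiProgressLegend`. A browser-free sketch of the same comparison logic, with the DOM element replaced by a plain `Box` object (a hypothetical type, for illustration only):

```typescript
type Box = {
  clientWidth: number;
  scrollWidth: number;
  clientHeight: number;
  scrollHeight: number;
};

const hasEllipsis = (
  target: Box | undefined | null,
  { vertical = false }: { vertical?: boolean } = {}
) => {
  if (target == null) {
    return false;
  }
  // horizontal: text wider than the box (text-overflow: ellipsis);
  // vertical: text taller than the box (-webkit-line-clamp)
  return vertical
    ? target.clientHeight < target.scrollHeight
    : target.clientWidth < target.scrollWidth;
};

// a label whose text overflows horizontally but fits vertically
const clipped: Box = {
  clientWidth: 100,
  scrollWidth: 180,
  clientHeight: 20,
  scrollHeight: 20,
};
console.log(hasEllipsis(clipped)); // true
console.log(hasEllipsis(clipped, { vertical: true })); // false
console.log(hasEllipsis(null)); // false
```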
export function percent(currentValue: number, maxValue: number, precision = 2) {
return round((currentValue / maxValue) * 100, precision);
@@ -171,15 +182,6 @@ export function parseRamUsage(
export const getFirst = <T>(value: T | T[]): T | undefined =>
Array.isArray(value) ? value[0] : value;
export function requireSubscription<T>(
subscription: T | undefined,
type: RawObjectType
): asserts subscription is T {
if (subscription === undefined) {
throw new Error(`You need to provide a ${type} subscription`);
}
}
export const isOperationsPending = (
obj: XenApiVm,
operations: string[] | string


@@ -17,6 +17,7 @@
"coming-soon": "Coming soon!",
"community": "Community",
"community-name": "{name} community",
"console": "Console",
"copy": "Copy",
"cpu-provisioning": "CPU provisioning",
"cpu-usage": "CPU usage",


@@ -17,6 +17,7 @@
"coming-soon": "Bientôt disponible !",
"community": "Communauté",
"community-name": "Communauté {name}",
"console": "Console",
"copy": "Copier",
"cpu-provisioning": "Provisionnement CPU",
"cpu-usage": "Utilisation CPU",


@@ -1,12 +1,11 @@
import pool from "@/router/pool";
import vm from "@/router/vm";
import HomeView from "@/views/HomeView.vue";
import HostDashboardView from "@/views/host/HostDashboardView.vue";
import HostRootView from "@/views/host/HostRootView.vue";
import PageNotFoundView from "@/views/PageNotFoundView.vue";
import SettingsView from "@/views/settings/SettingsView.vue";
import StoryView from "@/views/StoryView.vue";
import VmConsoleView from "@/views/vm/VmConsoleView.vue";
import VmRootView from "@/views/vm/VmRootView.vue";
import storiesRoutes from "virtual:stories";
import { createRouter, createWebHashHistory } from "vue-router";
@@ -31,6 +30,7 @@ const router = createRouter({
component: SettingsView,
},
pool,
vm,
{
path: "/host/:uuid",
component: HostRootView,
@@ -42,17 +42,6 @@ const router = createRouter({
},
],
},
{
path: "/vm/:uuid",
component: VmRootView,
children: [
{
path: "console",
name: "vm.console",
component: VmConsoleView,
},
],
},
{
path: "/:pathMatch(.*)*",
name: "notFound",


@@ -0,0 +1,47 @@
export default {
path: "/vm/:uuid",
component: () => import("@/views/vm/VmRootView.vue"),
redirect: { name: "vm.console" },
children: [
{
path: "dashboard",
name: "vm.dashboard",
component: () => import("@/views/vm/VmDashboardView.vue"),
},
{
path: "console",
name: "vm.console",
component: () => import("@/views/vm/VmConsoleView.vue"),
},
{
path: "alarms",
name: "vm.alarms",
component: () => import("@/views/vm/VmAlarmsView.vue"),
},
{
path: "stats",
name: "vm.stats",
component: () => import("@/views/vm/VmStatsView.vue"),
},
{
path: "system",
name: "vm.system",
component: () => import("@/views/vm/VmSystemView.vue"),
},
{
path: "network",
name: "vm.network",
component: () => import("@/views/vm/VmNetworkView.vue"),
},
{
path: "storage",
name: "vm.storage",
component: () => import("@/views/vm/VmStorageView.vue"),
},
{
path: "tasks",
name: "vm.tasks",
component: () => import("@/views/vm/VmTasksView.vue"),
},
],
};


@@ -1,21 +1,28 @@
import {
isHostRunning,
requireSubscription,
sortRecordsByNameLabel,
} from "@/libs/utils";
import type { GRANULARITY } from "@/libs/xapi-stats";
import type { XenApiHostMetrics } from "@/libs/xen-api";
import {
type CollectionSubscription,
useXapiCollectionStore,
} from "@/stores/xapi-collection.store";
import { isHostRunning, sortRecordsByNameLabel } from "@/libs/utils";
import type { GRANULARITY, XapiStatsResponse } from "@/libs/xapi-stats";
import type { XenApiHost, XenApiHostMetrics } from "@/libs/xen-api";
import { useXapiCollectionStore } from "@/stores/xapi-collection.store";
import { useXenApiStore } from "@/stores/xen-api.store";
import type { CollectionSubscription } from "@/types/xapi-collection";
import { defineStore } from "pinia";
import { computed } from "vue";
import { computed, type ComputedRef } from "vue";
type SubscribeOptions = {
hostMetricsSubscription?: CollectionSubscription<XenApiHostMetrics>;
};
type MetricsSubscription = CollectionSubscription<XenApiHostMetrics>;
interface HostSubscribeOptions<M extends undefined | MetricsSubscription> {
hostMetricsSubscription?: M;
}
interface HostSubscription extends CollectionSubscription<XenApiHost> {
getStats: (
hostUuid: string,
granularity: GRANULARITY
) => Promise<XapiStatsResponse<any>>;
}
interface HostSubscriptionWithRunningHosts extends HostSubscription {
runningHosts: ComputedRef<XenApiHost[]>;
}
export const useHostStore = defineStore("host", () => {
const xenApiStore = useXenApiStore();
@@ -23,17 +30,19 @@ export const useHostStore = defineStore("host", () => {
hostCollection.setSort(sortRecordsByNameLabel);
const subscribe = ({ hostMetricsSubscription }: SubscribeOptions = {}) => {
function subscribe(
options?: HostSubscribeOptions<undefined>
): HostSubscription;
function subscribe(
options?: HostSubscribeOptions<MetricsSubscription>
): HostSubscriptionWithRunningHosts;
function subscribe({
hostMetricsSubscription,
}: HostSubscribeOptions<undefined | MetricsSubscription> = {}) {
const hostSubscription = hostCollection.subscribe();
const runningHosts = computed(() => {
requireSubscription(hostMetricsSubscription, "host_metrics");
return hostSubscription.records.value.filter((host) =>
isHostRunning(host, hostMetricsSubscription)
);
});
const getStats = (hostUuid: string, granularity: GRANULARITY) => {
const host = hostSubscription.getByUuid(hostUuid);
@@ -52,12 +61,26 @@ export const useHostStore = defineStore("host", () => {
});
};
return {
const subscription = {
...hostSubscription,
runningHosts,
getStats,
};
};
if (hostMetricsSubscription === undefined) {
return subscription;
}
const runningHosts = computed(() =>
hostSubscription.records.value.filter((host) =>
isHostRunning(host, hostMetricsSubscription)
)
);
return {
...subscription,
runningHosts,
};
}
return {
...hostCollection,
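The host store's conditional return shape relies on TypeScript overload signatures: callers that pass a metrics subscription get `runningHosts` in the inferred return type, and callers that do not never see it. A reduced sketch of the pattern (the names `BaseSubscription`, `extras`, etc. are illustrative, not the store's actual API):

```typescript
interface BaseSubscription {
  records: string[];
}
interface SubscriptionWithExtras extends BaseSubscription {
  running: string[];
}

// Overloads: the option's presence at the call site picks the return type.
function subscribe(options?: { extras?: undefined }): BaseSubscription;
function subscribe(options: { extras: true }): SubscriptionWithExtras;
function subscribe({ extras }: { extras?: boolean } = {}) {
  const subscription: BaseSubscription = { records: ["a", "b"] };
  if (!extras) {
    return subscription;
  }
  // Only computed when the caller opted in, mirroring runningHosts.
  return { ...subscription, running: subscription.records.filter((r) => r === "a") };
}

const plain = subscribe(); // typed as BaseSubscription — no `running`
const rich = subscribe({ extras: true }); // typed as SubscriptionWithExtras
```

The overloads replace the former `requireSubscription` runtime assertion with a compile-time guarantee: accessing `running` on `plain` is a type error rather than a thrown exception.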


@@ -4,13 +4,10 @@ import type { Options } from "placement.js";
import { type EffectScope, computed, effectScope, ref } from "vue";
import { type WindowEventName, useEventListener } from "@vueuse/core";
export type TooltipOptions =
| string
| {
content: string;
placement?: Options["placement"];
disabled?: boolean | ((target: HTMLElement) => boolean);
};
export type TooltipOptions = {
content: string | false;
placement: Options["placement"];
};
export type TooltipEvents = { on: WindowEventName; off: WindowEventName };


@@ -5,7 +5,7 @@ import { computed, ref } from "vue";
export const useUiStore = defineStore("ui", () => {
const currentHostOpaqueRef = ref();
const colorMode = useColorMode({ emitAuto: true, initialValue: "dark" });
const { store: colorMode } = useColorMode({ initialValue: "dark" });
const { desktop: isDesktop } = useBreakpoints({
desktop: 1024,


@@ -1,18 +1,30 @@
import { requireSubscription, sortRecordsByNameLabel } from "@/libs/utils";
import type { GRANULARITY } from "@/libs/xapi-stats";
import { sortRecordsByNameLabel } from "@/libs/utils";
import type { GRANULARITY, XapiStatsResponse } from "@/libs/xapi-stats";
import type { XenApiHost, XenApiVm } from "@/libs/xen-api";
import {
type CollectionSubscription,
useXapiCollectionStore,
} from "@/stores/xapi-collection.store";
import { useXapiCollectionStore } from "@/stores/xapi-collection.store";
import { useXenApiStore } from "@/stores/xen-api.store";
import type { CollectionSubscription } from "@/types/xapi-collection";
import { defineStore } from "pinia";
import { computed } from "vue";
import { computed, type ComputedRef } from "vue";
type SubscribeOptions = {
hostSubscription?: CollectionSubscription<XenApiHost>;
type HostSubscription = CollectionSubscription<XenApiHost>;
type VmSubscribeOptions<H extends undefined | HostSubscription> = {
hostSubscription?: H;
};
interface VmSubscription extends CollectionSubscription<XenApiVm> {
recordsByHostRef: ComputedRef<Map<string, XenApiVm[]>>;
runningVms: ComputedRef<XenApiVm[]>;
}
interface VmSubscriptionWithGetStats extends VmSubscription {
getStats: (
id: string,
granularity: GRANULARITY
) => Promise<XapiStatsResponse<any>>;
}
export const useVmStore = defineStore("vm", () => {
const vmCollection = useXapiCollectionStore().get("VM");
@@ -22,7 +34,15 @@ export const useVmStore = defineStore("vm", () => {
vmCollection.setSort(sortRecordsByNameLabel);
const subscribe = ({ hostSubscription }: SubscribeOptions = {}) => {
function subscribe(options?: VmSubscribeOptions<undefined>): VmSubscription;
function subscribe(
options?: VmSubscribeOptions<HostSubscription>
): VmSubscriptionWithGetStats;
function subscribe({
hostSubscription,
}: VmSubscribeOptions<undefined | HostSubscription> = {}) {
const vmSubscription = vmCollection.subscribe();
const recordsByHostRef = computed(() => {
@@ -43,9 +63,17 @@ export const useVmStore = defineStore("vm", () => {
vmSubscription.records.value.filter((vm) => vm.power_state === "Running")
);
const getStats = (id: string, granularity: GRANULARITY) => {
requireSubscription(hostSubscription, "host");
const subscription = {
...vmSubscription,
recordsByHostRef,
runningVms,
};
if (hostSubscription === undefined) {
return subscription;
}
const getStats = (id: string, granularity: GRANULARITY) => {
const xenApiStore = useXenApiStore();
if (!xenApiStore.isConnected) {
@@ -72,12 +100,10 @@ export const useVmStore = defineStore("vm", () => {
};
return {
...vmSubscription,
recordsByHostRef,
...subscription,
getStats,
runningVms,
};
};
}
return {
...vmCollection,


@@ -1,20 +1,14 @@
import type {
RawObjectType,
XenApiConsole,
XenApiHost,
XenApiHostMetrics,
XenApiPool,
XenApiRecord,
XenApiSr,
XenApiTask,
XenApiVm,
XenApiVmGuestMetrics,
XenApiVmMetrics,
} from "@/libs/xen-api";
import type { RawObjectType, XenApiRecord } from "@/libs/xen-api";
import { useXenApiStore } from "@/stores/xen-api.store";
import type {
CollectionSubscription,
DeferredCollectionSubscription,
RawTypeToObject,
SubscribeOptions,
} from "@/types/xapi-collection";
import { tryOnUnmounted, whenever } from "@vueuse/core";
import { defineStore } from "pinia";
import { computed, type ComputedRef, readonly, type Ref, ref } from "vue";
import { computed, readonly, ref } from "vue";
export const useXapiCollectionStore = defineStore("xapiCollection", () => {
const collections = ref(
@@ -35,18 +29,6 @@ export const useXapiCollectionStore = defineStore("xapiCollection", () => {
return { get };
});
export interface CollectionSubscription<T extends XenApiRecord> {
records: ComputedRef<T[]>;
getByOpaqueRef: (opaqueRef: string) => T | undefined;
getByUuid: (uuid: string) => T | undefined;
hasUuid: (uuid: string) => boolean;
isReady: Readonly<Ref<boolean>>;
isFetching: Readonly<Ref<boolean>>;
isReloading: ComputedRef<boolean>;
hasError: ComputedRef<boolean>;
lastError: Readonly<Ref<string | undefined>>;
}
const createXapiCollection = <T extends XenApiRecord>(type: RawObjectType) => {
const isReady = ref(false);
const isFetching = ref(false);
@@ -123,16 +105,30 @@ const createXapiCollection = <T extends XenApiRecord>(type: RawObjectType) => {
() => fetchAll()
);
const subscribe = () => {
function subscribe(
options?: SubscribeOptions<true>
): CollectionSubscription<T>;
function subscribe(
options: SubscribeOptions<false>
): DeferredCollectionSubscription<T>;
function subscribe(
options: SubscribeOptions<boolean>
): CollectionSubscription<T> | DeferredCollectionSubscription<T>;
function subscribe({ immediate = true }: SubscribeOptions<boolean> = {}) {
const id = Symbol();
subscriptions.value.add(id);
if (immediate) {
subscriptions.value.add(id);
}
tryOnUnmounted(() => {
unsubscribe(id);
});
return {
const subscription = {
records,
getByOpaqueRef,
getByUuid,
@@ -143,7 +139,17 @@ const createXapiCollection = <T extends XenApiRecord>(type: RawObjectType) => {
hasError,
lastError: readonly(lastError),
};
};
if (immediate) {
return subscription;
}
return {
...subscription,
start: () => subscriptions.value.add(id),
isStarted: computed(() => subscriptions.value.has(id)),
};
}
const unsubscribe = (id: symbol) => subscriptions.value.delete(id);
@@ -158,59 +164,3 @@ const createXapiCollection = <T extends XenApiRecord>(type: RawObjectType) => {
setSort,
};
};
type RawTypeToObject = {
Bond: never;
Certificate: never;
Cluster: never;
Cluster_host: never;
DR_task: never;
Feature: never;
GPU_group: never;
PBD: never;
PCI: never;
PGPU: never;
PIF: never;
PIF_metrics: never;
PUSB: never;
PVS_cache_storage: never;
PVS_proxy: never;
PVS_server: never;
PVS_site: never;
SDN_controller: never;
SM: never;
SR: XenApiSr;
USB_group: never;
VBD: never;
VBD_metrics: never;
VDI: never;
VGPU: never;
VGPU_type: never;
VIF: never;
VIF_metrics: never;
VLAN: never;
VM: XenApiVm;
VMPP: never;
VMSS: never;
VM_guest_metrics: XenApiVmGuestMetrics;
VM_metrics: XenApiVmMetrics;
VUSB: never;
blob: never;
console: XenApiConsole;
crashdump: never;
host: XenApiHost;
host_cpu: never;
host_crashdump: never;
host_metrics: XenApiHostMetrics;
host_patch: never;
network: never;
network_sriov: never;
pool: XenApiPool;
pool_patch: never;
pool_update: never;
role: never;
secret: never;
subject: never;
task: XenApiTask;
tunnel: never;
};


@@ -0,0 +1,11 @@
```vue-template
<FormInputGroup>
<FormInput />
<FormInput />
<FormSelect>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</FormSelect>
</FormInputGroup>
```


@@ -0,0 +1,23 @@
<template>
<ComponentStory
:params="[slot().help('Can contain multiple FormInput and FormSelect')]"
>
<FormInputGroup>
<FormInput />
<FormInput />
<FormSelect>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</FormSelect>
</FormInputGroup>
</ComponentStory>
</template>
<script lang="ts" setup>
import ComponentStory from "@/components/component-story/ComponentStory.vue";
import FormInput from "@/components/form/FormInput.vue";
import FormInputGroup from "@/components/form/FormInputGroup.vue";
import FormSelect from "@/components/form/FormSelect.vue";
import { slot } from "@/libs/story/story-param";
</script>

Some files were not shown because too many files have changed in this diff.