Compare commits


86 Commits

Author SHA1 Message Date
Florent Beauchamp
42ddadac7b feat(xo-web/backups): improve incremental/key backup display
When the backup is completely composed of key backups or completely composed of
differencing backups, the number of disks can be hard to understand.
In this case, this PR will only show the number if the backup is mixed.
2023-12-20 10:55:14 +00:00
Mathieu
41ed5625be fix(xo-server/RPU): correctly migrate VMs to their original host (#7238)
See zammad#19772
Introduced by bea771
2023-12-19 19:22:50 +01:00
Mathieu
e66bcf2a5c feat(xapi/VDI_exportContent): create XAPI task during NBD export (#7228)
See zammad#19003
2023-12-19 19:16:44 +01:00
b-Nollet
c40e71ed49 fix(xo-web/backup): fix reportWhen being reset to undefined (#7235)
See Zammad#17506

Prevents the `reportWhen` value from becoming undefined (and resetting to the
default value _Failure_) when a mirror backup is edited without modifying this
field.
2023-12-19 17:08:19 +01:00
Florent BEAUCHAMP
439c721472 refactor(backups/cleanVm) use of merge parameter 2023-12-19 15:42:59 +01:00
Florent Beauchamp
99429edf23 test(backups/cleanVm): fix after changes 2023-12-19 15:42:59 +01:00
Florent Beauchamp
cec8237a47 feat(xo-web): show if an incremental backup is a delta or a full 2023-12-19 15:42:59 +01:00
Florent Beauchamp
e13d55bfa9 feat(backups): store if disks of incremental backups are differencing in metadata 2023-12-19 15:42:59 +01:00
Florent Beauchamp
141c141516 refactor(backups): differentialVhds to isVhdDifferencing 2023-12-19 15:42:59 +01:00
Florent Beauchamp
7a47d23191 feat(backups): show warning message if xva are truncated 2023-12-19 15:42:59 +01:00
Florent Beauchamp
7a8bf671fb feat(backups): put back warning message on truncated vhds 2023-12-19 15:42:59 +01:00
Florent Beauchamp
7f83a3e55e fix(backup, vhd-lib): merge now return the resulting size and merged size 2023-12-19 15:42:59 +01:00
Florent Beauchamp
7f8ab07692 fix(backup): transferred backup size for incremental backup 2023-12-19 15:42:59 +01:00
Julien Fontanet
2634008a6a docs(VM backups): fix settings link 2023-12-19 15:41:39 +01:00
Florent Beauchamp
4c652a457f fixup! refactor(nbd-client): remove unused method and default iterator 2023-12-19 15:28:32 +01:00
Florent Beauchamp
89dc40a1c5 fixup! feat(nbd-client,xapi): implement multiple connections NBD 2023-12-19 15:28:32 +01:00
Florent Beauchamp
04a7982801 test(nbd-client): add test for unaligned export size 2023-12-19 15:28:32 +01:00
Florent Beauchamp
df9b59f980 fix(nbd-client): better handling of multiple disconnection 2023-12-19 15:28:32 +01:00
Florent Beauchamp
fe215a53af feat(nbd-client): expose exportSize on multi nbd client 2023-12-19 15:28:32 +01:00
Florent Beauchamp
0559c843c4 fix(nbd-client): put back retry on block read 2023-12-19 15:28:32 +01:00
Florent Beauchamp
79967e0eec feat(nbd-client/multi): allow partial connexion 2023-12-19 15:28:32 +01:00
Florent Beauchamp
847ad63c09 feat(backups,xo-server,xo-web): make NBD concurrency configurable 2023-12-19 15:28:32 +01:00
Florent Beauchamp
fc1357db93 refactor(nbd-client): remove unused method and default iterator 2023-12-19 15:28:32 +01:00
Florent Beauchamp
b644cbe28d feat(nbd-client,xapi): implement multiple connections NBD 2023-12-19 15:28:32 +01:00
Florent Beauchamp
7ddfb2a684 fix(vhd-lib/createStreamNbd): create binary instead of object stream
This reduces memory consumption.
2023-12-19 15:28:32 +01:00
Julien Fontanet
5a0cfd86c7 fix(xo-server/remote#_unserialize): handle boolean enabled
Introduced by 32afd5c46

Fixes #7246
Fixes https://xcp-ng.org/forum/post/68575
2023-12-19 15:11:34 +01:00
Julien Fontanet
70e3ba17af fix(CHANGELOG.unreleased): metadata → mirror
Introduced by 2b973275c
2023-12-19 15:07:01 +01:00
Julien Fontanet
4784bbfb99 fix(xo-server/vm.migrate): dont hide original error 2023-12-18 15:02:51 +01:00
MlssFrncJrg
ceddddd7f2 fix(xo-server/PIF): call PIF.forget to delete PIF (#7221)
Fixes #7193

Calling `pif.destroy` generates a PIF_IS_PHYSICAL error when the PIF has been
physically removed.
2023-12-18 14:41:58 +01:00
Julien Fontanet
32afd5c463 feat(xo-server): save JSON records in Redis (#7113)
Better support of non-string properties without having to handle them one by one.

To avoid breaking compatibility with older versions, this should not be merged until reading JSON records has been supported for at least a month.
2023-12-18 14:26:51 +01:00
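The gain is easiest to see with a round-trip: a flat string hash coerces every value, while a JSON record keeps booleans and numbers intact. A standalone sketch (illustrative record and names, not xo-server's actual Redis layer):

```javascript
// A record with non-string properties, as xo-server might store one.
const record = { name: 'remote1', enabled: true, retries: 3 }

// Hash-style storage: every value is coerced to a string, so each
// property needs custom handling to get its type back.
const asHash = Object.fromEntries(
  Object.entries(record).map(([k, v]) => [k, String(v)])
)

// JSON-style storage: one string per record, types survive the round-trip.
const asJson = JSON.stringify(record)
const restored = JSON.parse(asJson)
```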
Julien Fontanet
ac391f6a0f docs(incremental_backups): fix image path
Introduced by a0b50b47e
2023-12-18 11:45:23 +01:00
Manon Mercier
a0b50b47ef docs(incremental_backups): explain NBD (#7240) 2023-12-18 11:41:20 +01:00
b-Nollet
e3618416bf fix(xo-server-transport-slack): slack-node → @slack/webhook (#7220)
Fixes #7130

Changes the package used for Slack webhooks to ensure compatibility with Slack-compatible services like Discord or Mattermost.
2023-12-18 10:56:54 +01:00
Julien Fontanet
37fd6d13db chore(xo-server-transport-email): dont use deprecated nodemailer-markdown
BREAKING CHANGE: now requires Node >=18.

Permits the use of a more recent `marked`.
2023-12-17 11:56:18 +01:00
Julien Fontanet
eb56666f98 chore(xen-api,xo-server): update to proxy-agent@6.3.1 2023-12-16 16:41:10 +01:00
Julien Fontanet
b7daee81c0 feat(xo-server): http.useForwardedHeaders (#7233)
Fixes https://xcp-ng.org/forum/post/67625

This setting can be enabled when XO is behind a reverse proxy to
fetch clients IP addresses from `X-Forwarded-*` headers.
2023-12-16 12:34:58 +01:00
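The mechanism such a setting enables can be sketched as follows (illustrative only, not xo-server's actual implementation): when the reverse proxy is trusted, the client address comes from the left-most `X-Forwarded-For` entry instead of the socket peer.

```javascript
// Resolve the client IP, optionally trusting X-Forwarded-* headers set
// by a reverse proxy in front of the server.
function clientAddress(req, useForwardedHeaders) {
  const forwarded = req.headers['x-forwarded-for']
  if (useForwardedHeaders && forwarded !== undefined) {
    // left-most entry is the original client; later entries are proxies
    return forwarded.split(',')[0].trim()
  }
  return req.socket.remoteAddress
}
```

The flag matters because trusting these headers from arbitrary clients would let them spoof their address; it should only be enabled behind a proxy that overwrites them.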
Julien Fontanet
bee0eb9091 chore: update dev deps 2023-12-15 14:09:18 +01:00
Julien Fontanet
59a9a63971 feat(xo-server/store): ensure leveldb only accessible to current user 2023-12-13 11:36:31 +01:00
Julien Fontanet
a2e8b999da feat(xo-server-auth-saml): forceAuthn setting (#7232)
Fixes https://xcp-ng.org/forum/post/67764
2023-12-13 11:25:16 +01:00
OlivierFL
489ad51b4d feat(lite): add new UiStatusPanel component (#7227) 2023-12-12 11:44:22 +01:00
Julien Fontanet
7db2516a38 chore: update dev deps 2023-12-12 10:30:11 +01:00
Julien Fontanet
1141ef524f fix(xapi/host_smartReboot): retries when HOST_STILL_BOOTING (#7231)
Fixes #7194
2023-12-11 16:04:43 +01:00
OlivierFL
f449258ed3 feat(lite): add indeterminate state on FormToggle component (#7230) 2023-12-11 14:48:24 +01:00
Julien Fontanet
bb3b83c690 fix(xo-server/rest-api): proper 404 in case of missing backup job 2023-12-08 15:19:48 +01:00
Julien Fontanet
2b973275c0 feat(xo-server/rest-api): expose metadata & mirror backup jobs 2023-12-08 15:17:51 +01:00
Julien Fontanet
037e1c1dfa feat(xo-server/rest-api): /backups → /backup 2023-12-08 15:14:06 +01:00
Julien Fontanet
f0da94081b feat(gen-deps-list): detect duplicate packages
Prevents a bug where a second entry would override the previous one and possibly
decrease the release type (e.g. `major + patch → patch`).
2023-12-07 17:15:09 +01:00
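The bug being guarded against: with duplicate entries, a naive last-write-wins merge can silently downgrade the release type. A sketch of the failure and one possible fix (helper names are illustrative, not gen-deps-list's code):

```javascript
// Release types ordered by precedence.
const RELEASE_WEIGHT = { patch: 0, minor: 1, major: 2 }

// Keep the highest of two release types.
function mergeReleaseTypes(a, b) {
  return RELEASE_WEIGHT[a] >= RELEASE_WEIGHT[b] ? a : b
}

const entries = [['xo-server', 'major'], ['xo-server', 'patch']]

// Last-write-wins: a `major` entry followed by a `patch` entry
// silently downgrades the release.
const naive = {}
for (const [pkg, type] of entries) {
  naive[pkg] = type
}

// Safe: keep the highest type (or, as gen-deps-list now does, detect
// the duplicate and fail loudly).
const safe = {}
for (const [pkg, type] of entries) {
  safe[pkg] = safe[pkg] === undefined ? type : mergeReleaseTypes(safe[pkg], type)
}
```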
Julien Fontanet
cd44a6e28c feat(eslint): enable require-atomic-updates rule 2023-12-07 17:05:21 +01:00
Julien Fontanet
70b09839c7 chore(xo-server): use @xen-orchestra/xapi/VM_import when possible 2023-12-07 16:50:45 +01:00
OlivierFL
12140143d2 feat(lite): added tooltip on CPU provisioning warning icon (#7223) 2023-12-07 09:07:15 +01:00
b-Nollet
e68236c9f2 docs(installation): update Debian & Fedora packages (#7207)
Fixes #7095
2023-12-06 15:39:50 +01:00
Julien Fontanet
8a1a0d76f7 chore: update dev deps 2023-12-06 11:09:54 +01:00
Mathieu
4a5bc5dccc feat(lite): override host address with 'master' query param (#7187) 2023-12-04 11:31:35 +01:00
MlssFrncJrg
0ccdfbd6f4 feat(xo-web/SR): improve forget SR modal message (#7155) 2023-12-04 09:33:50 +01:00
Mathieu
75af7668b5 fix(lite/changelog): fix xolite changelog (#7215) 2023-12-01 10:48:22 +01:00
Thierry Goettelmann
0b454fa670 feat(lite/VM): ability to migrate a VM (#7164) 2023-12-01 10:38:55 +01:00
Pierre Donias
2dcb5cb7cd feat(lite): 0.1.6 (#7213) 2023-11-30 16:01:06 +01:00
Thierry Goettelmann
a5aeeceb7f chore(lite): upgrade dependencies (#7170)
1. Since the project is built-only, all deps have been moved to `devDependencies`
2. TypeScript has been upgraded from 4.9 to 5.2
3. `engines.node` requirement was set to `>=8.10`. It has been updated to `>=18` to be aligned with deps requirements
2023-11-30 15:13:36 +01:00
Florent BEAUCHAMP
b2f2c3cbc4 feat: release 5.89.0 (#7212) 2023-11-30 13:57:49 +01:00
Florent BEAUCHAMP
0f7ac004ad feat: technical release (#7211) 2023-11-30 10:42:54 +01:00
Florent Beauchamp
7faa82a9c8 feat(xo-web): add UX for differential backup 2023-11-30 10:12:20 +01:00
Florent Beauchamp
4b3f60b280 feat(backups): implement differential restore
When restoring a backup, try to reuse data from an existing snapshot.
We use this snapshot and apply a reverse differential of the changes
between the backup and the snapshot.

Prerequisite: an uninterrupted delta chain from the backup being
restored to a backup that still has its snapshot on the host.
2023-11-30 10:12:20 +01:00
Florent Beauchamp
b29d5ba95c feat(vhd-lib): implement a limit in VhdSynthetic.fromVhdChain 2023-11-30 10:12:20 +01:00
Florent Beauchamp
408fc5af84 feat(vhd-lib): implement VhdNegative
It's a virtual VHD that contains all the changes to be applied
to reset a child to its parent value.
2023-11-30 10:12:20 +01:00
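The idea behind a negative VHD can be sketched at the block level (a conceptual model only, not vhd-lib's actual API): every block the descendant chain touched must be rewritten with the parent's content to rewind the child back to the parent state.

```javascript
// Conceptual "negative diff": for each block modified by the descendant
// chain, emit the parent's version of that block (or a zero block if
// the parent never allocated it). Untouched blocks need no rewrite.
function makeNegative(parentBlocks, descendantBlocks) {
  // parentBlocks / descendantBlocks: Map<blockId, blockContent>
  const negative = new Map()
  for (const id of descendantBlocks.keys()) {
    negative.set(id, parentBlocks.get(id) ?? 'zeroes')
  }
  return negative
}
```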
Florent BEAUCHAMP
2748aea4e9 feat: technical release (#7210) 2023-11-29 15:39:25 +01:00
Florent Beauchamp
a5acc7d267 fix(backups,xo-server): don't backup VMs created by Health Check 2023-11-29 14:46:05 +01:00
Florent Beauchamp
87a9fbe237 feat(xo-server,xo-web): don't backup VMs with xo:no-bak tag 2023-11-29 14:46:05 +01:00
Julien Fontanet
9d0b7242f0 fix(xapi-explore-sr): use xen-api@2.0.0 2023-11-29 14:42:50 +01:00
Julien Fontanet
20ec44c3b3 fix(xo-server/registerHttpRequestHandler): match on path
Instead of the full URL, so that handlers can manage the query string.
2023-11-29 14:42:50 +01:00
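The change boils down to comparing the parsed pathname instead of the raw URL, so a handler registered for `/foo` still receives `/foo?bar=baz` and can read the query string itself. A sketch (not xo-server's actual code):

```javascript
// Match a registered path against an incoming request URL while
// exposing the query string to the handler.
function matches(registeredPath, rawUrl) {
  // the base is only needed because rawUrl is path-relative
  const { pathname, searchParams } = new URL(rawUrl, 'http://localhost')
  return { matched: pathname === registeredPath, searchParams }
}
```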
Julien Fontanet
6f68456bae feat(xo-server/registerHttpRequestHandler): returns teardown function
`unregisterHttpRequestHandler` is no longer useful and has been removed.
2023-11-29 14:42:50 +01:00
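This follows the common register-returns-teardown pattern, sketched here generically (not xo-server's actual implementation):

```javascript
// A registry whose register() returns a teardown closure, making a
// separate unregister function unnecessary.
function createRegistry() {
  const handlers = new Set()
  return {
    register(handler) {
      handlers.add(handler)
      return () => handlers.delete(handler) // teardown
    },
    size: () => handlers.size,
  }
}
```

The caller keeps the returned function and invokes it to clean up, which also avoids passing the handler around twice.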
Florent BEAUCHAMP
b856c1a6b4 feat(xo-server,xo-web): show link to the SR for the garbage collector (coalesce) task (#7189)
See https://github.com/vatesfr/xen-orchestra/issues/5379#issuecomment-1765170973
2023-11-29 09:07:05 +01:00
Julien Fontanet
61e1f83a9f feat(xo-server/rest-api): possibility to import a VM 2023-11-28 17:54:10 +01:00
Mathieu
5820e19731 feat(xo-web/VM): display task information on VDI import (#7197) 2023-11-28 15:41:10 +01:00
Pierre Donias
cdb51f8fe3 chore(lite/settings): use FormSelect instead of select (#7206) 2023-11-28 14:46:25 +01:00
Florent BEAUCHAMP
57940e0a52 fix(backups): import on non default SR (#7209) 2023-11-28 14:35:08 +01:00
Florent BEAUCHAMP
6cc95efe51 feat: technical release (#7208) 2023-11-28 09:30:32 +01:00
Pierre Donias
b0ff2342ab chore(netbox): remove null-indexed entries from keyed-by collections (#7156) 2023-11-27 16:26:53 +01:00
Mathieu
0f67692be4 feat(xo-server/xostor): add XO tasks (#7201) 2023-11-27 16:11:53 +01:00
Julien Fontanet
865461bfb9 feat(xo-server/api): backupNg.{,un}mountPartition (#7176)
Manual method to mount a backup partition on the XOA.
2023-11-24 09:47:23 +01:00
Julien Fontanet
e108cb0990 feat(xo-server/rest-api): possibility to import in an existing VDI (#7199) 2023-11-23 17:07:40 +01:00
Florent BEAUCHAMP
c4535c6bae fix(fs/s3): enable md5 if object lock status is unknown (#7195)
From https://xcp-ng.org/forum/topic/7939/unable-to-connect-to-backblaze-b2/7?_=1700572613725
Following 796e2ab674 

A user reported that it fixes the issue: https://xcp-ng.org/forum/post/67633
2023-11-23 16:43:25 +01:00
Julien Fontanet
ad8eaaa771 feat(xo-cli): support REST PUT method 2023-11-23 16:30:03 +01:00
Julien Fontanet
9419cade3d feat(xo-server/rest-api): tags property can be updated 2023-11-23 16:30:03 +01:00
Julien Fontanet
272e6422bd chore(xapi/VM_import): typo snapshots → snapshot 2023-11-23 16:28:30 +01:00
Julien Fontanet
547908a8f9 chore(xo-server/proxy.checkHealth): call checkProxyHealth 2023-11-23 16:28:29 +01:00
Mathieu
8abfaa0bd5 feat(lite/VM): ability to export a VM (#7190) 2023-11-23 11:00:38 +01:00
157 changed files with 8960 additions and 2757 deletions

View File

@@ -68,6 +68,11 @@ module.exports = {
'no-console': ['error', { allow: ['warn', 'error'] }],
// this rule can prevent race condition bugs like parallel `a += await foo()`
//
// as it has a lot of false positives, it is only enabled as a warning for now
'require-atomic-updates': 'warn',
strict: 'error',
},
}
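The race this rule catches is easy to reproduce: `a += await foo()` reads `a` before the `await`, so two concurrent updates can both observe the old value and one increment is lost.

```javascript
let a = 0

async function foo() {
  return 1
}

// `a += await foo()` desugars to `a = a + (await foo())`, and the read
// of `a` happens before suspending on the await.
async function unsafeInc() {
  a += await foo()
}

async function demo() {
  // both calls read a === 0 before either writes, so one +1 is lost
  await Promise.all([unsafeInc(), unsafeInc()])
  return a
}
```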

View File

@@ -14,7 +14,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.4",
"version": "0.1.5",
"engines": {
"node": ">=8.10"
},
@@ -23,7 +23,7 @@
"test": "node--test"
},
"dependencies": {
"@vates/multi-key-map": "^0.1.0",
"@vates/multi-key-map": "^0.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"ensure-array": "^1.0.0"

View File

@@ -22,7 +22,7 @@
"fuse-native": "^2.2.6",
"lru-cache": "^7.14.0",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.6.1"
"vhd-lib": "^4.7.0"
},
"scripts": {
"postversion": "npm publish --access public"

View File

@@ -17,4 +17,14 @@ map.get(['foo', 'bar']) // 2
map.get(['bar', 'foo']) // 3
map.get([OBJ]) // 4
map.get([{}]) // undefined
map.delete([])
for (const [key, value] of map.entries()) {
console.log(key, value)
}
for (const value of map.values()) {
console.log(value)
}
```

View File

@@ -35,6 +35,16 @@ map.get(['foo', 'bar']) // 2
map.get(['bar', 'foo']) // 3
map.get([OBJ]) // 4
map.get([{}]) // undefined
map.delete([])
for (const [key, value] of map.entries()) {
console.log(key, value)
}
for (const value of map.values()) {
console.log(value)
}
```
## Contributions

View File

@@ -36,6 +36,23 @@ function del(node, i, keys) {
return node
}
function* entries(node, key) {
if (node !== undefined) {
if (node instanceof Node) {
const { value } = node
if (value !== undefined) {
yield [key, node.value]
}
for (const [childKey, child] of node.children.entries()) {
yield* entries(child, key.concat(childKey))
}
} else {
yield [key, node]
}
}
}
function get(node, i, keys) {
return i === keys.length
? node instanceof Node
@@ -69,6 +86,22 @@ function set(node, i, keys, value) {
return node
}
function* values(node) {
if (node !== undefined) {
if (node instanceof Node) {
const { value } = node
if (value !== undefined) {
yield node.value
}
for (const child of node.children.values()) {
yield* values(child)
}
} else {
yield node
}
}
}
exports.MultiKeyMap = class MultiKeyMap {
constructor() {
// each node is either a value or a Node if it contains children
@@ -79,6 +112,10 @@ exports.MultiKeyMap = class MultiKeyMap {
this._root = del(this._root, 0, keys)
}
entries() {
return entries(this._root, [])
}
get(keys) {
return get(this._root, 0, keys)
}
@@ -86,4 +123,8 @@ exports.MultiKeyMap = class MultiKeyMap {
set(keys, value) {
this._root = set(this._root, 0, keys, value)
}
values() {
return values(this._root)
}
}
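The recursive `entries`/`values` walk added here can be sketched on a minimal trie-backed map (a simplified stand-in, not `@vates/multi-key-map`'s real implementation, which also handles leaf values without a Node wrapper):

```javascript
// Minimal multi-key map: keys are arrays, values live in a trie, and
// entries() walks the trie recursively, rebuilding each composite key.
class TinyMultiKeyMap {
  #root = { value: undefined, children: new Map() }

  set(keys, value) {
    let node = this.#root
    for (const k of keys) {
      if (!node.children.has(k)) {
        node.children.set(k, { value: undefined, children: new Map() })
      }
      node = node.children.get(k)
    }
    node.value = value
  }

  get(keys) {
    let node = this.#root
    for (const k of keys) {
      node = node.children.get(k)
      if (node === undefined) return undefined
    }
    return node.value
  }

  *entries(node = this.#root, key = []) {
    if (node.value !== undefined) yield [key, node.value]
    for (const [childKey, child] of node.children) {
      yield* this.entries(child, key.concat(childKey))
    }
  }

  *values() {
    for (const [, value] of this.entries()) yield value
  }
}
```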

View File

@@ -19,7 +19,7 @@ describe('MultiKeyMap', () => {
// reverse composite key
['bar', 'foo'],
]
const values = keys.map(() => ({}))
const values = keys.map(() => Math.random())
// set all values first to make sure they are all stored and not only the
// last one
@@ -27,6 +27,12 @@ describe('MultiKeyMap', () => {
map.set(key, values[i])
})
assert.deepEqual(
Array.from(map.entries()),
keys.map((key, i) => [key, values[i]])
)
assert.deepEqual(Array.from(map.values()), values)
keys.forEach((key, i) => {
// copy the key to make sure the array itself is not the key
assert.strictEqual(map.get(key.slice()), values[i])

View File

@@ -18,7 +18,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "0.1.0",
"version": "0.2.0",
"engines": {
"node": ">=8.10"
},

View File

@@ -4,7 +4,6 @@ import { connect } from 'node:tls'
import { fromCallback, pRetry, pDelay, pTimeout, pFromCallback } from 'promise-toolbox'
import { readChunkStrict } from '@vates/read-chunk'
import { createLogger } from '@xen-orchestra/log'
import {
INIT_PASSWD,
NBD_CMD_READ,
@@ -21,8 +20,6 @@ import {
OPTS_MAGIC,
NBD_CMD_DISC,
} from './constants.mjs'
import { Readable } from 'node:stream'
const { warn } = createLogger('vates:nbd-client')
// documentation is here : https://github.com/NetworkBlockDevice/nbd/blob/master/doc/proto.md
@@ -125,6 +122,8 @@ export default class NbdClient {
if (!this.#connected) {
return
}
this.#connected = false
const socket = this.#serverSocket
const queryId = this.#nextCommandQueryId
this.#nextCommandQueryId++
@@ -137,12 +136,12 @@ export default class NbdClient {
buffer.writeBigUInt64BE(0n, 16)
buffer.writeInt32BE(0, 24)
const promise = pFromCallback(cb => {
this.#serverSocket.end(buffer, 'utf8', cb)
socket.end(buffer, 'utf8', cb)
})
try {
await pTimeout.call(promise, this.#messageTimeout)
} catch (error) {
this.#serverSocket.destroy()
socket.destroy()
}
this.#serverSocket = undefined
this.#connected = false
@@ -290,7 +289,7 @@ export default class NbdClient {
}
}
async readBlock(index, size = NBD_DEFAULT_BLOCK_SIZE) {
async #readBlock(index, size) {
// we don't want to add anything in backlog while reconnecting
if (this.#reconnectingPromise) {
await this.#reconnectingPromise
@@ -338,57 +337,13 @@ export default class NbdClient {
})
}
async *readBlocks(indexGenerator = 2 * 1024 * 1024) {
// default : read all blocks
if (typeof indexGenerator === 'number') {
const exportSize = Number(this.#exportSize)
const chunkSize = indexGenerator
indexGenerator = function* () {
const nbBlocks = Math.ceil(exportSize / chunkSize)
for (let index = 0; index < nbBlocks; index++) {
yield { index, size: chunkSize }
}
}
}
const readAhead = []
const readAheadMaxLength = this.#readAhead
const makeReadBlockPromise = (index, size) => {
const promise = pRetry(() => this.readBlock(index, size), {
tries: this.#readBlockRetries,
onRetry: async err => {
warn('will retry reading block ', index, err)
await this.reconnect()
},
})
// error is handled during unshift
promise.catch(() => {})
return promise
}
// read all blocks, but try to keep readAheadMaxLength promise waiting ahead
for (const { index, size } of indexGenerator()) {
// stack readAheadMaxLength promises before starting to handle the results
if (readAhead.length === readAheadMaxLength) {
// any error will stop reading blocks
yield readAhead.shift()
}
readAhead.push(makeReadBlockPromise(index, size))
}
while (readAhead.length > 0) {
yield readAhead.shift()
}
}
stream(chunkSize) {
async function* iterator() {
for await (const chunk of this.readBlocks(chunkSize)) {
yield chunk
}
}
// create a readable stream instead of returning the iterator
// since iterators don't like unshift and partial reading
return Readable.from(iterator())
async readBlock(index, size = NBD_DEFAULT_BLOCK_SIZE) {
return pRetry(() => this.#readBlock(index, size), {
tries: this.#readBlockRetries,
onRetry: async err => {
warn('will retry reading block ', index, err)
await this.reconnect()
},
})
}
}

View File

@@ -0,0 +1,87 @@
import { asyncEach } from '@vates/async-each'
import { NBD_DEFAULT_BLOCK_SIZE } from './constants.mjs'
import NbdClient from './index.mjs'
import { createLogger } from '@xen-orchestra/log'
const { warn } = createLogger('vates:nbd-client:multi')
export default class MultiNbdClient {
#clients = []
#readAhead
get exportSize() {
return this.#clients[0].exportSize
}
constructor(settings, { nbdConcurrency = 8, readAhead = 16, ...options } = {}) {
this.#readAhead = readAhead
if (!Array.isArray(settings)) {
settings = [settings]
}
for (let i = 0; i < nbdConcurrency; i++) {
this.#clients.push(
new NbdClient(settings[i % settings.length], { ...options, readAhead: Math.ceil(readAhead / nbdConcurrency) })
)
}
}
async connect() {
const connectedClients = []
for (const clientId in this.#clients) {
const client = this.#clients[clientId]
try {
await client.connect()
connectedClients.push(client)
} catch (err) {
client.disconnect().catch(() => {})
warn(`can't connect to one nbd client`, { err })
}
}
if (connectedClients.length === 0) {
throw new Error(`Fail to connect to any Nbd client`)
}
if (connectedClients.length < this.#clients.length) {
warn(
`incomplete connection by multi Nbd, only ${connectedClients.length} over ${
this.#clients.length
} expected clients`
)
this.#clients = connectedClients
}
}
async disconnect() {
await asyncEach(this.#clients, client => client.disconnect(), {
stopOnError: false,
})
}
async readBlock(index, size = NBD_DEFAULT_BLOCK_SIZE) {
const clientId = index % this.#clients.length
return this.#clients[clientId].readBlock(index, size)
}
async *readBlocks(indexGenerator) {
// default : read all blocks
const readAhead = []
const makeReadBlockPromise = (index, size) => {
const promise = this.readBlock(index, size)
// error is handled during unshift
promise.catch(() => {})
return promise
}
// read all blocks, but try to keep readAheadMaxLength promise waiting ahead
for (const { index, size } of indexGenerator()) {
// stack readAheadMaxLength promises before starting to handle the results
if (readAhead.length === this.#readAhead) {
// any error will stop reading blocks
yield readAhead.shift()
}
readAhead.push(makeReadBlockPromise(index, size))
}
while (readAhead.length > 0) {
yield readAhead.shift()
}
}
}
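The read-ahead loop above keeps a bounded queue of in-flight block reads while yielding results in order; standalone, the pattern looks like this (simplified sketch, not the actual client):

```javascript
// Keep up to `max` reads in flight; yield results in submission order.
async function* readAheadPipeline(indexes, readBlock, max) {
  const inFlight = []
  for (const index of indexes) {
    if (inFlight.length === max) {
      // backpressure: wait for the oldest read before launching another
      yield await inFlight.shift()
    }
    const promise = readBlock(index)
    // errors are surfaced when the promise is shifted out, not here
    promise.catch(() => {})
    inFlight.push(promise)
  }
  while (inFlight.length > 0) {
    yield await inFlight.shift()
  }
}
```

Yielding from the front of the queue preserves block order even though up to `max` reads run concurrently.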

View File

@@ -13,7 +13,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "2.0.0",
"version": "2.0.1",
"engines": {
"node": ">=14.0"
},
@@ -24,7 +24,7 @@
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"promise-toolbox": "^0.21.0",
"xen-api": "^1.3.6"
"xen-api": "^2.0.0"
},
"devDependencies": {
"tap": "^16.3.0",

View File

@@ -1,4 +1,3 @@
import NbdClient from '../index.mjs'
import { spawn, exec } from 'node:child_process'
import fs from 'node:fs/promises'
import { test } from 'tap'
@@ -7,8 +6,10 @@ import { pFromCallback } from 'promise-toolbox'
import { Socket } from 'node:net'
import { NBD_DEFAULT_PORT } from '../constants.mjs'
import assert from 'node:assert'
import MultiNbdClient from '../multi.mjs'
const FILE_SIZE = 10 * 1024 * 1024
const CHUNK_SIZE = 1024 * 1024 // non default size
const FILE_SIZE = 1024 * 1024 * 9.5 // non aligned file size
async function createTempFile(size) {
const tmpPath = await pFromCallback(cb => tmp.file(cb))
@@ -81,7 +82,7 @@ test('it works with unsecured network', async tap => {
const path = await createTempFile(FILE_SIZE)
let nbdServer = await spawnNbdKit(path)
const client = new NbdClient(
const client = new MultiNbdClient(
{
address: '127.0.0.1',
exportname: 'MY_SECRET_EXPORT',
@@ -109,13 +110,13 @@ CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
`,
},
{
nbdConcurrency: 1,
readAhead: 2,
}
)
await client.connect()
tap.equal(client.exportSize, BigInt(FILE_SIZE))
const CHUNK_SIZE = 1024 * 1024 // non default size
const indexes = []
for (let i = 0; i < FILE_SIZE / CHUNK_SIZE; i++) {
indexes.push(i)
@@ -127,9 +128,9 @@ CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
})
let i = 0
for await (const block of nbdIterator) {
let blockOk = true
let blockOk = block.length === Math.min(CHUNK_SIZE, FILE_SIZE - CHUNK_SIZE * i)
let firstFail
for (let j = 0; j < CHUNK_SIZE; j += 4) {
for (let j = 0; j < block.length; j += 4) {
const wanted = i * CHUNK_SIZE + j
const found = block.readUInt32BE(j)
blockOk = blockOk && found === wanted
@@ -137,7 +138,7 @@ CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
firstFail = j
}
}
tap.ok(blockOk, `check block ${i} content`)
tap.ok(blockOk, `check block ${i} content ${block.length}`)
i++
// flaky server is flaky
@@ -147,17 +148,6 @@ CYu1Xn/FVPx1HoRgWc7E8wFhDcA/P3SJtfIQWHB9FzSaBflKGR4t8WCE2eE8+cTB
nbdServer = await spawnNbdKit(path)
}
}
// we can reuse the conneciton to read other blocks
// default iterator
const nbdIteratorWithDefaultBlockIterator = client.readBlocks()
let nb = 0
for await (const block of nbdIteratorWithDefaultBlockIterator) {
nb++
tap.equal(block.length, 2 * 1024 * 1024)
}
tap.equal(nb, 5)
assert.rejects(() => client.readBlock(100, CHUNK_SIZE))
await client.disconnect()

View File

@@ -7,8 +7,8 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.43.2",
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/backups": "^0.44.2",
"@xen-orchestra/fs": "^4.1.3",
"filenamify": "^6.0.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
@@ -27,7 +27,7 @@
"scripts": {
"postversion": "npm publish --access public"
},
"version": "1.0.13",
"version": "1.0.14",
"license": "AGPL-3.0-or-later",
"author": {
"name": "Vates SAS",

View File

@@ -4,7 +4,14 @@ import { formatFilenameDate } from './_filenameDate.mjs'
import { importIncrementalVm } from './_incrementalVm.mjs'
import { Task } from './Task.mjs'
import { watchStreamSize } from './_watchStreamSize.mjs'
import { VhdNegative, VhdSynthetic } from 'vhd-lib'
import { decorateClass } from '@vates/decorate-with'
import { createLogger } from '@xen-orchestra/log'
import { dirname, join } from 'node:path'
import pickBy from 'lodash/pickBy.js'
import { defer } from 'golike-defer'
const { debug, info, warn } = createLogger('xo:backups:importVmBackup')
async function resolveUuid(xapi, cache, uuid, type) {
if (uuid == null) {
return uuid
@@ -16,26 +23,199 @@ async function resolveUuid(xapi, cache, uuid, type) {
return cache.get(uuid)
}
export class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
constructor({
adapter,
metadata,
srUuid,
xapi,
settings: { additionnalVmTag, newMacAddresses, mapVdisSrs = {}, useDifferentialRestore = false } = {},
}) {
this._adapter = adapter
this._importIncrementalVmSettings = { newMacAddresses, mapVdisSrs }
this._importIncrementalVmSettings = { additionnalVmTag, newMacAddresses, mapVdisSrs, useDifferentialRestore }
this._metadata = metadata
this._srUuid = srUuid
this._xapi = xapi
}
async #decorateIncrementalVmMetadata(backup) {
async #getPathOfVdiSnapshot(snapshotUuid) {
const metadata = this._metadata
if (this._pathToVdis === undefined) {
const backups = await this._adapter.listVmBackups(
this._metadata.vm.uuid,
({ mode, timestamp }) => mode === 'delta' && timestamp >= metadata.timestamp
)
const map = new Map()
for (const backup of backups) {
for (const [vdiRef, vdi] of Object.entries(backup.vdis)) {
map.set(vdi.uuid, backup.vhds[vdiRef])
}
}
this._pathToVdis = map
}
return this._pathToVdis.get(snapshotUuid)
}
async _reuseNearestSnapshot($defer, ignoredVdis) {
const metadata = this._metadata
const { mapVdisSrs } = this._importIncrementalVmSettings
const { vbds, vhds, vifs, vm, vmSnapshot } = metadata
const streams = {}
const metdataDir = dirname(metadata._filename)
const vdis = ignoredVdis === undefined ? metadata.vdis : pickBy(metadata.vdis, vdi => !ignoredVdis.has(vdi.uuid))
for (const [vdiRef, vdi] of Object.entries(vdis)) {
const vhdPath = join(metdataDir, vhds[vdiRef])
let xapiDisk
try {
xapiDisk = await this._xapi.getRecordByUuid('VDI', vdi.$snapshot_of$uuid)
} catch (err) {
// if this disk is not present anymore, fall back to default restore
warn(err)
}
let snapshotCandidate, backupCandidate
if (xapiDisk !== undefined) {
debug('found disks, wlll search its snapshots', { snapshots: xapiDisk.snapshots })
for (const snapshotRef of xapiDisk.snapshots) {
const snapshot = await this._xapi.getRecord('VDI', snapshotRef)
debug('handling snapshot', { snapshot })
// take only the first snapshot
if (snapshotCandidate && snapshotCandidate.snapshot_time < snapshot.snapshot_time) {
debug('already got a better candidate')
continue
}
// have a corresponding backup more recent than metadata ?
const pathToSnapshotData = await this.#getPathOfVdiSnapshot(snapshot.uuid)
if (pathToSnapshotData === undefined) {
debug('no backup linked to this snaphot')
continue
}
if (snapshot.$SR.uuid !== (mapVdisSrs[vdi.$snapshot_of$uuid] ?? this._srUuid)) {
debug('not restored on the same SR', { snapshotSr: snapshot.$SR.uuid, mapVdisSrs, srUuid: this._srUuid })
continue
}
debug('got a candidate', pathToSnapshotData)
snapshotCandidate = snapshot
backupCandidate = pathToSnapshotData
}
}
let stream
const backupWithSnapshotPath = join(metdataDir, backupCandidate ?? '')
if (vhdPath === backupWithSnapshotPath) {
// all the data are already on the host
debug('direct reuse of a snapshot')
stream = null
vdis[vdiRef].baseVdi = snapshotCandidate
// go next disk , we won't use this stream
continue
}
let disposableDescendants
const disposableSynthetic = await VhdSynthetic.fromVhdChain(this._adapter._handler, vhdPath)
// this will also clean if another disk of this VM backup fails
// if user really only need to restore non failing disks he can retry with ignoredVdis
let disposed = false
const disposeOnce = async () => {
if (!disposed) {
disposed = true
try {
await disposableDescendants?.dispose()
await disposableSynthetic?.dispose()
} catch (error) {
warn('openVhd: failed to dispose VHDs', { error })
}
}
}
$defer.onFailure(() => disposeOnce())
const parentVhd = disposableSynthetic.value
await parentVhd.readBlockAllocationTable()
debug('got vhd synthetic of parents', parentVhd.length)
if (snapshotCandidate !== undefined) {
try {
debug('will try to use differential restore', {
backupWithSnapshotPath,
vhdPath,
vdiRef,
})
disposableDescendants = await VhdSynthetic.fromVhdChain(this._adapter._handler, backupWithSnapshotPath, {
until: vhdPath,
})
const descendantsVhd = disposableDescendants.value
await descendantsVhd.readBlockAllocationTable()
debug('got vhd synthetic of descendants')
const negativeVhd = new VhdNegative(parentVhd, descendantsVhd)
debug('got vhd negative')
// update the stream with the negative vhd stream
stream = await negativeVhd.stream()
vdis[vdiRef].baseVdi = snapshotCandidate
} catch (err) {
// can be a broken VHD chain, a vhd chain with a key backup, ....
// not an irrecuperable error, don't dispose parentVhd, and fallback to full restore
warn(`can't use differential restore`, err)
disposableDescendants?.dispose()
}
}
// didn't make a negative stream : fallback to classic stream
if (stream === undefined) {
debug('use legacy restore')
stream = await parentVhd.stream()
}
stream.on('end', disposeOnce)
stream.on('close', disposeOnce)
stream.on('error', disposeOnce)
info('everything is ready, will transfer', stream.length)
streams[`${vdiRef}.vhd`] = stream
}
return {
streams,
vbds,
vdis,
version: '1.0.0',
vifs,
vm: { ...vm, suspend_VDI: vmSnapshot.suspend_VDI },
}
}
async #decorateIncrementalVmMetadata() {
const { additionnalVmTag, mapVdisSrs, useDifferentialRestore } = this._importIncrementalVmSettings
const ignoredVdis = new Set(
Object.entries(mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
let backup
if (useDifferentialRestore) {
backup = await this._reuseNearestSnapshot(ignoredVdis)
} else {
backup = await this._adapter.readIncrementalVmBackup(this._metadata, ignoredVdis)
}
const xapi = this._xapi
const cache = new Map()
const mapVdisSrRefs = {}
if (additionnalVmTag !== undefined) {
backup.vm.tags.push(additionnalVmTag)
}
for (const [vdiUuid, srUuid] of Object.entries(mapVdisSrs)) {
mapVdisSrRefs[vdiUuid] = await resolveUuid(xapi, cache, srUuid, 'SR')
}
const sr = await resolveUuid(xapi, cache, this._srUuid, 'SR')
const srRef = await resolveUuid(xapi, cache, this._srUuid, 'SR')
Object.values(backup.vdis).forEach(vdi => {
vdi.SR = mapVdisSrRefs[vdi.uuid] ?? sr.$ref
vdi.SR = mapVdisSrRefs[vdi.uuid] ?? srRef
})
return backup
}
@@ -46,7 +226,7 @@ export class ImportVmBackup {
const isFull = metadata.mode === 'full'
const sizeContainer = { size: 0 }
const { mapVdisSrs, newMacAddresses } = this._importIncrementalVmSettings
const { newMacAddresses } = this._importIncrementalVmSettings
let backup
if (isFull) {
backup = await adapter.readFullVmBackup(metadata)
@@ -54,12 +234,7 @@ export class ImportVmBackup {
} else {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await this.#decorateIncrementalVmMetadata(await adapter.readIncrementalVmBackup(metadata, ignoredVdis))
backup = await this.#decorateIncrementalVmMetadata()
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
@@ -101,3 +276,5 @@ export class ImportVmBackup {
)
}
}
decorateClass(ImportVmBackup, { _reuseNearestSnapshot: defer })

View File

@@ -67,6 +67,11 @@ async function generateVhd(path, opts = {}) {
await VhdAbstract.createAlias(handler, path + '.alias.vhd', dataPath)
}
if (opts.blocks) {
for (const blockId of opts.blocks) {
await vhd.writeEntireBlock({ id: blockId, buffer: Buffer.alloc(2 * 1024 * 1024 + 512, blockId) })
}
}
await vhd.writeBlockAllocationTable()
await vhd.writeHeader()
await vhd.writeFooter()
@@ -230,7 +235,7 @@ test('it merges delta of non destroyed chain', async () => {
const metadata = JSON.parse(await handler.readFile(`${rootPath}/metadata.json`))
// size should be the size of children + grandchildren after the merge
assert.equal(metadata.size, 209920)
assert.equal(metadata.size, 104960)
// merging is already tested in vhd-lib, don't retest it here (and these VHDs are as empty as my stomach at 12:12)
// only check deletion
@@ -320,6 +325,7 @@ describe('tests multiple combination ', () => {
const ancestor = await generateVhd(`${basePath}/ancestor.vhd`, {
useAlias,
mode: vhdMode,
blocks: [1, 3],
})
const child = await generateVhd(`${basePath}/child.vhd`, {
useAlias,
@@ -328,6 +334,7 @@ describe('tests multiple combination ', () => {
parentUnicodeName: 'ancestor.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUuid: ancestor.footer.uuid,
},
blocks: [1, 2],
})
// a grandchild VHD in metadata
await generateVhd(`${basePath}/grandchild.vhd`, {
@@ -337,6 +344,7 @@ describe('tests multiple combination ', () => {
parentUnicodeName: 'child.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUuid: child.footer.uuid,
},
blocks: [2, 3],
})
// an older parent that was merging in clean
@@ -395,7 +403,7 @@ describe('tests multiple combination ', () => {
const metadata = JSON.parse(await handler.readFile(`${rootPath}/metadata.json`))
// size should be the size of children + grandchildren + clean after the merge
assert.deepEqual(metadata.size, vhdMode === 'file' ? 314880 : undefined)
assert.deepEqual(metadata.size, vhdMode === 'file' ? 6502400 : 6501888)
// broken, non-referenced or abandoned VHDs should be deleted (alias and data)
// ancestor and child should be merged

View File

@@ -36,34 +36,32 @@ const computeVhdsSize = (handler, vhdPaths) =>
)
// chain is [ ancestor, child_1, ..., child_n ]
async function _mergeVhdChain(handler, chain, { logInfo, remove, merge, mergeBlockConcurrency }) {
if (merge) {
logInfo(`merging VHD chain`, { chain })
async function _mergeVhdChain(handler, chain, { logInfo, remove, mergeBlockConcurrency }) {
logInfo(`merging VHD chain`, { chain })
let done, total
const handle = setInterval(() => {
if (done !== undefined) {
logInfo('merge in progress', {
done,
parent: chain[0],
progress: Math.round((100 * done) / total),
total,
})
}
}, 10e3)
try {
return await mergeVhdChain(handler, chain, {
logInfo,
mergeBlockConcurrency,
onProgress({ done: d, total: t }) {
done = d
total = t
},
removeUnused: remove,
let done, total
const handle = setInterval(() => {
if (done !== undefined) {
logInfo('merge in progress', {
done,
parent: chain[0],
progress: Math.round((100 * done) / total),
total,
})
} finally {
clearInterval(handle)
}
}, 10e3)
try {
return await mergeVhdChain(handler, chain, {
logInfo,
mergeBlockConcurrency,
onProgress({ done: d, total: t }) {
done = d
total = t
},
removeUnused: remove,
})
} finally {
clearInterval(handle)
}
}
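The rewritten `_mergeVhdChain` hoists the periodic progress logger out of the former `if (merge)` branch: a `setInterval` reports progress every 10 s and is always cleared in `finally`. The pattern can be sketched generically — a minimal sketch, where `withProgressLog` and its parameters are hypothetical names, not from this codebase:

```javascript
// Minimal sketch of the pattern: log progress periodically while an async
// task runs, and always clear the timer in `finally`, even on failure.
async function withProgressLog(task, logInfo, intervalMs = 10e3) {
  let done, total
  const handle = setInterval(() => {
    // only log once the task has reported at least one progress event
    if (done !== undefined) {
      logInfo('in progress', { done, total, progress: Math.round((100 * done) / total) })
    }
  }, intervalMs)
  try {
    // the task receives a callback it can use to report progress
    return await task(({ done: d, total: t }) => {
      done = d
      total = t
    })
  } finally {
    clearInterval(handle) // never leave the interval running
  }
}
```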
@@ -471,23 +469,20 @@ export async function cleanVm(
const metadataWithMergedVhd = {}
const doMerge = async () => {
await asyncMap(toMerge, async chain => {
const merged = await limitedMergeVhdChain(handler, chain, {
const { finalVhdSize } = await limitedMergeVhdChain(handler, chain, {
logInfo,
logWarn,
remove,
merge,
mergeBlockConcurrency,
})
if (merged !== undefined) {
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // the whole chain should share the same metadata file
metadataWithMergedVhd[metadataPath] = true
}
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // the whole chain should share the same metadata file
metadataWithMergedVhd[metadataPath] = (metadataWithMergedVhd[metadataPath] ?? 0) + finalVhdSize
})
}
await Promise.all([
...unusedVhdsDeletion,
toMerge.length !== 0 && (merge ? Task.run({ name: 'merge' }, doMerge) : doMerge()),
toMerge.length !== 0 && (merge ? Task.run({ name: 'merge' }, doMerge) : () => Promise.resolve()),
asyncMap(unusedXvas, path => {
logWarn('unused XVA', { path })
if (remove) {
@@ -509,12 +504,11 @@ export async function cleanVm(
// update size for delta metadata with merged VHD
// for the others, check that the size matches the real file size
await asyncMap(jsons, async metadataPath => {
const metadata = backups.get(metadataPath)
let fileSystemSize
const merged = metadataWithMergedVhd[metadataPath] !== undefined
const mergedSize = metadataWithMergedVhd[metadataPath]
const { mode, size, vhds, xva } = metadata
@@ -524,26 +518,29 @@ export async function cleanVm(
const linkedXva = resolve('/', vmDir, xva)
try {
fileSystemSize = await handler.getSize(linkedXva)
if (fileSystemSize !== size && fileSystemSize !== undefined) {
logWarn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
} catch (error) {
// can fail with encrypted remote
}
} else if (mode === 'delta') {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)
// the size is not computed in some cases (e.g. VhdDirectory)
if (fileSystemSize === undefined) {
return
}
// don't warn if the size has changed after a merge
if (!merged && fileSystemSize !== size) {
// FIXME: figure out why it occurs so often and, once fixed, log the real problems with `logWarn`
console.warn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
if (mergedSize === undefined) {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)
// the size is not computed in some cases (e.g. VhdDirectory)
if (fileSystemSize !== undefined && fileSystemSize !== size) {
logWarn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
}
}
} catch (error) {
@@ -551,9 +548,19 @@ export async function cleanVm(
return
}
// systematically update size after a merge
if ((merged || fixMetadata) && size !== fileSystemSize) {
metadata.size = fileSystemSize
// systematically update size and differentials after a merge
// @todo: after 2024-04-01, remove the fixMetadata option since the size computation is fixed
if (mergedSize || (fixMetadata && fileSystemSize !== size)) {
metadata.size = mergedSize ?? fileSystemSize ?? size
if (mergedSize) {
// all disks are now key disks
metadata.isVhdDifferencing = {}
for (const id of Object.values(metadata.vdis ?? {})) {
metadata.isVhdDifferencing[`${id}.vhd`] = false
}
}
mustRegenerateCache = true
try {
await handler.writeFile(metadataPath, JSON.stringify(metadata), { flags: 'w' })

View File

@@ -34,6 +34,7 @@ export async function exportIncrementalVm(
fullVdisRequired = new Set(),
disableBaseTags = false,
nbdConcurrency = 1,
preferNbd,
} = {}
) {
@@ -82,6 +83,7 @@ export async function exportIncrementalVm(
baseRef: baseVdi?.$ref,
cancelToken,
format: 'vhd',
nbdConcurrency,
preferNbd,
})
})
@@ -250,6 +252,10 @@ export const importIncrementalVm = defer(async function importIncrementalVm(
// Import VDI contents.
cancelableMap(cancelToken, Object.entries(newVdis), async (cancelToken, [id, vdi]) => {
for (let stream of ensureArray(streams[`${id}.vhd`])) {
if (stream === null) {
// we are restoring a backup and completely reusing a local snapshot
continue
}
if (typeof stream === 'function') {
stream = await stream()
}

View File

@@ -32,10 +32,10 @@ class IncrementalRemoteVmBackupRunner extends AbstractRemote {
useChain: false,
})
const differentialVhds = {}
const isVhdDifferencing = {}
await asyncEach(Object.entries(incrementalExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
isVhdDifferencing[key] = await isVhdDifferencingDisk(stream)
})
incrementalExport.streams = mapValues(incrementalExport.streams, this._throttleStream)
@@ -43,7 +43,7 @@ class IncrementalRemoteVmBackupRunner extends AbstractRemote {
writer =>
writer.transfer({
deltaExport: forkDeltaExport(incrementalExport),
differentialVhds,
isVhdDifferencing,
timestamp: metadata.timestamp,
vm: metadata.vm,
vmSnapshot: metadata.vmSnapshot,

View File

@@ -41,6 +41,7 @@ export const IncrementalXapi = class IncrementalXapiVmBackupRunner extends Abstr
const deltaExport = await exportIncrementalVm(exportedVm, baseVm, {
fullVdisRequired,
nbdConcurrency: this._settings.nbdConcurrency,
preferNbd: this._settings.preferNbd,
})
// since NBD is network-based, if one disk uses NBD, all disks use it
@@ -49,11 +50,11 @@ export const IncrementalXapi = class IncrementalXapiVmBackupRunner extends Abstr
Task.info('Transfer data using NBD')
}
const differentialVhds = {}
const isVhdDifferencing = {}
// since isVhdDifferencingDisk reads and unshifts data in the stream,
// it must be done BEFORE any other stream transform
await asyncEach(Object.entries(deltaExport.streams), async ([key, stream]) => {
differentialVhds[key] = await isVhdDifferencingDisk(stream)
isVhdDifferencing[key] = await isVhdDifferencingDisk(stream)
})
const sizeContainers = mapValues(deltaExport.streams, stream => watchStreamSize(stream))
@@ -68,7 +69,7 @@ export const IncrementalXapi = class IncrementalXapiVmBackupRunner extends Abstr
writer =>
writer.transfer({
deltaExport: forkDeltaExport(deltaExport),
differentialVhds,
isVhdDifferencing,
sizeContainers,
timestamp,
vm,

View File

@@ -133,7 +133,7 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
}
}
async _transfer($defer, { differentialVhds, timestamp, deltaExport, vm, vmSnapshot }) {
async _transfer($defer, { isVhdDifferencing, timestamp, deltaExport, vm, vmSnapshot }) {
const adapter = this._adapter
const job = this._job
const scheduleId = this._scheduleId
@@ -161,6 +161,7 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
)
metadataContent = {
isVhdDifferencing,
jobId,
mode: job.mode,
scheduleId,
@@ -180,9 +181,9 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
async ([id, vdi]) => {
const path = `${this._vmBackupDir}/${vhds[id]}`
const isDelta = differentialVhds[`${id}.vhd`]
const isDifferencing = isVhdDifferencing[`${id}.vhd`]
let parentPath
if (isDelta) {
if (isDifferencing) {
const vdiDir = dirname(path)
parentPath = (
await handler.list(vdiDir, {
@@ -204,16 +205,20 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
// TODO remove when this has been done before the export
await checkVhd(handler, parentPath)
}
transferSize += await adapter.writeVhd(path, deltaExport.streams[`${id}.vhd`], {
// don't write it as `transferSize += await asyncFn()`
// since `i += await asyncFn()` leads to a race condition,
// as explained in https://eslint.org/docs/latest/rules/require-atomic-updates
const transferSizeOneDisk = await adapter.writeVhd(path, deltaExport.streams[`${id}.vhd`], {
// no checksum for VHDs, because they will be invalidated by
// merges and chainings
checksum: false,
validator: tmpPath => checkVhd(handler, tmpPath),
writeBlockConcurrency: this._config.writeBlockConcurrency,
})
transferSize += transferSizeOneDisk
if (isDelta) {
if (isDifferencing) {
await chainVhd(handler, parentPath, handler, path)
}
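The comment in the hunk above references the ESLint `require-atomic-updates` rule: in `i += await asyncFn()`, `i` is read *before* the `await`, so concurrent iterations each capture a stale value and overwrite each other's updates. A minimal standalone sketch of the hazard, using hypothetical functions not from this codebase:

```javascript
// UNSAFE: `total` is read before the await; with concurrent iterations each
// one captures a stale value and writes it back, losing updates.
async function unsafeSum(fns) {
  let total = 0
  await Promise.all(
    fns.map(async fn => {
      total += await fn()
    })
  )
  return total
}

// SAFE: await first, then update `total` synchronously.
async function safeSum(fns) {
  let total = 0
  await Promise.all(
    fns.map(async fn => {
      const value = await fn()
      total += value
    })
  )
  return total
}
```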

View File

@@ -96,6 +96,9 @@ export const MixinRemoteWriter = (BaseClass = Object) =>
metadata,
srUuid,
xapi,
settings: {
additionnalVmTag: 'xo:no-bak=Health Check',
},
}).run()
const restoredVm = xapi.getObject(restoredId)
try {

View File

@@ -58,7 +58,7 @@ export const MixinXapiWriter = (BaseClass = Object) =>
)
}
const healthCheckVm = xapi.getObject(healthCheckVmRef) ?? (await xapi.waitObject(healthCheckVmRef))
await healthCheckVm.add_tag('xo:no-bak=Health Check')
await new HealthCheckVmBackup({
restoredVm: healthCheckVm,
xapi,

View File

@@ -221,7 +221,7 @@ For multiple objects:
### Settings
Settings are described in [`@xen-orchestra/backups/Backup.js](https://github.com/vatesfr/xen-orchestra/blob/master/%40xen-orchestra/backups/Backup.js).
Settings are described in [`@xen-orchestra/backups/_runners/VmsXapi.mjs`](https://github.com/vatesfr/xen-orchestra/blob/master/%40xen-orchestra/backups/_runners/VmsXapi.mjs).
## Writer API

View File

@@ -25,6 +25,8 @@ function formatVmBackup(backup) {
name_description: backup.vm.name_description,
name_label: backup.vm.name_label,
},
differencingVhds: Object.values(backup.isVhdDifferencing).filter(t => t).length,
dynamicVhds: Object.values(backup.isVhdDifferencing).filter(t => !t).length,
}
}
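The `differencingVhds` / `dynamicVhds` counts added above are plain filters over the `isVhdDifferencing` map stored in the backup metadata. A minimal sketch of the same computation — `countDisks` is a hypothetical helper, not from this codebase:

```javascript
// Count differencing (delta) vs. dynamic ("key"/full) disks from an
// `isVhdDifferencing` map, e.g. { 'vdi-1.vhd': true, 'vdi-2.vhd': false }.
function countDisks(isVhdDifferencing = {}) {
  const values = Object.values(isVhdDifferencing)
  return {
    differencingVhds: values.filter(Boolean).length,
    dynamicVhds: values.filter(v => !v).length,
  }
}
```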

View File

@@ -8,7 +8,7 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.43.2",
"version": "0.44.2",
"engines": {
"node": ">=14.18"
},
@@ -23,12 +23,12 @@
"@vates/cached-dns.lookup": "^1.0.0",
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/disposable": "^0.1.4",
"@vates/disposable": "^0.1.5",
"@vates/fuse-vhd": "^2.0.0",
"@vates/nbd-client": "^2.0.0",
"@vates/nbd-client": "^2.0.1",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/fs": "^4.1.3",
"@xen-orchestra/log": "^0.6.0",
"@xen-orchestra/template": "^0.1.0",
"app-conf": "^2.3.0",
@@ -44,8 +44,8 @@
"proper-lockfile": "^4.1.2",
"tar": "^6.1.15",
"uuid": "^9.0.0",
"vhd-lib": "^4.6.1",
"xen-api": "^1.3.6",
"vhd-lib": "^4.7.0",
"xen-api": "^2.0.0",
"yazl": "^2.5.1"
},
"devDependencies": {
@@ -56,7 +56,7 @@
"tmp": "^0.2.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^3.3.0"
"@xen-orchestra/xapi": "^4.0.0"
},
"license": "AGPL-3.0-or-later",
"author": {

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/cr-seed-cli",
"version": "0.2.0",
"version": "1.0.0",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/cr-seed-cli",
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^1.3.6"
"xen-api": "^2.0.0"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "4.1.2",
"version": "4.1.3",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",

View File

@@ -33,7 +33,7 @@ import { pRetry } from 'promise-toolbox'
const MAX_PART_SIZE = 1024 * 1024 * 1024 * 5 // 5GB
const MAX_PART_NUMBER = 10000
const MIN_PART_SIZE = 5 * 1024 * 1024
const { warn } = createLogger('xo:fs:s3')
const { debug, info, warn } = createLogger('xo:fs:s3')
export default class S3Handler extends RemoteHandlerAbstract {
#bucket
@@ -453,10 +453,18 @@ export default class S3Handler extends RemoteHandlerAbstract {
if (res.ObjectLockConfiguration?.ObjectLockEnabled === 'Enabled') {
// Workaround for https://github.com/aws/aws-sdk-js-v3/issues/2673
// will automatically add the contentMD5 header to any upload to S3
debug(`Object Lock is enabled, enabling the content MD5 header`)
this.#s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this.#s3.config))
}
} catch (error) {
if (error.Code !== 'ObjectLockConfigurationNotFoundError' && error.$metadata.httpStatusCode !== 501) {
// maybe the account doesn't have enough privileges to query the Object Lock configuration
// be defensive and apply the MD5 header just in case
if (error.$metadata.httpStatusCode === 403) {
info(`S3 user doesn't have enough privileges to check for Object Lock, enabling the content MD5 header`)
this.#s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this.#s3.config))
} else if (error.Code === 'ObjectLockConfigurationNotFoundError' || error.$metadata.httpStatusCode === 501) {
info(`Object Lock is not available or not configured, not adding the content MD5 header`)
} else {
throw error
}
}

View File

@@ -2,10 +2,19 @@
## **next**
- [VM/Action] Ability to migrate a VM from its view (PR [#7164](https://github.com/vatesfr/xen-orchestra/pull/7164))
- Ability to override host address with `master` URL query param (PR [#7187](https://github.com/vatesfr/xen-orchestra/pull/7187))
- Added tooltip on CPU provisioning warning icon (PR [#7223](https://github.com/vatesfr/xen-orchestra/pull/7223))
- Add indeterminate state on FormToggle component (PR [#7230](https://github.com/vatesfr/xen-orchestra/pull/7230))
- Add new UiStatusPanel component (PR [#7227](https://github.com/vatesfr/xen-orchestra/pull/7227))
## **0.1.6** (2023-11-30)
- Explicit error if users attempt to connect from a slave host (PR [#7110](https://github.com/vatesfr/xen-orchestra/pull/7110))
- More compact UI (PR [#7159](https://github.com/vatesfr/xen-orchestra/pull/7159))
- Fix dashboard host patches list (PR [#7169](https://github.com/vatesfr/xen-orchestra/pull/7169))
- Ability to export selected VMs (PR [#7174](https://github.com/vatesfr/xen-orchestra/pull/7174))
- [VM/Action] Ability to export a VM from its view (PR [#7190](https://github.com/vatesfr/xen-orchestra/pull/7190))
## **0.1.5** (2023-11-07)

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/lite",
"version": "0.1.5",
"version": "0.1.6",
"scripts": {
"dev": "GIT_HEAD=$(git rev-parse HEAD) vite",
"build": "run-p type-check build-only",
@@ -10,57 +10,55 @@
"test": "yarn run type-check",
"type-check": "vue-tsc --noEmit"
},
"dependencies": {
"devDependencies": {
"@fontsource/poppins": "^5.0.8",
"@fortawesome/fontawesome-svg-core": "^6.1.1",
"@fortawesome/free-regular-svg-icons": "^6.2.0",
"@fortawesome/free-solid-svg-icons": "^6.2.0",
"@fortawesome/vue-fontawesome": "^3.0.1",
"@novnc/novnc": "^1.3.0",
"@types/d3-time-format": "^4.0.0",
"@types/file-saver": "^2.0.5",
"@types/lodash-es": "^4.17.6",
"@types/marked": "^4.0.8",
"@vueuse/core": "^10.1.2",
"@vueuse/math": "^10.1.2",
"@fortawesome/fontawesome-svg-core": "^6.4.2",
"@fortawesome/free-regular-svg-icons": "^6.4.2",
"@fortawesome/free-solid-svg-icons": "^6.4.2",
"@fortawesome/vue-fontawesome": "^3.0.5",
"@intlify/unplugin-vue-i18n": "^1.5.0",
"@limegrass/eslint-plugin-import-alias": "^1.1.0",
"@novnc/novnc": "^1.4.0",
"@rushstack/eslint-patch": "^1.5.1",
"@tsconfig/node18": "^18.2.2",
"@types/d3-time-format": "^4.0.3",
"@types/file-saver": "^2.0.7",
"@types/lodash-es": "^4.17.11",
"@types/node": "^18.18.9",
"@vitejs/plugin-vue": "^4.4.0",
"@vue/eslint-config-prettier": "^8.0.0",
"@vue/eslint-config-typescript": "^12.0.0",
"@vue/tsconfig": "^0.4.0",
"@vueuse/core": "^10.5.0",
"@vueuse/math": "^10.5.0",
"complex-matcher": "^0.7.1",
"d3-time-format": "^4.1.0",
"decorator-synchronized": "^0.6.0",
"echarts": "^5.3.3",
"echarts": "^5.4.3",
"eslint-plugin-vue": "^9.18.1",
"file-saver": "^2.0.5",
"highlight.js": "^11.6.0",
"human-format": "^1.1.0",
"highlight.js": "^11.9.0",
"human-format": "^1.2.0",
"iterable-backoff": "^0.1.0",
"json-rpc-2.0": "^1.3.0",
"json5": "^2.2.1",
"json-rpc-2.0": "^1.7.0",
"json5": "^2.2.3",
"limit-concurrency-decorator": "^0.5.0",
"lodash-es": "^4.17.21",
"make-error": "^1.3.6",
"marked": "^4.2.12",
"pinia": "^2.1.2",
"placement.js": "^1.0.0-beta.5",
"vue": "^3.3.4",
"vue-echarts": "^6.2.3",
"vue-i18n": "^9.2.2",
"vue-router": "^4.2.1"
},
"devDependencies": {
"@intlify/unplugin-vue-i18n": "^0.10.0",
"@limegrass/eslint-plugin-import-alias": "^1.0.5",
"@rushstack/eslint-patch": "^1.1.0",
"@types/node": "^16.11.41",
"@vitejs/plugin-vue": "^4.2.3",
"@vue/eslint-config-prettier": "^8.0.0",
"@vue/eslint-config-typescript": "^11.0.0",
"@vue/tsconfig": "^0.1.3",
"eslint-plugin-vue": "^9.0.0",
"marked": "^9.1.5",
"npm-run-all": "^4.1.5",
"postcss": "^8.4.19",
"postcss-custom-media": "^9.0.1",
"postcss-nested": "^6.0.0",
"typescript": "^4.9.3",
"vite": "^4.3.8",
"vue-tsc": "^1.6.5"
"pinia": "^2.1.7",
"placement.js": "^1.0.0-beta.5",
"postcss": "^8.4.31",
"postcss-custom-media": "^10.0.2",
"postcss-nested": "^6.0.1",
"typescript": "^5.2.2",
"vite": "^4.5.0",
"vue": "^3.3.8",
"vue-echarts": "^6.6.1",
"vue-i18n": "^9.6.5",
"vue-router": "^4.2.5",
"vue-tsc": "^1.8.22"
},
"private": true,
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/lite",
@@ -76,6 +74,6 @@
},
"license": "AGPL-3.0-or-later",
"engines": {
"node": ">=8.10"
"node": ">=18"
}
}

View File

@@ -12,6 +12,7 @@
</RouterLink>
<slot />
<div class="right">
<PoolOverrideWarning as-tooltip />
<AccountButton />
</div>
</header>
@@ -19,6 +20,7 @@
<script lang="ts" setup>
import AccountButton from "@/components/AccountButton.vue";
import PoolOverrideWarning from "@/components/PoolOverrideWarning.vue";
import TextLogo from "@/components/TextLogo.vue";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import { useNavigationStore } from "@/stores/navigation.store";
@@ -51,6 +53,10 @@ const { trigger: navigationTrigger } = storeToRefs(navigationStore);
margin-left: 1rem;
vertical-align: middle;
}
.warning-not-current-pool {
font-size: 2.4rem;
}
}
.right {

View File

@@ -2,6 +2,7 @@
<div class="app-login form-container">
<form @submit.prevent="handleSubmit">
<img alt="XO Lite" src="../assets/logo-title.svg" />
<PoolOverrideWarning />
<p v-if="isHostIsSlaveErr(error)" class="error">
<UiIcon :icon="faExclamationCircle" />
{{ $t("login-only-on-master") }}
@@ -45,6 +46,7 @@ import FormCheckbox from "@/components/form/FormCheckbox.vue";
import FormInput from "@/components/form/FormInput.vue";
import FormInputWrapper from "@/components/form/FormInputWrapper.vue";
import LoginError from "@/components/LoginError.vue";
import PoolOverrideWarning from "@/components/PoolOverrideWarning.vue";
import UiButton from "@/components/ui/UiButton.vue";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import type { XenApiError } from "@/libs/xen-api/xen-api.types";

View File

@@ -1,49 +1,28 @@
<template>
<div class="page-under-construction">
<img alt="Under construction" src="@/assets/under-construction.svg" />
<p class="title">{{ $t("xo-lite-under-construction") }}</p>
<p class="subtitle">{{ $t("new-features-are-coming") }}</p>
<UiStatusPanel
:image-source="underConstruction"
:subtitle="$t('new-features-are-coming')"
:title="$t('xo-lite-under-construction')"
>
<p class="contact">
{{ $t("do-you-have-needs") }}
<a
href="https://xcp-ng.org/forum/topic/5018/xo-lite-building-an-embedded-ui-in-xcp-ng"
target="_blank"
rel="noopener noreferrer"
target="_blank"
>
{{ $t("here") }}
</a>
</p>
</div>
</UiStatusPanel>
</template>
<script lang="ts" setup>
import underConstruction from "@/assets/under-construction.svg";
import UiStatusPanel from "@/components/ui/UiStatusPanel.vue";
</script>
<style lang="postcss" scoped>
.page-under-construction {
width: 100%;
min-height: 76.5vh;
height: 100%;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
color: var(--color-extra-blue-base);
}
img {
margin-bottom: 40px;
width: 30%;
}
.title {
font-weight: 400;
font-size: 36px;
text-align: center;
}
.subtitle {
font-weight: 500;
font-size: 24px;
margin: 21px 0;
text-align: center;
}
.contact {
font-weight: 400;
font-size: 20px;

View File

@@ -0,0 +1,59 @@
<template>
<div
v-if="xenApi.isPoolOverridden"
class="warning-not-current-pool"
@click="xenApi.resetPoolMasterIp"
v-tooltip="
asTooltip && {
placement: 'right',
content: `
${$t('you-are-currently-on', [masterSessionStorage])}.
${$t('click-to-return-default-pool')}
`,
}
"
>
<div class="wrapper">
<UiIcon :icon="faWarning" />
<p v-if="!asTooltip">
<i18n-t keypath="you-are-currently-on">
<strong>{{ masterSessionStorage }}</strong>
</i18n-t>
<br />
{{ $t("click-to-return-default-pool") }}
</p>
</div>
</div>
</template>
<script lang="ts" setup>
import { faWarning } from "@fortawesome/free-solid-svg-icons";
import { useSessionStorage } from "@vueuse/core";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import { useXenApiStore } from "@/stores/xen-api.store";
import { vTooltip } from "@/directives/tooltip.directive";
defineProps<{
asTooltip?: boolean;
}>();
const xenApi = useXenApiStore();
const masterSessionStorage = useSessionStorage("master", null);
</script>
<style lang="postcss" scoped>
.warning-not-current-pool {
color: var(--color-orange-world-base);
cursor: pointer;
.wrapper {
display: flex;
justify-content: center;
svg {
margin: auto 1rem;
}
}
}
</style>

View File

@@ -6,7 +6,7 @@
>
<input
v-model="value"
:class="{ indeterminate: type === 'checkbox' && value === undefined }"
:class="{ indeterminate: isIndeterminate }"
:disabled="isDisabled"
:type="type === 'radio' ? 'radio' : 'checkbox'"
class="input"
@@ -60,6 +60,10 @@ const icon = computed(() => {
return faCheck;
});
const isIndeterminate = computed(
() => (type === "checkbox" || type === "toggle") && value.value === undefined
);
</script>
<style lang="postcss" scoped>
@@ -127,6 +131,12 @@ const icon = computed(() => {
.input:checked + .fake-checkbox > .icon {
transform: translateX(0.7em);
}
.input.indeterminate + .fake-checkbox > .icon {
opacity: 1;
color: var(--color-blue-scale-300);
transform: translateX(0);
}
}
.input {

View File

@@ -3,8 +3,14 @@
<UiCardTitle>
{{ $t("cpu-provisioning") }}
<template v-if="!hasError" #right>
<!-- TODO: add a tooltip for the warning icon -->
<UiStatusIcon v-if="state !== 'success'" :state="state" />
<UiStatusIcon
v-if="state !== 'success'"
v-tooltip="{
content: $t('cpu-provisioning-warning'),
placement: 'left',
}"
:state="state"
/>
</template>
</UiCardTitle>
<NoDataError v-if="hasError" />
@@ -37,11 +43,12 @@ import UiCard from "@/components/ui/UiCard.vue";
import UiCardFooter from "@/components/ui/UiCardFooter.vue";
import UiCardSpinner from "@/components/ui/UiCardSpinner.vue";
import UiCardTitle from "@/components/ui/UiCardTitle.vue";
import { useHostCollection } from "@/stores/xen-api/host.store";
import { useVmCollection } from "@/stores/xen-api/vm.store";
import { useVmMetricsCollection } from "@/stores/xen-api/vm-metrics.store";
import { vTooltip } from "@/directives/tooltip.directive";
import { percent } from "@/libs/utils";
import { VM_POWER_STATE } from "@/libs/xen-api/xen-api.enums";
import { useHostCollection } from "@/stores/xen-api/host.store";
import { useVmMetricsCollection } from "@/stores/xen-api/vm-metrics.store";
import { useVmCollection } from "@/stores/xen-api/vm.store";
import { logicAnd } from "@vueuse/math";
import { computed } from "vue";

View File

@@ -0,0 +1,47 @@
<template>
<div class="ui-status-panel">
<img :src="imageSource" alt="" class="image" />
<p v-if="title !== undefined" class="title">{{ title }}</p>
<p v-if="subtitle !== undefined" class="subtitle">{{ subtitle }}</p>
<slot />
</div>
</template>
<script lang="ts" setup>
defineProps<{
imageSource: string;
title?: string;
subtitle?: string;
}>();
</script>
<style lang="postcss" scoped>
.ui-status-panel {
width: 100%;
min-height: 76.5vh;
height: 100%;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
color: var(--color-extra-blue-base);
}
.title {
font-weight: 400;
font-size: 36px;
text-align: center;
}
.subtitle {
font-weight: 500;
font-size: 24px;
margin: 21px 0;
text-align: center;
}
.image {
margin-bottom: 40px;
width: 30%;
}
</style>

View File

@@ -11,7 +11,7 @@ import { useContext } from "@/composables/context.composable";
import { ColorContext, DisabledContext } from "@/context";
import type { Color } from "@/types";
import { IK_MODAL } from "@/types/injection-keys";
import { useMagicKeys, whenever } from "@vueuse/core/index";
import { useMagicKeys, whenever } from "@vueuse/core";
import { inject } from "vue";
const props = defineProps<{

View File

@@ -3,13 +3,13 @@
v-tooltip="
vmRefs.length > 0 &&
!isSomeExportable &&
$t('no-selected-vm-can-be-exported')
$t(isSingleAction ? 'vm-is-running' : 'no-selected-vm-can-be-exported')
"
:icon="faDisplay"
:disabled="isDisabled"
@click="openModal"
>
{{ $t("export-vms") }}
{{ $t(isSingleAction ? "export-vm" : "export-vms") }}
</MenuItem>
</template>
<script lang="ts" setup>
@@ -26,7 +26,10 @@ import { vTooltip } from "@/directives/tooltip.directive";
import type { XenApiVm } from "@/libs/xen-api/xen-api.types";
const props = defineProps<{ vmRefs: XenApiVm["$ref"][] }>();
const props = defineProps<{
vmRefs: XenApiVm["$ref"][];
isSingleAction?: boolean;
}>();
const { getByOpaqueRefs, areSomeOperationAllowed } = useVmCollection();

View File

@@ -3,7 +3,11 @@
v-tooltip="
selectedRefs.length > 0 &&
!isMigratable &&
$t('no-selected-vm-can-be-migrated')
$t(
isSingleAction
? 'this-vm-cant-be-migrated'
: 'no-selected-vm-can-be-migrated'
)
"
:busy="isMigrating"
:disabled="isParentDisabled || !isMigratable"
@@ -28,6 +32,7 @@ import { computed } from "vue";
const props = defineProps<{
selectedRefs: XenApiVm["$ref"][];
isSingleAction?: boolean;
}>();
const { getByOpaqueRefs, isOperationPending, areSomeOperationAllowed } =

View File

@@ -26,7 +26,9 @@
/>
</template>
<VmActionCopyItem :selected-refs="[vm.$ref]" is-single-action />
<VmActionExportItem :vm-refs="[vm.$ref]" is-single-action />
<VmActionSnapshotItem :vm-refs="[vm.$ref]" />
<VmActionMigrateItem :selected-refs="[vm.$ref]" is-single-action />
</AppMenu>
</template>
</TitleBar>
@@ -37,9 +39,11 @@ import AppMenu from "@/components/menu/AppMenu.vue";
import TitleBar from "@/components/TitleBar.vue";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import UiButton from "@/components/ui/UiButton.vue";
import VmActionMigrateItem from "@/components/vm/VmActionItems/VmActionMigrateItem.vue";
import VmActionPowerStateItems from "@/components/vm/VmActionItems/VmActionPowerStateItems.vue";
import VmActionSnapshotItem from "@/components/vm/VmActionItems/VmActionSnapshotItem.vue";
import VmActionCopyItem from "@/components/vm/VmActionItems/VmActionCopyItem.vue";
import VmActionExportItem from "@/components/vm/VmActionItems/VmActionExportItem.vue";
import { useVmCollection } from "@/stores/xen-api/vm.store";
import { vTooltip } from "@/directives/tooltip.directive";
import type { XenApiVm } from "@/libs/xen-api/xen-api.types";

View File

@@ -1,4 +1,4 @@
import type { HighlightResult, Language } from "highlight.js";
import type { HighlightResult } from "highlight.js";
import HLJS from "highlight.js/lib/core";
import cssLang from "highlight.js/lib/languages/css";
import jsonLang from "highlight.js/lib/languages/json";
@@ -19,10 +19,6 @@ export const highlight: (
ignoreIllegals?: boolean
) => HighlightResult = HLJS.highlight;
export const getLanguage: (
languageName: AcceptedLanguage
) => Language | undefined = HLJS.getLanguage;
export type AcceptedLanguage =
| "xml"
| "css"

View File

@@ -1,8 +1,5 @@
import {
type AcceptedLanguage,
getLanguage,
highlight,
} from "@/libs/highlight";
import { type AcceptedLanguage, highlight } from "@/libs/highlight";
import HLJS from "highlight.js/lib/core";
import { marked } from "marked";
enum VUE_TAG {
@@ -11,15 +8,26 @@ enum VUE_TAG {
STYLE = "vue-style",
}
function extractLang(lang: string | undefined): AcceptedLanguage | VUE_TAG {
if (lang === undefined) {
return "plaintext";
}
if (Object.values(VUE_TAG).includes(lang as VUE_TAG)) {
return lang as VUE_TAG;
}
if (HLJS.getLanguage(lang) !== undefined) {
return lang as AcceptedLanguage;
}
return "plaintext";
}
marked.use({
renderer: {
code(str: string, lang: AcceptedLanguage) {
const code = customHighlight(
str,
Object.values(VUE_TAG).includes(lang as VUE_TAG) || getLanguage(lang)
? lang
: "plaintext"
);
code(str, lang) {
const code = customHighlight(str, extractLang(lang));
return `<pre class="hljs"><button class="copy-button" type="button">Copy</button><code class="hljs-code">${code}</code></pre>`;
},
},

View File

@@ -27,6 +27,7 @@
"cancel": "Cancel",
"change-state": "Change state",
"click-to-display-alarms": "Click to display alarms:",
"click-to-return-default-pool": "Click here to return to the default pool",
"close": "Close",
"coming-soon": "Coming soon!",
"community": "Community",
@@ -37,6 +38,7 @@
"console-unavailable": "Console unavailable",
"copy": "Copy",
"cpu-provisioning": "CPU provisioning",
"cpu-provisioning-warning": "The number of vCPUs allocated exceeds the number of physical CPUs available. System performance could be affected",
"cpu-usage": "CPU usage",
"dashboard": "Dashboard",
"delete": "Delete",
@@ -55,6 +57,7 @@
"export-n-vms": "Export 1 VM | Export {n} VMs",
"export-n-vms-manually": "Export 1 VM manually | Export {n} VMs manually",
"export-table-to": "Export table to {type}",
"export-vm": "Export VM",
"export-vms": "Export VMs",
"export-vms-manually-information": "Some VM exports were not able to start automatically, probably due to your browser settings. To export them, you should click on each one. (Alternatively, copy the link as well.)",
"fetching-fresh-data": "Fetching fresh data",
@@ -176,6 +179,7 @@
"theme-auto": "Auto",
"theme-dark": "Dark",
"theme-light": "Light",
"this-vm-cant-be-migrated": "This VM can't be migrated",
"top-#": "Top {n}",
"total-cpus": "Total CPUs",
"total-free": "Total free",
@@ -188,5 +192,6 @@
"vm-is-running": "The VM is running",
"vms": "VMs",
"xo-lite-under-construction": "XOLite is under construction",
"you-are-currently-on": "You are currently on: {0}",
"zstd": "zstd"
}

View File

@@ -27,6 +27,7 @@
"cancel": "Annuler",
"change-state": "Changer l'état",
"click-to-display-alarms": "Cliquer pour afficher les alarmes :",
"click-to-return-default-pool": "Cliquer ici pour revenir au pool par défaut",
"close": "Fermer",
"coming-soon": "Bientôt disponible !",
"community": "Communauté",
@@ -37,6 +38,7 @@
"console-unavailable": "Console indisponible",
"copy": "Copier",
"cpu-provisioning": "Provisionnement CPU",
"cpu-provisioning-warning": "Le nombre de vCPU alloués dépasse le nombre de CPU physique disponible. Les performances du système pourraient être affectées",
"cpu-usage": "Utilisation CPU",
"dashboard": "Tableau de bord",
"delete": "Supprimer",
@@ -55,6 +57,7 @@
"export-n-vms": "Exporter 1 VM | Exporter {n} VMs",
"export-n-vms-manually": "Exporter 1 VM manuellement | Exporter {n} VMs manuellement",
"export-table-to": "Exporter le tableau en {type}",
"export-vm": "Exporter la VM",
"export-vms": "Exporter les VMs",
"export-vms-manually-information": "Certaines exportations de VMs n'ont pas pu démarrer automatiquement, peut-être en raison des paramètres du navigateur. Pour les exporter, vous devrez cliquer sur chacune d'entre elles. (Ou copier le lien.)",
"fetching-fresh-data": "Récupération de données à jour",
@@ -176,6 +179,7 @@
"theme-auto": "Auto",
"theme-dark": "Sombre",
"theme-light": "Clair",
"this-vm-cant-be-migrated": "Cette VM ne peut pas être migrée",
"top-#": "Top {n}",
"total-cpus": "Total CPUs",
"total-free": "Total libre",
@@ -188,5 +192,6 @@
"vm-is-running": "La VM est en cours d'exécution",
"vms": "VMs",
"xo-lite-under-construction": "XOLite est en construction",
"you-are-currently-on": "Vous êtes actuellement sur : {0}",
"zstd": "zstd"
}

View File

@@ -1,8 +1,10 @@
import XapiStats from "@/libs/xapi-stats";
import XenApi from "@/libs/xen-api/xen-api";
import { useLocalStorage } from "@vueuse/core";
import { useLocalStorage, useSessionStorage, whenever } from "@vueuse/core";
import { defineStore } from "pinia";
import { computed, ref, watchEffect } from "vue";
import { useRouter } from "vue-router";
import { useRoute } from "vue-router";
const HOST_URL = import.meta.env.PROD
? window.origin
@@ -15,7 +17,27 @@ enum STATUS {
}
export const useXenApiStore = defineStore("xen-api", () => {
const xenApi = new XenApi(HOST_URL);
// undefined not correctly handled. See https://github.com/vueuse/vueuse/issues/3595
const masterSessionStorage = useSessionStorage<null | string>("master", null);
const router = useRouter();
const route = useRoute();
whenever(
() => route.query.master,
async (newMaster) => {
masterSessionStorage.value = newMaster as string;
await router.replace({ query: { ...route.query, master: undefined } });
window.location.reload();
}
);
const hostUrl = new URL(HOST_URL);
if (masterSessionStorage.value !== null) {
hostUrl.hostname = masterSessionStorage.value;
}
const isPoolOverridden = hostUrl.origin !== new URL(HOST_URL).origin;
const xenApi = new XenApi(hostUrl.origin);
const xapiStats = new XapiStats(xenApi);
const storedSessionId = useLocalStorage<string | undefined>(
"sessionId",
@@ -75,14 +97,21 @@ export const useXenApiStore = defineStore("xen-api", () => {
status.value = STATUS.DISCONNECTED;
}
function resetPoolMasterIp() {
masterSessionStorage.value = null;
window.location.reload();
}
return {
isConnected,
isConnecting,
isPoolOverridden,
connect,
reconnect,
disconnect,
getXapi,
getXapiStats,
currentSessionId,
resetPoolMasterIp,
};
});

View File

@@ -135,23 +135,15 @@
</UiCard>
<UiCard class="group">
<UiCardTitle>{{ $t("language") }}</UiCardTitle>
<UiKeyValueList>
<UiKeyValueRow>
<template #value>
<FormWidget class="full-length" :before="faEarthAmericas">
<select v-model="$i18n.locale">
<option
:value="locale"
v-for="locale in $i18n.availableLocales"
:key="locale"
>
{{ locales[locale].name ?? locale }}
</option>
</select>
</FormWidget>
</template>
</UiKeyValueRow>
</UiKeyValueList>
<FormSelect :before="faEarthAmericas" v-model="$i18n.locale">
<option
:value="locale"
v-for="locale in $i18n.availableLocales"
:key="locale"
>
{{ locales[locale].name ?? locale }}
</option>
</FormSelect>
</UiCard>
</div>
</template>
@@ -174,7 +166,7 @@ import {
faGear,
faCheck,
} from "@fortawesome/free-solid-svg-icons";
import FormWidget from "@/components/FormWidget.vue";
import FormSelect from "@/components/form/FormSelect.vue";
import TitleBar from "@/components/TitleBar.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiKeyValueList from "@/components/ui/UiKeyValueList.vue";
@@ -249,8 +241,4 @@ h5 {
}
}
}
.full-length {
width: 100%;
}
</style>

View File

@@ -2,16 +2,17 @@
<div :class="{ 'no-ui': !uiStore.hasUi }" class="vm-console-view">
<div v-if="hasError">{{ $t("error-occurred") }}</div>
<UiSpinner v-else-if="!isReady" class="spinner" />
<div v-else-if="!isVmRunning" class="not-running">
<div><img alt="" src="@/assets/monitor.svg" /></div>
{{ $t("power-on-for-console") }}
</div>
<UiStatusPanel
v-else-if="!isVmRunning"
:image-source="monitor"
:title="$t('power-on-for-console')"
/>
<template v-else-if="vm && vmConsole">
<AppMenu horizontal>
<MenuItem
v-if="uiStore.hasUi"
:icon="faArrowUpRightFromSquare"
@click="openInNewTab"
v-if="uiStore.hasUi"
>
{{ $t("open-console-in-new-tab") }}
</MenuItem>
@@ -44,10 +45,12 @@
</template>
<script lang="ts" setup>
import monitor from "@/assets/monitor.svg";
import AppMenu from "@/components/menu/AppMenu.vue";
import MenuItem from "@/components/menu/MenuItem.vue";
import RemoteConsole from "@/components/RemoteConsole.vue";
import UiSpinner from "@/components/ui/UiSpinner.vue";
import UiStatusPanel from "@/components/ui/UiStatusPanel.vue";
import { VM_OPERATION, VM_POWER_STATE } from "@/libs/xen-api/xen-api.enums";
import type { XenApiVm } from "@/libs/xen-api/xen-api.types";
import { usePageTitleStore } from "@/stores/page-title.store";
@@ -158,7 +161,6 @@ const openInNewTab = () => {
height: 100%;
}
.not-running,
.not-available {
display: flex;
align-items: center;

View File

@@ -1,5 +1,5 @@
{
"extends": "@vue/tsconfig/tsconfig.node.json",
"extends": ["@tsconfig/node18/tsconfig.json", "@vue/tsconfig/tsconfig.json"],
"include": ["vite.config.*", "vitest.config.*", "cypress.config.*"],
"compilerOptions": {
"composite": true,

View File

@@ -1,10 +1,9 @@
{
"extends": "@vue/tsconfig/tsconfig.web.json",
"extends": "@vue/tsconfig/tsconfig.dom.json",
"include": ["env.d.ts", "src/**/*", "src/**/*.vue"],
"exclude": ["src/stories/**/*"],
"compilerOptions": {
"experimentalDecorators": true,
"lib": ["ES2019", "ES2020.Intl", "dom"],
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"]

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "@xen-orchestra/proxy",
"version": "0.26.38",
"version": "0.26.41",
"license": "AGPL-3.0-or-later",
"description": "XO Proxy used to remotely execute backup jobs",
"keywords": [
@@ -30,15 +30,15 @@
"@vates/cached-dns.lookup": "^1.0.0",
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/disposable": "^0.1.4",
"@vates/disposable": "^0.1.5",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.43.2",
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/backups": "^0.44.2",
"@xen-orchestra/fs": "^4.1.3",
"@xen-orchestra/log": "^0.6.0",
"@xen-orchestra/mixin": "^0.1.0",
"@xen-orchestra/mixins": "^0.14.0",
"@xen-orchestra/self-signed": "^0.1.3",
"@xen-orchestra/xapi": "^3.3.0",
"@xen-orchestra/xapi": "^4.0.0",
"ajv": "^8.0.3",
"app-conf": "^2.3.0",
"async-iterator-to-stream": "^1.1.0",
@@ -60,7 +60,7 @@
"source-map-support": "^0.5.16",
"stoppable": "^1.0.6",
"xdg-basedir": "^5.1.0",
"xen-api": "^1.3.6",
"xen-api": "^2.0.0",
"xo-common": "^0.8.0"
},
"devDependencies": {

View File

@@ -43,7 +43,7 @@
"pw": "^0.0.4",
"xdg-basedir": "^4.0.0",
"xo-lib": "^0.11.1",
"xo-vmdk-to-vhd": "^2.5.6"
"xo-vmdk-to-vhd": "^2.5.7"
},
"devDependencies": {
"@babel/cli": "^7.0.0",

View File

@@ -1,7 +1,7 @@
{
"license": "ISC",
"private": false,
"version": "0.3.0",
"version": "0.3.1",
"name": "@xen-orchestra/vmware-explorer",
"dependencies": {
"@vates/node-vsphere-soap": "^2.0.0",
@@ -10,7 +10,7 @@
"@xen-orchestra/log": "^0.6.0",
"lodash": "^4.17.21",
"node-fetch": "^3.3.0",
"vhd-lib": "^4.6.1"
"vhd-lib": "^4.7.0"
},
"engines": {
"node": ">=14"

View File

@@ -3,6 +3,7 @@ import { asyncMap } from '@xen-orchestra/async-map'
import { decorateClass } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { incorrectState, operationFailed } from 'xo-common/api-errors.js'
import pRetry from 'promise-toolbox/retry'
import { getCurrentVmUuid } from './_XenStore.mjs'
@@ -69,7 +70,12 @@ class Host {
if (await this.getField('host', ref, 'enabled')) {
await this.callAsync('host.disable', ref)
$defer(async () => {
await this.callAsync('host.enable', ref)
await pRetry(() => this.callAsync('host.enable', ref), {
delay: 10e3,
retries: 6,
when: { code: 'HOST_STILL_BOOTING' },
})
// Resuming VMs should occur after host enabling to avoid triggering a 'NO_HOSTS_AVAILABLE' error
return asyncEach(suspendedVms, vmRef => this.callAsync('VM.resume', vmRef, false, false))
})
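The `pRetry` call above retries `host.enable` every 10 seconds, up to 6 times, while the host still reports `HOST_STILL_BOOTING`. A minimal standalone sketch of that retry-on-error-code pattern (hypothetical `retryWhen` helper, not the actual promise-toolbox API):

```javascript
// Hypothetical sketch of the retry pattern used above: retry `fn` only while
// the thrown error carries the expected `code`, waiting `delay` ms between tries.
async function retryWhen(fn, { retries, delay, when }) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (error) {
      if (attempt >= retries || error.code !== when.code) {
        throw error
      }
      await new Promise(resolve => setTimeout(resolve, delay))
    }
  }
}
```

With `delay: 10e3` and `retries: 6`, this keeps trying for roughly a minute, which matches the behavior described in the changelog entry for #7231.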

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/xapi",
"version": "3.3.0",
"version": "4.0.0",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/xapi",
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
@@ -16,7 +16,7 @@
},
"main": "./index.mjs",
"peerDependencies": {
"xen-api": "^1.3.6"
"xen-api": "^2.0.0"
},
"scripts": {
"postversion": "npm publish --access public",
@@ -25,7 +25,7 @@
"dependencies": {
"@vates/async-each": "^1.0.0",
"@vates/decorate-with": "^2.0.0",
"@vates/nbd-client": "^2.0.0",
"@vates/nbd-client": "^2.0.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"d3-time-format": "^4.1.0",
@@ -34,7 +34,7 @@
"json-rpc-protocol": "^0.13.2",
"lodash": "^4.17.15",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.6.1",
"vhd-lib": "^4.7.0",
"xo-common": "^0.8.0"
},
"private": false,

View File

@@ -3,17 +3,17 @@ import pCatch from 'promise-toolbox/catch'
import pRetry from 'promise-toolbox/retry'
import { createLogger } from '@xen-orchestra/log'
import { decorateClass } from '@vates/decorate-with'
import { finished } from 'node:stream'
import { strict as assert } from 'node:assert'
import extractOpaqueRef from './_extractOpaqueRef.mjs'
import NbdClient from '@vates/nbd-client'
import { createNbdRawStream, createNbdVhdStream } from 'vhd-lib/createStreamNbd.js'
import MultiNbdClient from '@vates/nbd-client/multi.mjs'
import { createNbdVhdStream, createNbdRawStream } from 'vhd-lib/createStreamNbd.js'
import { VDI_FORMAT_RAW, VDI_FORMAT_VHD } from './index.mjs'
const { warn } = createLogger('xo:xapi:vdi')
const noop = Function.prototype
class Vdi {
async clone(vdiRef) {
return extractOpaqueRef(await this.callAsync('VDI.clone', vdiRef))
@@ -64,13 +64,13 @@ class Vdi {
})
}
async _getNbdClient(ref) {
async _getNbdClient(ref, { nbdConcurrency = 1 } = {}) {
const nbdInfos = await this.call('VDI.get_nbd_info', ref)
if (nbdInfos.length > 0) {
// a little bit of randomization to spread the load
const nbdInfo = nbdInfos[Math.floor(Math.random() * nbdInfos.length)]
try {
const nbdClient = new NbdClient(nbdInfo, this._nbdOptions)
const nbdClient = new MultiNbdClient(nbdInfos, { ...this._nbdOptions, nbdConcurrency })
await nbdClient.connect()
return nbdClient
} catch (err) {
@@ -83,7 +83,10 @@ class Vdi {
}
}
async exportContent(ref, { baseRef, cancelToken = CancelToken.none, format, preferNbd = this._preferNbd }) {
async exportContent(
ref,
{ baseRef, cancelToken = CancelToken.none, format, nbdConcurrency = 1, preferNbd = this._preferNbd }
) {
const query = {
format,
vdi: ref,
@@ -97,19 +100,23 @@ class Vdi {
let nbdClient, stream
try {
if (preferNbd) {
nbdClient = await this._getNbdClient(ref)
nbdClient = await this._getNbdClient(ref, { nbdConcurrency })
}
// the raw nbd export does not need to peek at the vhd source
if (nbdClient !== undefined && format === VDI_FORMAT_RAW) {
stream = createNbdRawStream(nbdClient)
} else {
// raw export without nbd or vhd exports needs a resource stream
const vdiName = await this.getField('VDI', ref, 'name_label')
stream = await this.getResource(cancelToken, '/export_raw_vdi/', {
query,
task: await this.task_create(`Exporting content of VDI ${await this.getField('VDI', ref, 'name_label')}`),
task: await this.task_create(`Exporting content of VDI ${vdiName}`),
})
if (nbdClient !== undefined && format === VDI_FORMAT_VHD) {
const taskRef = await this.task_create(`Exporting content of VDI ${vdiName} using NBD`)
stream = await createNbdVhdStream(nbdClient, stream)
stream.on('progress', progress => this.call('task.set_progress', taskRef, progress))
finished(stream, () => this.task_destroy(taskRef))
}
}
return stream

View File

@@ -491,7 +491,7 @@ class Vm {
exportedVmRef = await this.VM_snapshot(vmRef, { cancelToken, name_label: `[XO Export] ${vm.name_label}` })
destroySnapshot = () =>
this.VM_destroy(exportedVmRef).catch(error => {
warn('VM_export: failed to destroy snapshots', {
warn('VM_export: failed to destroy snapshot', {
error,
snapshotRef: exportedVmRef,
vmRef,

View File

@@ -1,7 +1,61 @@
# ChangeLog
## **5.89.0** (2023-11-30)
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Highlights
- [Restore] Show source remote and restoration time on a restored VM (PR [#7186](https://github.com/vatesfr/xen-orchestra/pull/7186))
- [Backup/Import] Show disk import status during Incremental Replication or restoration of Incremental Backup (PR [#7171](https://github.com/vatesfr/xen-orchestra/pull/7171))
- [VM/Console] Add a message to indicate that the console view has been [disabled](https://support.citrix.com/article/CTX217766/how-to-disable-the-console-for-the-vm-in-xencenter) for this VM [#6319](https://github.com/vatesfr/xen-orchestra/issues/6319) (PR [#7161](https://github.com/vatesfr/xen-orchestra/pull/7161))
- [REST API] `tags` property can be updated (PR [#7196](https://github.com/vatesfr/xen-orchestra/pull/7196))
- [REST API] A VDI export can now be imported in an existing VDI (PR [#7199](https://github.com/vatesfr/xen-orchestra/pull/7199))
- [REST API] Support VM import using the XVA format
- [File Restore] API method `backupNg.mountPartition` to manually mount a backup disk on the XOA
- [Backup] Implement differential restore (PR [#7202](https://github.com/vatesfr/xen-orchestra/pull/7202))
- [VM/Disks] Display task information when importing VDIs (PR [#7197](https://github.com/vatesfr/xen-orchestra/pull/7197))
- [VM Creation] Added ISO option in new VM form when creating from template with a disk [#3464](https://github.com/vatesfr/xen-orchestra/issues/3464) (PR [#7166](https://github.com/vatesfr/xen-orchestra/pull/7166))
- [Task] Show the related SR on the Garbage Collector task (VDI coalescing) (PR [#7189](https://github.com/vatesfr/xen-orchestra/pull/7189))
### Enhancements
- [Netbox] Ability to synchronize XO users as Netbox tenants (PR [#7158](https://github.com/vatesfr/xen-orchestra/pull/7158))
- [Backup] Don't back up VMs with the tag xo:no-bak (PR [#7173](https://github.com/vatesfr/xen-orchestra/pull/7173))
### Bug fixes
- [Backup/Restore] In case of snapshot with memory, create the suspend VDI on the correct SR instead of the default one
- [Import/ESXi] Handle `Cannot read properties of undefined (reading 'perDatastoreUsage')` error when importing VM without storage (PR [#7168](https://github.com/vatesfr/xen-orchestra/pull/7168))
- [Export/OVA] Handle export with resulting disk larger than 8.2GB (PR [#7183](https://github.com/vatesfr/xen-orchestra/pull/7183))
- [Self Service] Fix error displayed after adding a VM to a resource set (PR [#7144](https://github.com/vatesfr/xen-orchestra/pull/7144))
- [Backup/HealthCheck] Don't back up VMs created by health check when using smart mode (PR [#7173](https://github.com/vatesfr/xen-orchestra/pull/7173))
### Released packages
- vhd-lib 4.7.0
- @vates/multi-key-map 0.2.0
- @vates/disposable 0.1.5
- @xen-orchestra/fs 4.1.3
- xen-api 2.0.0
- @vates/nbd-client 2.0.1
- @xen-orchestra/xapi 4.0.0
- @xen-orchestra/backups 0.44.2
- @xen-orchestra/backups-cli 1.0.14
- @xen-orchestra/cr-seed-cli 1.0.0
- @xen-orchestra/proxy 0.26.41
- xo-vmdk-to-vhd 2.5.7
- @xen-orchestra/vmware-explorer 0.3.1
- xapi-explore-sr 0.4.2
- xo-cli 0.22.0
- xo-server 5.129.0
- xo-server-netbox 1.4.0
- xo-web 5.130.0
## **5.88.2** (2023-11-13)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
### Enhancement
- [REST API] Add `users` collection
@@ -13,8 +67,6 @@
## **5.88.1** (2023-11-07)
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Bug fixes
- [Netbox] Fix VMs' `site` property being unnecessarily updated on some versions of Netbox (PR [#7145](https://github.com/vatesfr/xen-orchestra/pull/7145))
@@ -83,8 +135,6 @@
## **5.87.0** (2023-09-29)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
### Highlights
- [Patches] Support new XenServer Updates system. See [our documentation](https://xen-orchestra.com/docs/updater.html#xenserver-updates). (PR [#7044](https://github.com/vatesfr/xen-orchestra/pull/7044))

View File

@@ -7,20 +7,29 @@
> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [Netbox] Ability to synchronize XO users as Netbox tenants (PR [#7158](https://github.com/vatesfr/xen-orchestra/pull/7158))
- [VM/Console] Add a message to indicate that the console view has been [disabled](https://support.citrix.com/article/CTX217766/how-to-disable-the-console-for-the-vm-in-xencenter) for this VM [#6319](https://github.com/vatesfr/xen-orchestra/issues/6319) (PR [#7161](https://github.com/vatesfr/xen-orchestra/pull/7161))
- [Restore] Show source remote and restoration time on a restored VM (PR [#7186](https://github.com/vatesfr/xen-orchestra/pull/7186))
- [Backup/Import] Show disk import status during Incremental Replication or restoration of Incremental Backup (PR [#7171](https://github.com/vatesfr/xen-orchestra/pull/7171))
- [VM Creation] Added ISO option in new VM form when creating from template with a disk [#3464](https://github.com/vatesfr/xen-orchestra/issues/3464) (PR [#7166](https://github.com/vatesfr/xen-orchestra/pull/7166))
- [Forget SR] Changed the modal message and added a confirmation text to be sure the action is understood by the user [#7148](https://github.com/vatesfr/xen-orchestra/issues/7148) (PR [#7155](https://github.com/vatesfr/xen-orchestra/pull/7155))
- [REST API] `/backups` has been renamed to `/backup` (redirections are in place for compatibility)
- [REST API] _VM backup & Replication_ jobs have been moved from `/backup/jobs/:id` to `/backup/jobs/vm/:id` (redirections are in place for compatibility)
- [REST API] _XO config & Pool metadata Backup_ jobs are available at `/backup/jobs/metadata`
- [REST API] _Mirror Backup_ jobs are available at `/backup/jobs/mirror`
- [Plugin/auth-saml] Add _Force re-authentication_ setting [Forum#67764](https://xcp-ng.org/forum/post/67764) (PR [#7232](https://github.com/vatesfr/xen-orchestra/pull/7232))
- [HTTP] `http.useForwardedHeaders` setting can be enabled when XO is behind a reverse proxy to fetch clients IP addresses from `X-Forwarded-*` headers [Forum#67625](https://xcp-ng.org/forum/post/67625) (PR [#7233](https://github.com/vatesfr/xen-orchestra/pull/7233))
- [Backup] Use multiple links to speed up NBD backups (PR [#7216](https://github.com/vatesfr/xen-orchestra/pull/7216))
- [Backup] Show if disk is differential or full in incremental backups (PR [#7222](https://github.com/vatesfr/xen-orchestra/pull/7222))
- [VDI] Create XAPI task during NBD export (PR [#7228](https://github.com/vatesfr/xen-orchestra/pull/7228))
### Bug fixes
- [Backup] Reduce memory consumption when using NBD (PR [#7216](https://github.com/vatesfr/xen-orchestra/pull/7216))
> Users must be able to say: “I had this issue, happy to know it's fixed”
- [Backup/Restore] In case of snapshot with memory, create the suspend VDI on the correct SR instead of the default one
- [Import/ESXi] Handle `Cannot read properties of undefined (reading 'perDatastoreUsage')` error when importing VM without storage (PR [#7168](https://github.com/vatesfr/xen-orchestra/pull/7168))
- [Export/OVA] Handle export with resulting disk larger than 8.2GB (PR [#7183](https://github.com/vatesfr/xen-orchestra/pull/7183))
- [Self Service] Fix error displayed after adding a VM to a resource set (PR [#7144](https://github.com/vatesfr/xen-orchestra/pull/7144))
- [REST API] Returns a proper 404 _Not Found_ error when a job does not exist instead of _Internal Server Error_
- [Host/Smart reboot] Automatically retries up to a minute when `HOST_STILL_BOOTING` [#7194](https://github.com/vatesfr/xen-orchestra/issues/7194) (PR [#7231](https://github.com/vatesfr/xen-orchestra/pull/7231))
- [Plugin/transport-slack] Compatibility with other services like Mattermost or Discord [#7130](https://github.com/vatesfr/xen-orchestra/issues/7130) (PR [#7220](https://github.com/vatesfr/xen-orchestra/pull/7220))
- [Host/Network] Fix error "PIF_IS_PHYSICAL" when trying to remove a PIF that had already been physically disconnected [#7193](https://github.com/vatesfr/xen-orchestra/issues/7193) (PR [#7221](https://github.com/vatesfr/xen-orchestra/pull/7221))
- [Mirror backup] Fix backup reports not being sent (PR [#7235](https://github.com/vatesfr/xen-orchestra/pull/7235))
- [RPU] VMs are correctly migrated to their original host (PR [#7238](https://github.com/vatesfr/xen-orchestra/pull/7238))
### Packages to release
@@ -38,14 +47,14 @@
<!--packages-start-->
- @vates/nbd-client patch
- @xen-orchestra/backups minor
- @xen-orchestra/cr-seed-cli major
- @xen-orchestra/vmware-explorer patch
- xen-api major
- xo-server patch
- xo-server-netbox minor
- xo-vmdk-to-vhd patch
- xo-web minor
- @vates/nbd-client major
- @xen-orchestra/backups patch
- @xen-orchestra/xapi minor
- vhd-lib minor
- xo-server minor
- xo-server-auth-saml minor
- xo-server-transport-email major
- xo-server-transport-slack patch
- xo-web patch
<!--packages-end-->

Binary files added (not shown): two images (27 KiB and 12 KiB) and docs/assets/nbd-enable.png (6.2 KiB, new file).

View File

@@ -64,3 +64,24 @@ For example, with a value of 2, the first two backups will be a key and a delta,
This is important because on rare occasions a backup can be corrupted, and in the case of incremental backups, this corruption might impact all the following backups in the chain. Occasionally performing a full backup limits how far a corrupted delta backup can propagate.
The value to use depends on your storage constraints and the frequency of your backups, but a value of 20 is a good start.
## NBD-enabled Backups
You can use the NBD network protocol for data transfer instead of the VHD handler provided by the XAPI. NBD-enabled backups are generally faster because the load on the Dom0 is reduced.
To enable NBD on the pool network, select the relevant pool, and navigate to the Network tab to modify the parameter:
![](./assets/nbd-connection.png)
This will securely transfer encrypted data from the host to the XOA.
When creating or editing an incremental (previously known as delta) backup and replication for this pool in the future, you have the option to enable NBD in the Advanced settings:
![](./assets/nbd-enable.png)
After the job is completed, you can verify whether NBD was used for the transfer in the backup log:
![](./assets/nbd-backup-log.png)
Note that NBD exports are not visible in the task list in versions before XO 5.90 (December 2023).
To learn more about the evolution of this feature across various XO releases, check out our blog posts for versions [5.76](https://xen-orchestra.com/blog/xen-orchestra-5-76/), [5.81](https://xen-orchestra.com/blog/xen-orchestra-5-81/), [5.82](https://xen-orchestra.com/blog/xen-orchestra-5-82/), and [5.86](https://xen-orchestra.com/blog/xen-orchestra-5-86/).

View File

@@ -106,13 +106,13 @@ XO needs the following packages to be installed. Redis is used as a database by
For example, on Debian/Ubuntu:
```sh
apt-get install build-essential redis-server libpng-dev git python3-minimal libvhdi-utils lvm2 cifs-utils nfs-common
apt-get install build-essential redis-server libpng-dev git python3-minimal libvhdi-utils lvm2 cifs-utils nfs-common ntfs-3g
```
On Fedora/CentOS like:
```sh
dnf install redis libpng-devel git libvhdi-tools lvm2 cifs-utils make automake gcc gcc-c++
dnf install redis libpng-devel git lvm2 cifs-utils make automake gcc gcc-c++ nfs-utils ntfs-3g
```
### Make sure Redis is running

View File

@@ -123,7 +123,7 @@ Content-Type: application/x-ndjson
## Properties update
> This feature is restricted to `name_label` and `name_description` at the moment.
> This feature is restricted to `name_label`, `name_description` and `tags` at the moment.
```sh
curl \
@@ -135,6 +135,30 @@ curl \
'https://xo.example.org/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac'
```
### Collections
For collection properties, like `tags`, it can be more practical to add or remove a single item without impacting the others.
An item can be created with `PUT <collection>/<item id>` and can be destroyed with `DELETE <collection>/<item id>`.
Adding a tag:
```sh
curl \
-X PUT \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
'https://xo.example.org/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac/tags/My%20tag'
```
Removing a tag:
```sh
curl \
-X DELETE \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
'https://xo.example.org/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac/tags/My%20tag'
```
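The same per-item URL pattern can be built programmatically; a minimal sketch (hypothetical `tagUrl` helper — only the URL construction is shown, the request itself would be a PUT or DELETE as above):

```javascript
// Hypothetical helper building the per-item collection URL used above:
// PUT on this URL creates the tag on the VM, DELETE removes it.
function tagUrl(baseUrl, vmUuid, tag) {
  // encodeURIComponent escapes spaces and other reserved characters ("My tag" → "My%20tag")
  return `${baseUrl}/rest/v0/vms/${vmUuid}/tags/${encodeURIComponent(tag)}`
}
```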
## VM and VDI destruction
For a VM:
@@ -175,9 +199,45 @@ curl \
> myDisk.vhd
```
## VM Import
A VM can be imported by posting to `/rest/v0/pools/:id/vms`.
```sh
curl \
-X POST \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
-T myDisk.raw \
'https://xo.example.org/rest/v0/pools/355ee47d-ff4c-4924-3db2-fd86ae629676/vms?sr=357bd56c-71f9-4b2a-83b8-3451dec04b8f' \
| cat
```
The `sr` query parameter can be used to specify on which SR the VM should be imported; if not specified, the default SR is used.
> Note: the final `| cat` ensures cURL's standard output is not a TTY, which is necessary for upload stats to be displayed.
## VDI Import
A VHD or a raw export can be imported on an SR to create a new VDI at `/rest/v0/srs/<sr uuid>/vdis`.
### Existing VDI
A VHD or a raw export can be imported into an existing VDI at `/rest/v0/vdis/<uuid>.vhd` and `/rest/v0/vdis/<uuid>.raw` respectively.
> Note: the size of the VDI must exactly match the size of the VDI that was previously exported.
```sh
curl \
-X PUT \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
-T myDisk.vhd \
'https://xo.example.org/rest/v0/vdis/1a269782-ea93-4c4c-897a-475365f7b674.vhd' \
| cat
```
> Note: the final `| cat` ensures cURL's standard output is not a TTY, which is necessary for upload stats to be displayed.
### New VDI
An export can also be imported on an SR to create a new VDI at `/rest/v0/srs/<sr uuid>/vdis`.
```sh
curl \

View File

@@ -23,7 +23,7 @@
"node": ">=10"
},
"dependencies": {
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/fs": "^4.1.3",
"cli-progress": "^3.1.0",
"exec-promise": "^0.7.0",
"getopts": "^2.2.3",
@@ -31,7 +31,7 @@
"lodash": "^4.17.21",
"promise-toolbox": "^0.21.0",
"uuid": "^9.0.0",
"vhd-lib": "^4.6.1"
"vhd-lib": "^4.7.0"
},
"scripts": {
"postversion": "npm publish",

View File

@@ -0,0 +1,184 @@
'use strict'
const { VhdAbstract, VhdNegative } = require('..')
const { describe, it } = require('test')
const assert = require('assert/strict')
const { unpackHeader, unpackFooter } = require('./_utils')
const { createHeader, createFooter } = require('../_createFooterHeader')
const _computeGeometryForSize = require('../_computeGeometryForSize')
const { FOOTER_SIZE, DISK_TYPES } = require('../_constants')
const VHD_BLOCK_LENGTH = 2 * 1024 * 1024
class VhdMock extends VhdAbstract {
#blockUsed
#header
#footer
get header() {
return this.#header
}
get footer() {
return this.#footer
}
constructor(header, footer, blockUsed = new Set()) {
super()
this.#header = header
this.#footer = footer
this.#blockUsed = blockUsed
}
containsBlock(blockId) {
return this.#blockUsed.has(blockId)
}
readBlock(blockId, onlyBitmap = false) {
const bitmap = Buffer.alloc(512, 255) // bitmaps are full of 1 bits
const data = Buffer.alloc(2 * 1024 * 1024, 0) // empty data is full of 0 bits
data.writeUint8(blockId)
return {
id: blockId,
bitmap,
data,
buffer: Buffer.concat([bitmap, data]),
}
}
readBlockAllocationTable() {}
readHeaderAndFooter() {}
_readParentLocatorData(id) {}
}
describe('vhd negative', async () => {
it(`throws when uid aren't chained `, () => {
const length = 10e8
let header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
const geometry = _computeGeometryForSize(length)
let footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const parent = new VhdMock(header, footer)
header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const child = new VhdMock(header, footer)
assert.throws(() => new VhdNegative(parent, child), { message: 'NOT_CHAINED' })
})
it('throws when size changed', () => {
const childLength = 10e8
const parentLength = 10e8 + 1
let header = unpackHeader(createHeader(parentLength / VHD_BLOCK_LENGTH))
let geometry = _computeGeometryForSize(parentLength)
let footer = unpackFooter(
createFooter(parentLength, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const parent = new VhdMock(header, footer)
header = unpackHeader(createHeader(childLength / VHD_BLOCK_LENGTH))
geometry = _computeGeometryForSize(childLength)
header.parentUuid = footer.uuid
footer = unpackFooter(
createFooter(childLength, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const child = new VhdMock(header, footer)
assert.throws(() => new VhdNegative(parent, child), { message: 'GEOMETRY_CHANGED' })
})
it('throws when child is not differencing', () => {
const length = 10e8
let header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
const geometry = _computeGeometryForSize(length)
let footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const parent = new VhdMock(header, footer)
header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
header.parentUuid = footer.uuid
footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DYNAMIC)
)
const child = new VhdMock(header, footer)
assert.throws(() => new VhdNegative(parent, child), { message: 'CHILD_NOT_DIFFERENCING' })
})
it(`throws when writing into a negative VHD`, async () => {
const length = 10e8
let header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
const geometry = _computeGeometryForSize(length)
let footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const parent = new VhdMock(header, footer)
const parentUuid = footer.uuid
header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
header.parentUuid = parentUuid
footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const child = new VhdMock(header, footer)
const vhd = new VhdNegative(parent, child)
// await assert.rejects( ()=> vhd.writeFooter())
assert.throws(() => vhd.writeHeader())
assert.throws(() => vhd.writeBlockAllocationTable())
assert.throws(() => vhd.writeEntireBlock())
assert.throws(() => vhd.mergeBlock(), { message: `can't coalesce block into a vhd negative` })
})
it('normal case', async () => {
const length = 10e8
let header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
let geometry = _computeGeometryForSize(length)
let footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DYNAMIC)
)
const parent = new VhdMock(header, footer, new Set([1, 3]))
const parentUuid = footer.uuid
header = unpackHeader(createHeader(length / VHD_BLOCK_LENGTH))
header.parentUuid = parentUuid
geometry = _computeGeometryForSize(length)
footer = unpackFooter(
createFooter(length, Math.floor(Date.now() / 1000), geometry, FOOTER_SIZE, DISK_TYPES.DIFFERENCING)
)
const childUuid = footer.uuid
const child = new VhdMock(header, footer, new Set([2, 3]))
const vhd = new VhdNegative(parent, child)
assert.equal(vhd.header.parentUuid.equals(childUuid), true)
assert.equal(vhd.footer.diskType, DISK_TYPES.DIFFERENCING)
await vhd.readBlockAllocationTable()
await vhd.readHeaderAndFooter()
await vhd.readParentLocator(0)
assert.equal(vhd.header.parentUuid, childUuid)
assert.equal(vhd.footer.diskType, DISK_TYPES.DIFFERENCING)
assert.equal(vhd.containsBlock(1), false)
assert.equal(vhd.containsBlock(2), true)
assert.equal(vhd.containsBlock(3), true)
assert.equal(vhd.containsBlock(4), false)
const expected = [0, 1, 0, 3, 0]
const expectedBitmap = Buffer.alloc(512, 255) // bitmaps must always be entirely filled with 1 bits
for (let index = 0; index < 5; index++) {
if (vhd.containsBlock(index)) {
const { id, data, bitmap } = await vhd.readBlock(index)
assert.equal(index, id)
assert.equal(expectedBitmap.equals(bitmap), true)
assert.equal(data.readUInt8(0), expected[index])
} else {
assert.equal([2, 3].includes(index), false)
}
}
})
})

View File

@@ -0,0 +1,84 @@
'use strict'
const UUID = require('uuid')
const { DISK_TYPES } = require('../_constants')
const { VhdAbstract } = require('./VhdAbstract')
const { computeBlockBitmapSize } = require('./_utils')
const assert = require('node:assert')
/**
 * Build an incremental VHD which, when applied to the child, reverts it to the state of its parent.
 * @param {VhdAbstract} parent
 * @param {VhdAbstract} child
 */
class VhdNegative extends VhdAbstract {
#parent
#child
get header() {
// we want the chain parent => child => negative,
// where "=>" means "is the parent of"
return {
...this.#parent.header,
parentUuid: this.#child.footer.uuid,
}
}
get footer() {
// by construction, a negative VHD is a differencing disk
return {
...this.#parent.footer,
diskType: DISK_TYPES.DIFFERENCING,
}
}
constructor(parent, child) {
super()
this.#parent = parent
this.#child = child
assert.strictEqual(UUID.stringify(child.header.parentUuid), UUID.stringify(parent.footer.uuid), 'NOT_CHAINED')
assert.strictEqual(child.footer.diskType, DISK_TYPES.DIFFERENCING, 'CHILD_NOT_DIFFERENCING')
// we don't want to handle alignment and missing blocks for now
// the last block may contain partially empty data when the size changes
assert.strictEqual(child.footer.currentSize, parent.footer.currentSize, 'GEOMETRY_CHANGED')
}
async readBlockAllocationTable() {
return Promise.all([this.#parent.readBlockAllocationTable(), this.#child.readBlockAllocationTable()])
}
containsBlock(blockId) {
return this.#child.containsBlock(blockId)
}
async readHeaderAndFooter() {
return Promise.all([this.#parent.readHeaderAndFooter(), this.#child.readHeaderAndFooter()])
}
async readBlock(blockId, onlyBitmap = false) {
// read the block from the parent if it contains it, otherwise return an empty block
if (this.#parent.containsBlock(blockId)) {
return this.#parent.readBlock(blockId, onlyBitmap)
}
const bitmap = Buffer.alloc(computeBlockBitmapSize(this.header.blockSize), 255) // bitmaps are entirely filled with 1 bits
const data = Buffer.alloc(this.header.blockSize, 0) // empty blocks are entirely filled with 0 bits
return {
id: blockId,
bitmap,
data,
buffer: Buffer.concat([bitmap, data]),
}
}
mergeBlock(child, blockId) {
throw new Error(`can't coalesce block into a vhd negative`)
}
_readParentLocatorData(id) {
return this.#parent._readParentLocatorData(id)
}
}
exports.VhdNegative = VhdNegative
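The block-selection rule above can be sketched with a toy model (hypothetical `NegativeDisk` class with plain Maps and Sets standing in for real VHDs, not the vhd-lib API): the negative disk advertises the child's block set, but serves the parent's content, falling back to an all-zero block where the parent never allocated one.

```js
// Toy model of VhdNegative's read semantics: blocks rewritten by the child are
// "covered"; their content comes from the parent, or is all zeroes when the
// parent never allocated them.
class NegativeDisk {
  constructor(parentBlocks, childBlocks) {
    this.parent = parentBlocks // Map<blockId, Buffer>
    this.child = childBlocks // Set<blockId>
  }
  containsBlock(id) {
    return this.child.has(id)
  }
  readBlock(id) {
    return this.parent.get(id) ?? Buffer.alloc(4, 0) // zero-filled fallback
  }
}

const parent = new Map([
  [1, Buffer.from([1, 0, 0, 0])],
  [3, Buffer.from([3, 0, 0, 0])],
])
const negative = new NegativeDisk(parent, new Set([2, 3]))
negative.containsBlock(1) // false: the child never rewrote block 1
negative.containsBlock(2) // true: reading it yields zeroes (parent had no data)
negative.readBlock(3).readUInt8(0) // 3: the parent's content wins
```

This mirrors the "normal case" test above, where parent blocks {1, 3} and child blocks {2, 3} produce a negative VHD containing exactly blocks {2, 3}.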

View File

@@ -120,7 +120,8 @@ const VhdSynthetic = class VhdSynthetic extends VhdAbstract {
}
// add decorated static method
VhdSynthetic.fromVhdChain = Disposable.factory(async function* fromVhdChain(handler, childPath) {
// `until` is not included in the result: the chain will stop at its child
VhdSynthetic.fromVhdChain = Disposable.factory(async function* fromVhdChain(handler, childPath, { until } = {}) {
let vhdPath = childPath
let vhd
const vhds = []
@@ -128,8 +129,11 @@ VhdSynthetic.fromVhdChain = Disposable.factory(async function* fromVhdChain(hand
vhd = yield openVhd(handler, vhdPath)
vhds.unshift(vhd) // from oldest to most recent
vhdPath = resolveRelativeFromFile(vhdPath, vhd.header.parentUnicodeName)
} while (vhd.footer.diskType !== DISK_TYPES.DYNAMIC)
} while (vhd.footer.diskType !== DISK_TYPES.DYNAMIC && vhdPath !== until)
if (until !== undefined && vhdPath !== until) {
throw new Error(`Didn't find ${until} as a parent of ${childPath}`)
}
const synthetic = new VhdSynthetic(vhds)
await synthetic.readHeaderAndFooter()
yield synthetic
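The stop condition of `fromVhdChain` can be illustrated with a standalone walker. This is a sketch: a plain object mapping each VHD path to its parent path stands in for the handler, and a missing entry marks a dynamic (root) VHD.

```js
// Mirrors fromVhdChain's loop: walk towards the root, oldest first, and stop
// either at a dynamic VHD or just below `until` (which is not included).
function chainPaths(parents, childPath, until) {
  const paths = []
  let vhdPath = childPath
  let isDynamic
  do {
    paths.unshift(vhdPath) // from oldest to most recent
    isDynamic = parents[vhdPath] === undefined
    vhdPath = parents[vhdPath]
  } while (!isDynamic && vhdPath !== until)
  if (until !== undefined && vhdPath !== until) {
    throw new Error(`Didn't find ${until} as a parent of ${childPath}`)
  }
  return paths
}

const parents = { 'c.vhd': 'b.vhd', 'b.vhd': 'a.vhd' } // a.vhd is dynamic
chainPaths(parents, 'c.vhd') // ['a.vhd', 'b.vhd', 'c.vhd']
chainPaths(parents, 'c.vhd', 'a.vhd') // ['b.vhd', 'c.vhd']: stops below `until`
```

Asking for an `until` that is not an ancestor throws, matching the error in the real implementation.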

View File

@@ -2,6 +2,10 @@
const { dirname, resolve } = require('path')
const resolveRelativeFromFile = (file, path) => resolve('/', dirname(file), path).slice(1)
const resolveRelativeFromFile = (file, path) => {
if (file.startsWith('/')) {
return resolve(dirname(file), path)
}
return resolve('/', dirname(file), path).slice(1)
}
module.exports = resolveRelativeFromFile

View File

@@ -1,6 +1,6 @@
'use strict'
const { finished, Readable } = require('node:stream')
const { readChunkStrict, skipStrict } = require('@vates/read-chunk')
const { Readable } = require('node:stream')
const { unpackHeader } = require('./Vhd/_utils')
const {
FOOTER_SIZE,
@@ -14,18 +14,38 @@ const {
const { fuHeader, checksumStruct } = require('./_structs')
const assert = require('node:assert')
exports.createNbdRawStream = async function createRawStream(nbdClient) {
const stream = Readable.from(nbdClient.readBlocks())
const MAX_DURATION_BETWEEN_PROGRESS_EMIT = 5e3
const MIN_TRESHOLD_PERCENT_BETWEEN_PROGRESS_EMIT = 1
exports.createNbdRawStream = function createRawStream(nbdClient) {
const exportSize = Number(nbdClient.exportSize)
const chunkSize = 2 * 1024 * 1024
const indexGenerator = function* () {
const nbBlocks = Math.ceil(exportSize / chunkSize)
for (let index = 0; index < nbBlocks; index++) {
yield { index, size: chunkSize }
}
}
const stream = Readable.from(nbdClient.readBlocks(indexGenerator), { objectMode: false })
stream.on('error', () => nbdClient.disconnect())
stream.on('end', () => nbdClient.disconnect())
return stream
}
exports.createNbdVhdStream = async function createVhdStream(nbdClient, sourceStream) {
exports.createNbdVhdStream = async function createVhdStream(
nbdClient,
sourceStream,
{
maxDurationBetweenProgressEmit = MAX_DURATION_BETWEEN_PROGRESS_EMIT,
minTresholdPercentBetweenProgressEmit = MIN_TRESHOLD_PERCENT_BETWEEN_PROGRESS_EMIT,
} = {}
) {
const bufFooter = await readChunkStrict(sourceStream, FOOTER_SIZE)
const header = unpackHeader(await readChunkStrict(sourceStream, HEADER_SIZE))
header.tableOffset = FOOTER_SIZE + HEADER_SIZE
// compute BAT in order
const batSize = Math.ceil((header.maxTableEntries * 4) / SECTOR_SIZE) * SECTOR_SIZE
await skipStrict(sourceStream, header.tableOffset - (FOOTER_SIZE + HEADER_SIZE))
@@ -47,7 +67,6 @@ exports.createNbdVhdStream = async function createVhdStream(nbdClient, sourceStr
precLocator = offset
}
}
header.tableOffset = FOOTER_SIZE + HEADER_SIZE
const rawHeader = fuHeader.pack(header)
checksumStruct(rawHeader, fuHeader)
@@ -69,10 +88,35 @@ exports.createNbdVhdStream = async function createVhdStream(nbdClient, sourceStr
}
}
const totalLength = (offsetSector + blockSizeInSectors + 1) /* end footer */ * SECTOR_SIZE
let lengthRead = 0
let lastUpdate = 0
let lastLengthRead = 0
function throttleEmitProgress() {
const now = Date.now()
if (
lengthRead - lastLengthRead > (minTresholdPercentBetweenProgressEmit / 100) * totalLength ||
(now - lastUpdate > maxDurationBetweenProgressEmit && lengthRead !== lastLengthRead)
) {
stream.emit('progress', lengthRead / totalLength)
lastUpdate = now
lastLengthRead = lengthRead
}
}
function trackAndGet(buffer) {
lengthRead += buffer.length
throttleEmitProgress()
return buffer
}
async function* iterator() {
yield bufFooter
yield rawHeader
yield bat
yield trackAndGet(bufFooter)
yield trackAndGet(rawHeader)
yield trackAndGet(bat)
let precBlocOffset = FOOTER_SIZE + HEADER_SIZE + batSize
for (let i = 0; i < PARENT_LOCATOR_ENTRIES; i++) {
@@ -82,7 +126,7 @@ exports.createNbdVhdStream = async function createVhdStream(nbdClient, sourceStr
await skipStrict(sourceStream, parentLocatorOffset - precBlocOffset)
const data = await readChunkStrict(sourceStream, space)
precBlocOffset = parentLocatorOffset + space
yield data
yield trackAndGet(data)
}
}
@@ -97,16 +141,20 @@ exports.createNbdVhdStream = async function createVhdStream(nbdClient, sourceStr
})
const bitmap = Buffer.alloc(SECTOR_SIZE, 255)
for await (const block of nbdIterator) {
yield bitmap // don't forget the bitmap before the block
yield block
yield trackAndGet(bitmap) // don't forget the bitmap before the block
yield trackAndGet(block)
}
yield bufFooter
yield trackAndGet(bufFooter)
}
const stream = Readable.from(iterator())
stream.length = (offsetSector + blockSizeInSectors + 1) /* end footer */ * SECTOR_SIZE
const stream = Readable.from(iterator(), { objectMode: false })
stream.length = totalLength
stream._nbd = true
stream.on('error', () => nbdClient.disconnect())
stream.on('end', () => nbdClient.disconnect())
finished(stream, () => {
clearInterval(interval)
nbdClient.disconnect()
})
const interval = setInterval(throttleEmitProgress, maxDurationBetweenProgressEmit)
return stream
}
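The throttling rule above (emit when more than the threshold percentage of the total has been read since the last emit, or when the maximum duration has elapsed with new data) can be factored out and exercised on its own. This is a sketch with an injectable clock, not the stream code itself:

```js
// Standalone version of the progress throttle used by createNbdVhdStream
function makeProgressThrottle({ total, minPercent = 1, maxDurationMs = 5e3, emit, now = Date.now }) {
  let lengthRead = 0
  let lastUpdate = 0
  let lastLengthRead = 0
  return function track(chunkLength) {
    lengthRead += chunkLength
    const t = now()
    if (
      lengthRead - lastLengthRead > (minPercent / 100) * total ||
      (t - lastUpdate > maxDurationMs && lengthRead !== lastLengthRead)
    ) {
      emit(lengthRead / total)
      lastUpdate = t
      lastLengthRead = lengthRead
    }
  }
}

const emitted = []
const track = makeProgressThrottle({ total: 1000, emit: p => emitted.push(p), now: () => 0 })
track(20) // 2% read: emits 0.02
track(5) // only 0.5% more since the last emit: throttled
track(10) // now 1.5% since the last emit: emits 0.035
```

With a frozen clock, only the percentage branch fires; the `setInterval` in the real code covers the elapsed-time branch for slow transfers.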

View File

@@ -13,4 +13,5 @@ exports.VhdAbstract = require('./Vhd/VhdAbstract').VhdAbstract
exports.VhdDirectory = require('./Vhd/VhdDirectory').VhdDirectory
exports.VhdFile = require('./Vhd/VhdFile').VhdFile
exports.VhdSynthetic = require('./Vhd/VhdSynthetic').VhdSynthetic
exports.VhdNegative = require('./Vhd/VhdNegative').VhdNegative
exports.Constants = require('./_constants')

View File

@@ -223,6 +223,15 @@ class Merger {
)
// ensure data size is correct
await this.#writeState()
try {
// vhd file
this.#state.vhdSize = await parentVhd.getSize()
} catch (err) {
// vhd directory
// parentVhd has its bat already loaded
this.#state.vhdSize = parentVhd.streamSize()
}
this.#onProgress({ total: nBlocks, done: nBlocks })
}
@@ -294,9 +303,10 @@ class Merger {
}
async #cleanup() {
const mergedSize = this.#state?.mergedDataSize ?? 0
const finalVhdSize = this.#state?.vhdSize ?? 0
const mergedDataSize = this.#state?.mergedDataSize ?? 0
await this.#handler.unlink(this.#statePath).catch(warn)
return mergedSize
return { mergedDataSize, finalVhdSize }
}
}

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "vhd-lib",
"version": "4.6.1",
"version": "4.7.0",
"license": "AGPL-3.0-or-later",
"description": "Primitives for VHD file handling",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/packages/vhd-lib",
@@ -20,7 +20,7 @@
"@vates/read-chunk": "^1.2.0",
"@vates/stream-reader": "^0.1.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/fs": "^4.1.3",
"@xen-orchestra/log": "^0.6.0",
"async-iterator-to-stream": "^1.0.2",
"decorator-synchronized": "^0.6.0",
@@ -33,7 +33,7 @@
"uuid": "^9.0.0"
},
"devDependencies": {
"@xen-orchestra/fs": "^4.1.2",
"@xen-orchestra/fs": "^4.1.3",
"execa": "^5.0.0",
"get-stream": "^6.0.0",
"rimraf": "^5.0.1",

View File

@@ -1,7 +1,7 @@
{
"private": false,
"name": "xapi-explore-sr",
"version": "0.4.1",
"version": "0.4.2",
"license": "ISC",
"description": "Display the list of VDIs (unmanaged and snapshots included) of a SR",
"keywords": [
@@ -40,7 +40,7 @@
"human-format": "^1.0.0",
"lodash": "^4.17.4",
"pw": "^0.0.4",
"xen-api": "^1.3.6"
"xen-api": "^2.0.0"
},
"scripts": {
"postversion": "npm publish"

View File

@@ -1,111 +0,0 @@
```js
import { Xapi } from 'xen-api'
// bare-bones XAPI client
const xapi = new Xapi({
// URL to a host belonging to the XCP-ng/XenServer pool we want to connect to
url: 'https://xen1.company.net',
// credentials used to connect to this XAPI
auth: {
user: 'root',
password: 'important secret password',
},
// if true, only side-effect-free calls will be allowed
readOnly: false,
})
// ensure that the connection is working
await xapi.checkConnection()
// call a XAPI method
//
// see available methods here: https://xapi-project.github.io/xen-api/
const result = await xapi.call(
// name of the method
'VM.snapshot',
// list of params
[vm.$ref, 'My new snapshot'],
// options
{
// AbortSignal that can be used to stop the call
//
// Note: this will not stop/rollback the side-effects of the call
signal,
}
)
// after a call (or checkConnection) has succeeded, the following properties are available
// list of classes available on this XAPI
xapi.classes
// timestamp of the last reply from XAPI
xapi.lastReply
// pool record of this XAPI
xapi.pool
// secret identifier of the current session
//
// it might become obsolete; in that case, it will be automatically renewed by the next call
xapi.sessionId
// invalidate the session identifier
await xapi.logOut()
```
```js
import { Proxy } from 'xen-api/proxy'
const proxy = new Proxy(xapi)
await proxy.VM.snapshot()
```
```js
import { Events } from 'xen-api/events'
const events = new Events(xapi)
// ensure that all events until now have been received and processed
await events.barrier()
// watch events on tasks and wait for a task to finish
const task = await events.waitTask(taskRef, { signal })
// for long-running actions, it's better to use an async call, which is based on tasks
const result = await events.asyncCall(method)
const stop = events.watch(
// class that we are interested in
//
// use `*` for all classes
'pool',
// called each time a new event for this class has been received
//
// https://xapi-project.github.io/xen-api/classes/event.html
event => {
stop()
}
)
// to really stop watching all events, simply remove all watchers
events.clear()
```
```js
import { Cache } from 'xen-api/events'
const cache = new Cache(watcher)
const host = await cache.get('host', 'OpaqueRef:1c3f19c8-f80a-464d-9c48-a2c19d4e4fc3')
const vm = await cache.getByUuid('VM', '355ee47d-ff4c-4924-3db2-fd86ae629676')
cache.clear()
```

32
packages/xen-api/_Ref.mjs Normal file
View File

@@ -0,0 +1,32 @@
const EMPTY = 'OpaqueRef:NULL'
const PREFIX = 'OpaqueRef:'
export default {
// Reference used to indicate the absence of an object
EMPTY,
// Whether this value is a reference (probably) pointing to an object
isNotEmpty(val) {
return val !== EMPTY && typeof val === 'string' && val.startsWith(PREFIX)
},
// Whether this value looks like a reference
is(val) {
return (
typeof val === 'string' &&
(val.startsWith(PREFIX) ||
// 2019-02-07 - JFT: even though `value` should not be an empty string for
// a ref property, a user hit this case on XenServer 7.0 on the CD VBD
// of a VM created by XenCenter
val === '' ||
// 2021-03-08 - JFT: there is a bug in XCP-ng/XenServer which leads to
// some refs being `Ref:*` instead of being rewritten
//
// We'll consider them as empty refs in this lib to avoid issues with
// _wrapRecord.
//
// See https://github.com/xapi-project/xen-api/issues/4338
val.startsWith('Ref:'))
)
},
}
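The distinction between the two predicates shows up with the edge cases documented above; the module is copied here (inline object, no export) for a self-contained check:

```js
const EMPTY = 'OpaqueRef:NULL'
const PREFIX = 'OpaqueRef:'
const Ref = {
  EMPTY,
  // a real, non-null reference
  isNotEmpty: val => val !== EMPTY && typeof val === 'string' && val.startsWith(PREFIX),
  // anything that merely looks like a reference, including the buggy forms
  is: val => typeof val === 'string' && (val.startsWith(PREFIX) || val === '' || val.startsWith('Ref:')),
}

Ref.is('OpaqueRef:NULL') // true: it is a ref, just an empty one
Ref.isNotEmpty('OpaqueRef:NULL') // false
Ref.is('Ref:42') // true: tolerated XCP-ng/XenServer quirk
Ref.is('') // true: tolerated XenServer 7.0 quirk
Ref.isNotEmpty('OpaqueRef:abc') // true
```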

View File

@@ -0,0 +1,30 @@
import { BaseError } from 'make-error'
export default class XapiError extends BaseError {
static wrap(error) {
let code, params
if (Array.isArray(error)) {
// < XenServer 7.3
;[code, ...params] = error
} else {
code = error.message
params = error.data
if (!Array.isArray(params)) {
params = []
}
}
return new XapiError(code, params)
}
constructor(code, params) {
super(`${code}(${params.join(', ')})`)
this.code = code
this.params = params
// slots that can be assigned later
this.call = undefined
this.url = undefined
this.task = undefined
}
}
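Both wire formats normalize to the same shape; a trimmed re-implementation (plain `Error` base instead of `make-error`, the `call`/`url`/`task` slots omitted) illustrates it:

```js
class XapiError extends Error {
  static wrap(error) {
    let code, params
    if (Array.isArray(error)) {
      // < XenServer 7.3: errors arrive as [code, ...params]
      ;[code, ...params] = error
    } else {
      code = error.message
      params = Array.isArray(error.data) ? error.data : []
    }
    return new XapiError(code, params)
  }
  constructor(code, params) {
    super(`${code}(${params.join(', ')})`)
    this.code = code
    this.params = params
  }
}

const err = XapiError.wrap(['VM_BAD_POWER_STATE', 'OpaqueRef:x', 'halted', 'running'])
err.code // 'VM_BAD_POWER_STATE'
err.message // 'VM_BAD_POWER_STATE(OpaqueRef:x, halted, running)'
```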

View File

@@ -0,0 +1,3 @@
import debug from 'debug'
export default debug('xen-api')

View File

@@ -0,0 +1,22 @@
import { Cancel } from 'promise-toolbox'
import XapiError from './_XapiError.mjs'
export default task => {
const { status } = task
if (status === 'cancelled') {
return Promise.reject(new Cancel('task canceled'))
}
if (status === 'failure') {
const error = XapiError.wrap(task.error_info)
error.task = task
return Promise.reject(error)
}
if (status === 'success') {
// the result might be:
// - empty string
// - an opaque reference
// - an XML-RPC value
return Promise.resolve(task.result)
}
}
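The three-way dispatch above can be sketched standalone; in this sketch plain `Error` objects stand in for promise-toolbox's `Cancel` and for `XapiError.wrap`:

```js
// Sketch of the task-result dispatch: cancelled and failed tasks reject,
// successful tasks resolve, any other status yields undefined
function getTaskResult(task) {
  const { status } = task
  if (status === 'cancelled') {
    return Promise.reject(new Error('task canceled'))
  }
  if (status === 'failure') {
    return Promise.reject(new Error(String(task.error_info)))
  }
  if (status === 'success') {
    // the result might be an empty string, an opaque reference or an XML-RPC value
    return Promise.resolve(task.result)
  }
  // any other status (e.g. pending): return undefined so the caller keeps waiting
}
```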

View File

@@ -0,0 +1,3 @@
const SUFFIX = '.get_all_records'
export default method => method.endsWith(SUFFIX)

View File

@@ -0,0 +1,6 @@
const RE = /^[^.]+\.get_/
export default function isReadOnlyCall(method, args) {
const n = args.length
return (n === 0 || (n === 1 && typeof args[0] === 'string')) && RE.test(method)
}
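The heuristic accepts only getters called with no argument or a single string argument (anything richer might mutate state); copied here for a quick check:

```js
// a call is considered read-only when it is a `<class>.get_*` method
// invoked with no argument or a single string argument
const RE = /^[^.]+\.get_/
function isReadOnlyCall(method, args) {
  const n = args.length
  return (n === 0 || (n === 1 && typeof args[0] === 'string')) && RE.test(method)
}

isReadOnlyCall('VM.get_record', ['OpaqueRef:x']) // true
isReadOnlyCall('VM.start', ['OpaqueRef:x']) // false: not a getter
isReadOnlyCall('VM.get_record', [{}]) // false: non-string argument
```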

View File

@@ -0,0 +1,8 @@
export default (setting, defaultValue) =>
setting === undefined
? () => defaultValue
: typeof setting === 'function'
? setting
: typeof setting === 'object'
? method => setting[method] ?? setting['*'] ?? defaultValue
: () => setting
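The nested ternary covers four shapes of setting: absent, function, per-method object (with `'*'` wildcard), and scalar. A usage sketch (hypothetical per-method `timeout` setting) makes them concrete:

```js
// normalize a call setting into a `method => value` lookup function
const makeCallSetting = (setting, defaultValue) =>
  setting === undefined
    ? () => defaultValue
    : typeof setting === 'function'
      ? setting
      : typeof setting === 'object'
        ? method => setting[method] ?? setting['*'] ?? defaultValue
        : () => setting

makeCallSetting(undefined, 60e3)('VM.start') // 60000: default value
makeCallSetting(30e3, 60e3)('VM.start') // 30000: scalar applies to every method
makeCallSetting({ 'VM.start': 10e3, '*': 5e3 }, 60e3)('VM.start') // 10000: exact match
makeCallSetting({ 'VM.start': 10e3, '*': 5e3 }, 60e3)('VM.copy') // 5000: falls back to '*'
```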

View File

@@ -0,0 +1,26 @@
const URL_RE = /^(?:(https?:)\/*)?(?:(([^:]*)(?::([^@]*))?)@)?(\[[^\]]+\]|[^:/]+)(?::([0-9]+))?(\/[^?#]*)?$/
export default url => {
const matches = URL_RE.exec(url)
if (matches === null) {
throw new Error('invalid URL: ' + url)
}
const [, protocol = 'https:', auth, username = '', password = '', hostname, port, pathname = '/'] = matches
const parsedUrl = {
protocol,
hostname,
port,
pathname,
// compat with url.parse
auth,
}
if (username !== '') {
parsedUrl.username = decodeURIComponent(username)
}
if (password !== '') {
parsedUrl.password = decodeURIComponent(password)
}
return parsedUrl
}

View File

@@ -0,0 +1,50 @@
import t from 'tap'
import parseUrl from './_parseUrl.mjs'
const data = {
'xcp.company.lan': {
hostname: 'xcp.company.lan',
pathname: '/',
protocol: 'https:',
},
'[::1]': {
hostname: '[::1]',
pathname: '/',
protocol: 'https:',
},
'http://username:password@xcp.company.lan': {
auth: 'username:password',
hostname: 'xcp.company.lan',
password: 'password',
pathname: '/',
protocol: 'http:',
username: 'username',
},
'https://username@xcp.company.lan': {
auth: 'username',
hostname: 'xcp.company.lan',
pathname: '/',
protocol: 'https:',
username: 'username',
},
}
t.test('invalid url', function (t) {
t.throws(() => parseUrl(''))
t.end()
})
for (const url of Object.keys(data)) {
t.test(url, function (t) {
const parsed = parseUrl(url)
for (const key of Object.keys(parsed)) {
if (parsed[key] === undefined) {
delete parsed[key]
}
}
t.same(parsed, data[url])
t.end()
})
}

View File

@@ -0,0 +1,17 @@
import mapValues from 'lodash/mapValues.js'
export default function replaceSensitiveValues(value, replacement) {
function helper(value, name) {
if (name === 'password' && typeof value === 'string') {
return replacement
}
if (typeof value !== 'object' || value === null) {
return value
}
return Array.isArray(value) ? value.map(helper) : mapValues(value, helper)
}
return helper(value)
}
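The recursion replaces any string found under a `password` key, at any depth; a lodash-free sketch behaves the same way:

```js
// recursively replace every string stored under a `password` key
function replaceSensitiveValues(value, replacement) {
  function helper(value, name) {
    if (name === 'password' && typeof value === 'string') {
      return replacement
    }
    if (typeof value !== 'object' || value === null) {
      return value
    }
    return Array.isArray(value)
      ? value.map(helper)
      : Object.fromEntries(Object.entries(value).map(([key, child]) => [key, helper(child, key)]))
  }
  return helper(value)
}

replaceSensitiveValues({ auth: { user: 'root', password: 'hunter2' } }, '***')
// → { auth: { user: 'root', password: '***' } }
```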

View File

@@ -0,0 +1,130 @@
/* eslint-disable no-console */
import blocked from 'blocked'
import createDebug from 'debug'
import filter from 'lodash/filter.js'
import find from 'lodash/find.js'
import L from 'lodash'
import minimist from 'minimist'
import pw from 'pw'
import { asCallback, fromCallback, fromEvent } from 'promise-toolbox'
import { diff } from 'jest-diff'
import { getBoundPropertyDescriptor } from 'bind-property-descriptor'
import { start as createRepl } from 'repl'
// ===================================================================
function askPassword(prompt = 'Password: ') {
if (prompt) {
process.stdout.write(prompt)
}
return new Promise(resolve => {
pw(resolve)
})
}
const { getPrototypeOf, ownKeys } = Reflect
function getAllBoundDescriptors(object) {
const descriptors = { __proto__: null }
let current = object
do {
ownKeys(current).forEach(key => {
if (!(key in descriptors)) {
descriptors[key] = getBoundPropertyDescriptor(current, key, object)
}
})
} while ((current = getPrototypeOf(current)) !== null)
return descriptors
}
// ===================================================================
const usage = 'Usage: xen-api <url> [<user> [<password>]]'
export async function main(createClient) {
const opts = minimist(process.argv.slice(2), {
string: ['proxy', 'session-id', 'transport'],
boolean: ['allow-unauthorized', 'help', 'read-only', 'verbose'],
alias: {
'allow-unauthorized': 'au',
debounce: 'd',
help: 'h',
proxy: 'p',
'read-only': 'ro',
verbose: 'v',
transport: 't',
},
})
if (opts.help) {
return usage
}
if (opts.verbose) {
// Does not work perfectly.
//
// https://github.com/visionmedia/debug/pull/156
createDebug.enable('xen-api,xen-api:*')
}
let auth
if (opts._.length > 1) {
const [, user, password = await askPassword()] = opts._
auth = { user, password }
} else if (opts['session-id'] !== undefined) {
auth = { sessionId: opts['session-id'] }
}
{
const debug = createDebug('xen-api:perf')
blocked(ms => {
debug('blocked for %sms', ms | 0)
})
}
const xapi = createClient({
url: opts._[0],
allowUnauthorized: opts.au,
auth,
debounce: opts.debounce != null ? +opts.debounce : null,
httpProxy: opts.proxy,
readOnly: opts.ro,
syncStackTraces: true,
transport: opts.transport || undefined,
})
await xapi.connect()
const repl = createRepl({
prompt: `${xapi._humanId}> `,
})
{
const ctx = repl.context
ctx.xapi = xapi
ctx.diff = (a, b) => console.log('%s', diff(a, b))
ctx.find = predicate => find(xapi.objects.all, predicate)
ctx.findAll = predicate => filter(xapi.objects.all, predicate)
ctx.L = L
Object.defineProperties(ctx, getAllBoundDescriptors(xapi))
}
// Make the REPL wait for promise completion.
repl.eval = (evaluate => (cmd, context, filename, cb) => {
asCallback.call(
fromCallback(cb => {
evaluate.call(repl, cmd, context, filename, cb)
}).then(value => (Array.isArray(value) ? Promise.all(value) : value)),
cb
)
})(repl.eval)
await fromEvent(repl, 'exit')
try {
await xapi.disconnect()
} catch (error) {}
}
/* eslint-enable no-console */

6
packages/xen-api/cli.mjs Executable file
View File

@@ -0,0 +1,6 @@
#!/usr/bin/env node
import { createClient } from './index.mjs'
import { main } from './cli-lib.mjs'
main(createClient).catch(console.error.bind(console, 'FATAL'))

View File

@@ -1,115 +0,0 @@
const EVENT_TIMEOUT = 60e3
export class Watcher {
#abortController
#typeWatchers = new Map()
classes = new Map()
xapi
constructor(xapi) {
this.xapi = xapi
}
async asyncCall(method, params, { signal }) {
const taskRef = await this.xapi.call('Async.' + method, params, { signal })
return new Promise((resolve, reject) => {
const stop = this.watch(
'task',
taskRef,
task => {
const { status } = task
if (status === 'success') {
stop()
resolve(task.status)
} else if (status === 'cancelled' || status === 'failure') {
stop()
reject(task.error_info)
}
},
{ signal }
)
})
}
async #start() {
const { xapi } = this
const { signal } = this.#abortController
const watchers = this.#typeWatchers
let token = await xapi.call('event.inject', 'pool', xapi.pool.$ref)
while (true) {
signal.throwIfRequested()
const result = await xapi.call({ signal }, 'event.from', this.classes, token, EVENT_TIMEOUT)
for (const event of result.events) {
}
}
this.#abortController = undefined
}
start() {
if (this.#abortController !== undefined) {
throw new Error('already started')
}
this.#abortController = new AbortController()
this.#start()
}
stop() {
if (this.#abortController === undefined) {
throw new Error('already stopped')
}
this.#abortController.abort()
}
}
export class Cache {
// contains records indexed by type + ref
//
// plain records when retrieved by events
//
// promises to record when retrieved by a get_record call (might be a rejection if the record does not exist)
#recordCache = new Map()
#watcher
constructor(watcher) {
this.#watcher = watcher
}
async #get(type, ref) {
let record
try {
record = await this.#watcher.xapi.call(`${type}.get_record`, ref)
} catch (error) {
if (error.code !== 'HANDLE_INVALID') {
throw error
}
record = Promise.reject(error)
}
this.#recordCache.set(type, Promise.resolve(record))
return record
}
async get(type, ref) {
const cache = this.#recordCache
const key = type + ref
let record = cache.get(key)
if (record === undefined) {
record = this.#get(type, ref)
cache.set(key, record)
}
return record
}
async getByUuid(type, uuid) {
return this.get(type, await this.#watcher.xapi.call(`${type}.get_by_uuid`, uuid))
}
}
exports.Cache = Cache

View File

@@ -0,0 +1,5 @@
'use strict'
module.exports = {
ignorePatterns: ['*'],
}

View File

@@ -0,0 +1,3 @@
if (process.env.DEBUG === undefined) {
process.env.DEBUG = 'xen-api'
}

View File

@@ -0,0 +1,67 @@
#!/usr/bin/env node
import './env.mjs'
import createProgress from 'progress-stream'
import createTop from 'process-top'
import getopts from 'getopts'
import { defer } from 'golike-defer'
import { CancelToken } from 'promise-toolbox'
import { createClient } from '../index.mjs'
import { createOutputStream, formatProgress, pipeline, resolveRecord, throttle } from './utils.mjs'
defer(async ($defer, rawArgs) => {
const {
raw,
throttle: bps,
_: args,
} = getopts(rawArgs, {
boolean: 'raw',
alias: {
raw: 'r',
throttle: 't',
},
})
if (args.length < 2) {
return console.log('Usage: export-vdi [--raw] <XS URL> <VDI identifier> [<VHD file>]')
}
const xapi = createClient({
allowUnauthorized: true,
url: args[0],
watchEvents: false,
})
await xapi.connect()
$defer(() => xapi.disconnect())
const { cancel, token } = CancelToken.source()
process.on('SIGINT', cancel)
const vdi = await resolveRecord(xapi, 'VDI', args[1])
// https://xapi-project.github.io/xen-api/snapshots.html#downloading-a-disk-or-snapshot
const exportStream = await xapi.getResource(token, '/export_raw_vdi/', {
query: {
format: raw ? 'raw' : 'vhd',
vdi: vdi.$ref,
},
})
console.warn('Export task:', exportStream.headers['task-id'])
const top = createTop()
const progressStream = createProgress()
$defer(
clearInterval,
setInterval(() => {
console.warn('\r %s | %s', top.toString(), formatProgress(progressStream.progress()))
}, 1e3)
)
await pipeline(exportStream, progressStream, throttle(bps), createOutputStream(args[2]))
})(process.argv.slice(2)).catch(console.error.bind(console, 'error'))

View File

@@ -0,0 +1,54 @@
#!/usr/bin/env node
import './env.mjs'
import createProgress from 'progress-stream'
import getopts from 'getopts'
import { defer } from 'golike-defer'
import { CancelToken } from 'promise-toolbox'
import { createClient } from '../index.mjs'
import { createOutputStream, formatProgress, pipeline, resolveRecord } from './utils.mjs'
defer(async ($defer, rawArgs) => {
const {
gzip,
zstd,
_: args,
} = getopts(rawArgs, {
boolean: ['gzip', 'zstd'],
})
if (args.length < 2) {
return console.log('Usage: export-vm <XS URL> <VM identifier> [<XVA file>]')
}
const xapi = createClient({
allowUnauthorized: true,
url: args[0],
watchEvents: false,
})
await xapi.connect()
$defer(() => xapi.disconnect())
const { cancel, token } = CancelToken.source()
process.on('SIGINT', cancel)
// https://xapi-project.github.io/xen-api/importexport.html
const exportStream = await xapi.getResource(token, '/export/', {
query: {
ref: (await resolveRecord(xapi, 'VM', args[1])).$ref,
use_compression: zstd ? 'zstd' : gzip ? 'true' : 'false',
},
})
console.warn('Export task:', exportStream.headers['task-id'])
await pipeline(
exportStream,
createProgress({ time: 1e3 }, p => console.warn(formatProgress(p))),
createOutputStream(args[2])
)
})(process.argv.slice(2)).catch(console.error.bind(console, 'error'))

View File

@@ -0,0 +1,88 @@
#!/usr/bin/env node
import './env.mjs'
import getopts from 'getopts'
import { defer } from 'golike-defer'
import { CancelToken } from 'promise-toolbox'
import { createVhdStreamWithLength } from 'vhd-lib'
import { createClient } from '../index.mjs'
import { createInputStream, resolveRef } from './utils.mjs'
defer(async ($defer, argv) => {
const opts = getopts(argv, { boolean: ['events', 'raw', 'remove-length'], string: ['sr', 'vdi'] })
const url = opts._[0]
if (url === undefined) {
return console.log(
'Usage: import-vdi [--events] [--raw] [--sr <SR identifier>] [--vdi <VDI identifier>] <XS URL> [<VHD file>]'
)
}
const { raw, sr, vdi } = opts
const createVdi = vdi === ''
if (createVdi) {
if (sr === '') {
throw 'requires either --vdi or --sr'
}
if (!raw) {
throw 'creating a VDI requires --raw'
}
} else if (sr !== '') {
throw '--vdi and --sr are mutually exclusive'
}
const xapi = createClient({
allowUnauthorized: true,
url,
watchEvents: opts.events && ['task'],
})
await xapi.connect()
$defer(() => xapi.disconnect())
const { cancel, token } = CancelToken.source()
process.on('SIGINT', cancel)
let input = createInputStream(opts._[1])
$defer.onFailure(() => input.destroy())
let vdiRef
if (createVdi) {
vdiRef = await xapi.call('VDI.create', {
name_label: 'xen-api/import-vdi',
other_config: {},
read_only: false,
sharable: false,
SR: await resolveRef(xapi, 'SR', sr),
type: 'user',
virtual_size: input.length,
})
$defer.onFailure(() => xapi.call('VDI.destroy', vdiRef))
} else {
vdiRef = await resolveRef(xapi, 'VDI', vdi)
}
if (opts['remove-length']) {
delete input.length
console.log('length removed')
} else if (!raw && input.length === undefined) {
input = await createVhdStreamWithLength(input)
}
// https://xapi-project.github.io/xen-api/snapshots.html#uploading-a-disk-or-snapshot
const result = await xapi.putResource(token, input, '/import_raw_vdi/', {
query: {
format: raw ? 'raw' : 'vhd',
vdi: vdiRef,
},
})
if (result !== undefined) {
console.log(result)
}
})(process.argv.slice(2)).catch(console.error.bind(console, 'Fatal:'))

View File

@@ -0,0 +1,33 @@
#!/usr/bin/env node
import './env.mjs'
import { defer } from 'golike-defer'
import { CancelToken } from 'promise-toolbox'
import { createClient } from '../index.mjs'
import { createInputStream, resolveRef } from './utils.mjs'
defer(async ($defer, args) => {
if (args.length < 1) {
return console.log('Usage: import-vm <XS URL> [<XVA file>] [<SR identifier>]')
}
const xapi = createClient({
allowUnauthorized: true,
url: args[0],
watchEvents: false,
})
await xapi.connect()
$defer(() => xapi.disconnect())
const { cancel, token } = CancelToken.source()
process.on('SIGINT', cancel)
// https://xapi-project.github.io/xen-api/importexport.html
await xapi.putResource(token, createInputStream(args[1]), '/import/', {
query: args[2] && { sr_id: await resolveRef(xapi, 'SR', args[2]) },
})
})(process.argv.slice(2)).catch(console.error.bind(console, 'error'))


@@ -0,0 +1,59 @@
#!/usr/bin/env node

import 'source-map-support/register.js'
import forEach from 'lodash/forEach.js'
import size from 'lodash/size.js'

import { createClient } from '../index.mjs'

// ===================================================================

if (process.argv.length < 3) {
  throw new Error('Usage: log-events <XS URL>')
}

// ===================================================================
// Creation

const xapi = createClient({
  allowUnauthorized: true,
  url: process.argv[2],
})

// ===================================================================
// Method call

xapi.connect().then(() => {
  xapi
    .call('VM.get_all_records')
    .then(function (vms) {
      console.log('%s VMs fetched', size(vms))
    })
    .catch(function (error) {
      console.error(error)
    })
})

// ===================================================================
// Objects

const objects = xapi.objects

objects.on('add', objects => {
  forEach(objects, object => {
    console.log('+ %s: %s', object.$type, object.$id)
  })
})

objects.on('update', objects => {
  forEach(objects, object => {
    console.log('± %s: %s', object.$type, object.$id)
  })
})

objects.on('remove', objects => {
  forEach(objects, (value, id) => {
    console.log('- %s', id)
  })
})
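The `add`/`update`/`remove` events consumed above come from the client's objects cache. A minimal, hypothetical sketch of such a cache built on Node's `EventEmitter` (not the actual xen-api implementation) shows the event flow:

```javascript
import { EventEmitter } from 'node:events'

// Minimal, hypothetical objects cache emitting the same add/update/remove
// events as xapi.objects above (not the actual xen-api implementation).
class ObjectsCache extends EventEmitter {
  #objects = new Map()

  upsert(id, object) {
    const event = this.#objects.has(id) ? 'update' : 'add'
    this.#objects.set(id, object)
    this.emit(event, { [id]: object })
  }

  remove(id) {
    const object = this.#objects.get(id)
    this.#objects.delete(id)
    this.emit('remove', { [id]: object })
  }
}

const objects = new ObjectsCache()
objects.on('add', objs => console.log('+', Object.keys(objs).join()))
objects.on('update', objs => console.log('±', Object.keys(objs).join()))
objects.on('remove', objs => console.log('-', Object.keys(objs).join()))

objects.upsert('vm-1', { $type: 'VM' }) // + vm-1
objects.upsert('vm-1', { $type: 'VM' }) // ± vm-1
objects.remove('vm-1') // - vm-1
```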

packages/xen-api/examples/package-lock.json (generated file, 2647 lines) — diff suppressed because it is too large.

@@ -0,0 +1,15 @@
{
  "dependencies": {
    "getopts": "^2.2.3",
    "golike-defer": "^0.5.1",
    "human-format": "^0.11.0",
    "lodash": "^4.17.21",
    "process-top": "^1.2.0",
    "progress-stream": "^2.0.0",
    "promise-toolbox": "^0.19.2",
    "readable-stream": "^4.4.2",
    "source-map-support": "^0.5.21",
    "throttle": "^1.0.3",
    "vhd-lib": "^4.7.0"
  }
}


@@ -0,0 +1,75 @@
import { createReadStream, createWriteStream, statSync } from 'fs'
import { fromCallback } from 'promise-toolbox'
import { PassThrough, pipeline as Pipeline } from 'readable-stream'
import humanFormat from 'human-format'
import Throttle from 'throttle'

import Ref from '../_Ref.mjs'

export const createInputStream = path => {
  if (path === undefined || path === '-') {
    return process.stdin
  }

  const { size } = statSync(path)
  const stream = createReadStream(path)
  stream.length = size
  return stream
}

export const createOutputStream = path => {
  if (path !== undefined && path !== '-') {
    return createWriteStream(path)
  }

  // introduce a through stream because stdout is not a normal stream!
  const stream = new PassThrough()
  stream.pipe(process.stdout)
  return stream
}

const formatSizeOpts = { scale: 'binary', unit: 'B' }
const formatSize = bytes => humanFormat(bytes, formatSizeOpts)

export const formatProgress = p => {
  return [
    formatSize(p.transferred),
    ' / ',
    formatSize(p.length),
    ' | ',
    p.runtime,
    's / ',
    p.eta,
    's | ',
    formatSize(p.speed),
    '/s',
  ].join('')
}

export const pipeline = (...streams) => {
  return fromCallback(cb => {
    streams = streams.filter(_ => _ != null)
    streams.push(cb)
    Pipeline.apply(undefined, streams)
  })
}

const resolveRef = (xapi, type, refOrUuidOrNameLabel) =>
  Ref.is(refOrUuidOrNameLabel)
    ? refOrUuidOrNameLabel
    : xapi.call(`${type}.get_by_uuid`, refOrUuidOrNameLabel).catch(() =>
        xapi.call(`${type}.get_by_name_label`, refOrUuidOrNameLabel).then(refs => {
          if (refs.length === 1) {
            return refs[0]
          }
          throw new Error(`no single match for ${type} with name label ${refOrUuidOrNameLabel}`)
        })
      )

export const resolveRecord = async (xapi, type, refOrUuidOrNameLabel) =>
  xapi.getRecord(type, await resolveRef(xapi, type, refOrUuidOrNameLabel))

export { resolveRef }

export const throttle = opts => (opts != null ? new Throttle(opts) : undefined)
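`resolveRef` above accepts an opaque ref, a UUID, or a unique name label, trying each in turn. The fallback chain can be exercised against a mock client; the `isRef` check, SR data, and `mockXapi` below are hypothetical stand-ins (the real helper uses `Ref.is` and a live XAPI connection):

```javascript
// Hypothetical mock exercising the resolveRef fallback chain above.
const isRef = value => typeof value === 'string' && value.startsWith('OpaqueRef:')

const mockXapi = {
  async call(method, arg) {
    if (method === 'SR.get_by_name_label') {
      return arg === 'Local storage' ? ['OpaqueRef:sr1'] : []
    }
    throw new Error('UUID_INVALID') // get_by_uuid fails for non-UUIDs
  },
}

const resolveRef = (xapi, type, value) =>
  isRef(value)
    ? value
    : xapi.call(`${type}.get_by_uuid`, value).catch(() =>
        xapi.call(`${type}.get_by_name_label`, value).then(refs => {
          if (refs.length === 1) {
            return refs[0]
          }
          throw new Error(`no single match for ${type} with name label ${value}`)
        })
      )

console.log(await resolveRef(mockXapi, 'SR', 'OpaqueRef:sr1')) // already a ref
console.log(await resolveRef(mockXapi, 'SR', 'Local storage')) // via name label
```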

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.