Compare commits

..

52 Commits

Author SHA1 Message Date
Julien Fontanet
a4a879ad44 WiP 2024-01-11 09:57:42 +01:00
Julien Fontanet
beba6f7e8d chore: format with Prettier 2024-01-11 09:57:28 +01:00
Julien Fontanet
9388b5500c chore(xo-server/signin): remove empty div 2024-01-10 17:17:42 +01:00
Julien Fontanet
bae8ad25e9 feat(xo-web/tasks): hide /rrd_updates by default
After an internal discussion with @Darkbeldin and @olivierlambert.
2024-01-10 16:50:03 +01:00
Julien Fontanet
c96b29fe96 docs(troubleshooting): explicit sudo with xo-server-recover-account 2024-01-10 16:48:34 +01:00
Julien Fontanet
9888013aff feat(xo-server/rest-api): add pool action emergency_shutdown
Fixes #7277
2024-01-10 15:55:14 +01:00
Julien Fontanet
0bbb0c289d feat(xapi/pool_emergencyShutdown): new method
Related to #7277
2024-01-10 15:55:14 +01:00
Julien Fontanet
80097ea777 fix(backups/RestoreMetadataBackup): fix data path resolution
Introduced by ad46bde30

Fixes https://xcp-ng.org/forum/post/68999
2024-01-10 15:39:41 +01:00
Julien Fontanet
be452a5d63 fix(xo-web/jobs/new): reset params on method change
Fixes https://xcp-ng.org/forum/post/69299
2024-01-10 14:05:02 +01:00
Julien Fontanet
bcc0452646 feat(CODE_OF_CONDUCT): update to Contributor Covenant 2.1 2024-01-09 16:36:29 +01:00
Julien Fontanet
9d9691c5a3 fix(xen-api/setFieldEntry): avoid unnecessary MAP_DUPLICATE_KEY error
Fixes https://xcp-ng.org/forum/post/68761
2024-01-09 15:10:37 +01:00
Julien Fontanet
e56edc70d5 feat(xo-cli): 0.24.0 2024-01-09 14:29:24 +01:00
Julien Fontanet
d7f4d0f5e0 feat(xo-cli rest get): support NDJSON responses
Fixes https://xcp-ng.org/forum/post/69326
2024-01-09 14:24:48 +01:00
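NDJSON (newline-delimited JSON) carries one self-contained JSON document per line, which lets a client process a large response incrementally. A minimal parsing sketch (illustrative only, not xo-cli's actual code):

```javascript
// NDJSON body: one JSON document per line, usually with a trailing newline.
const body = '{"id":"vm1","power_state":"Running"}\n{"id":"vm2","power_state":"Halted"}\n'

const records = body
  .split('\n')
  .filter(line => line !== '') // drop the empty fragment after the final newline
  .map(line => JSON.parse(line))

console.log(records.length) // 2
```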
Julien Fontanet
8c24dd1732 fix(xapi/host_smartReboot): disable the host before fetching resident VMs
Otherwise it might lead to a race condition where new VMs appear on the
host but are ignored by this method.
2024-01-08 17:11:21 +01:00
Julien Fontanet
575a423edf fix(xapi/host_smartReboot): resume VMs even if host was originally disabled
The host will always be enabled after this method anyway.
2024-01-08 17:09:53 +01:00
Julien Fontanet
e311860bb5 fix(xapi/host/waitAgentRestart): wait for enabled status 2024-01-08 17:05:30 +01:00
Julien Fontanet
e6289ebc16 docs(rest-api): update TOC 2024-01-08 16:15:32 +01:00
Julien Fontanet
013e20aa0f docs(rest-api): task monitoring 2024-01-08 16:14:20 +01:00
Julien Fontanet
45a0a83fa4 chore(CHANGELOG.unreleased): sort packages 2024-01-08 14:46:17 +01:00
Guillaume de Lafond
ae518399fa docs(configuration): useForwardedHeaders (#7289) 2024-01-08 11:35:24 +01:00
Ronan Abhamon
d949112921 fix(load-balancer): bad comparison to evaluate migration in perf plan (#7288)
Memory usage was compared to CPU usage when evaluating VM migrations in the performance plan context.

This condition could cause unwanted migrations.
2024-01-08 11:25:40 +01:00
Manon Mercier
bb19afc45c Update backups.md (#7283)
This change follows a discussion with Marc Pezin and Yannick on Mattermost.

As Yannick pointed out, the doc referred to a remote, while there is no such option in the XO GUI.
2024-01-06 15:25:51 +01:00
Julien Fontanet
7780cb176a fix(backups/_MixinXapiWriter#healthCheck): add_tag → add_tags
Fixes https://xcp-ng.org/forum/post/69156

Introduced by a5acc7d26
2024-01-06 15:16:51 +01:00
Julien Fontanet
74ff64dfb4 fix(xo-server/collection/redis#_extract): properly ignore missing entries
Introduced by d8280087a

Fixes #7281
2024-01-05 13:53:46 +01:00
Julien Fontanet
9be3c40ead feat(xo-server/collection/redis#_get): return undefined if missing
Related to #7281
2024-01-05 13:52:45 +01:00
OlivierFL
0f00c7e393 fix(lite): typings errors when running yarn type-check (#7278) 2024-01-04 11:33:30 +01:00
OlivierFL
95492f6f89 fix(xo-web/menu): don't subscribe to proxies if not admin (#7249) 2024-01-04 11:05:53 +01:00
Olivier Floch
046fa7282b feat(xo-web): open github issue url with query params when clicking on bug report button 2024-01-04 11:04:27 +01:00
Olivier Floch
6cd99c39f4 feat(github): add github issue form template 2024-01-04 11:04:27 +01:00
Julien Fontanet
48c3a65cc6 fix(xo-server): VM_import() returns ref, not record
Introduced by 70b0983
2024-01-04 09:36:42 +01:00
OlivierFL
8b0b2d7c31 fix(xo-web/menu): don't subscribe to unhealthy vdi chains if not admin (#7265) 2024-01-03 18:11:34 +01:00
Julien Fontanet
d8280087a4 fix(xo-server/collection/redis#_extract): don't ignore empty records 2024-01-03 17:11:50 +01:00
Julien Fontanet
c14261a0bc fix(xo-cli): close connection on sign in error
Otherwise the CLI does not stop.
2024-01-03 17:06:29 +01:00
Julien Fontanet
3d6defca37 fix(xo-server/emergencyShutdownhost): disable host first 2024-01-03 16:26:07 +01:00
Julien Fontanet
d062a5175a chore(xo-server/emergencyShutdownhost): unnecessary var 2024-01-03 16:25:12 +01:00
Julien Fontanet
f218874c4b fix(xo-server/_createProxyVm): {this → _app}.getObject
Fixes zammad#20646

Introduced by 70b0983
2024-01-03 10:58:19 +01:00
Julien Fontanet
b1e879ca2f feat: release 5.90.0 2023-12-29 11:03:07 +01:00
Julien Fontanet
c5010c2caa feat(xo-web): 5.133.0 2023-12-29 10:48:02 +01:00
Julien Fontanet
2c40b99d8b feat(xo-web): scoped tags (#7270)
Based on #7258 developed by @fbeauchamp.

- use inline blocks to respect all paddings/margins
- main settings are kept in easily modifiable variables
- text color is either black or white, based on the background color luminance
- make sure tags and surrounding action buttons are aligned
- always display the value in black on white
- the delete button uses the tag color if it is dark, otherwise black
- the Tag component accepts a color param
2023-12-28 23:10:35 +01:00
Mathieu
0d127f2b92 fix(lite): fix changelog entry (#7269) 2023-12-28 15:56:21 +01:00
Mathieu
0464886e80 feat(lite): 0.1.7 (#7268) 2023-12-28 15:49:09 +01:00
Mathieu
d655a3e222 feat: technical release (#7266) 2023-12-27 16:07:51 +01:00
b-Nollet
579f0b91d5 feat(xo-web,xo-server): restart VM to change memory (#7244)
Fixes #7069

Add a modal to restart the VM when increasing its memory.
2023-12-26 23:46:43 +01:00
Florent BEAUCHAMP
72b1878254 fix(vhd-lib/createStreamNbd): skip original table offset before overwriting (#7264)
Introduced by fc1357db93
2023-12-26 22:29:24 +01:00
MlssFrncJrg
74dd4c8db7 feat(lite/nav): display VM count in host when menu is minimized (#7185) 2023-12-26 13:30:23 +01:00
mathieuRA
ef4ecce572 feat(xo-server/PIF): add XO tasks for PIF.reconfigureIp 2023-12-26 11:22:35 +01:00
mathieuRA
1becccffbc feat(xo-web/host/network): display and edit the IPv6 PIF field 2023-12-26 11:22:35 +01:00
mathieuRA
b95b1622b1 feat(xo-server/PIF): PIF.reconfigureIp handle IPv6 2023-12-26 11:22:35 +01:00
Manon Mercier
36d6e3779d docs: XenServer → XCP-ng/XenServer (#7255)
I would like to replace every "XenServer" I find in the doc with "XCP-ng/XenServer".

This follows an internal conversation we had with Olivier and Yann.
2023-12-26 11:21:16 +01:00
Pierre Donias
b0e000328d feat(lite): XOA quick deploy (#7245) 2023-12-22 15:58:54 +01:00
Pierre Donias
cc080ec681 feat: technical release (#7259) 2023-12-22 15:05:17 +01:00
Julien Fontanet
0d4cf48410 feat(xo-cli rest): explicit error if not registered
Fixes https://xcp-ng.org/forum/post/68698
2023-12-22 11:33:08 +01:00
116 changed files with 1930 additions and 661 deletions


@@ -1,48 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'status: triaging :triangular_flag_on_post:, type: bug :bug:'
assignees: ''
---
1. ⚠️ **If you don't follow this template, the issue will be closed**.
2. ⚠️ **If your issue can't be easily reproduced, please report it [on the forum first](https://xcp-ng.org/forum/category/12/xen-orchestra)**.
Are you using XOA or XO from the sources?
If XOA:
- which release channel? (`stable` vs `latest`)
- please consider creating a support ticket in [your dedicated support area](https://xen-orchestra.com/#!/member/support)
If XO from the sources:
- Provide **your commit number**. If it's older than a week, we won't investigate
- Don't forget to [read this first](https://xen-orchestra.com/docs/community.html)
- As well as follow [this guide](https://xen-orchestra.com/docs/community.html#report-a-bug)
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please provide the following information):**
- Node: [e.g. 16.12.1]
- hypervisor: [e.g. XCP-ng 8.2.0]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug_report.yml (new file, 119 lines)

@@ -0,0 +1,119 @@
name: Bug Report
description: Create a report to help us improve
labels: ['type: bug :bug:', 'status: triaging :triangular_flag_on_post:']
body:
  - type: markdown
    attributes:
      value: |
        1. ⚠️ **If you don't follow this template, the issue will be closed**.
        2. ⚠️ **If your issue can't be easily reproduced, please report it [on the forum first](https://xcp-ng.org/forum/category/12/xen-orchestra)**.
  - type: markdown
    attributes:
      value: '## Are you using XOA or XO from the sources?'
  - type: dropdown
    id: xo-origin
    attributes:
      label: Are you using XOA or XO from the sources?
      options:
        - XOA
        - XO from the sources
        - both
    validations:
      required: false
  - type: markdown
    attributes:
      value: '### If XOA:'
  - type: dropdown
    id: xoa-channel
    attributes:
      label: Which release channel?
      description: please consider creating a support ticket in [your dedicated support area](https://xen-orchestra.com/#!/member/support)
      options:
        - stable
        - latest
        - both
    validations:
      required: false
  - type: markdown
    attributes:
      value: '### If XO from the sources:'
  - type: markdown
    attributes:
      value: |
        - Don't forget to [read this first](https://xen-orchestra.com/docs/community.html)
        - As well as follow [this guide](https://xen-orchestra.com/docs/community.html#report-a-bug)
  - type: input
    id: xo-sources-commit-number
    attributes:
      label: Provide your commit number
      description: If it's older than a week, we won't investigate
      placeholder: e.g. 579f0
    validations:
      required: false
  - type: markdown
    attributes:
      value: '## Bug description:'
  - type: textarea
    id: bug-description
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is
    validations:
      required: true
  - type: textarea
    id: error-message
    attributes:
      label: Error message
      render: Markdown
    validations:
      required: false
  - type: textarea
    id: steps
    attributes:
      label: To reproduce
      description: 'Steps to reproduce the behavior:'
      value: |
        1. Go to '...'
        2. Click on '...'
        3. Scroll down to '...'
        4. See error
    validations:
      required: false
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen
    validations:
      required: false
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots
      description: If applicable, add screenshots to help explain your problem
    validations:
      required: false
  - type: markdown
    attributes:
      value: '## Environment (please provide the following information):'
  - type: input
    id: node-version
    attributes:
      label: Node
      placeholder: e.g. 16.12.1
    validations:
      required: true
  - type: input
    id: hypervisor-version
    attributes:
      label: Hypervisor
      placeholder: e.g. XCP-ng 8.2.0
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here
    validations:
      required: false


@@ -22,7 +22,7 @@
"fuse-native": "^2.2.6",
"lru-cache": "^7.14.0",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.8.0"
"vhd-lib": "^4.9.0"
},
"scripts": {
"postversion": "npm publish --access public"


@@ -41,9 +41,7 @@ export default class MultiNbdClient {
}
if (connectedClients.length < this.#clients.length) {
warn(
`incomplete connection by multi Nbd, only ${connectedClients.length} over ${
this.#clients.length
} expected clients`
`incomplete connection by multi Nbd, only ${connectedClients.length} over ${this.#clients.length} expected clients`
)
this.#clients = connectedClients
}
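The collapse above is a pure formatting change: line breaks inside a `${…}` interpolation are expression whitespace, not string content, so both forms produce the same message. A quick check:

```javascript
const count = 3
const expected = 5

// line breaks inside ${…} do not end up in the string…
const multiLine = `only ${
  count
} over ${expected} expected clients`

// …so this single-line form is byte-for-byte identical
const singleLine = `only ${count} over ${expected} expected clients`

console.log(multiLine === singleLine) // true
```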


@@ -1,4 +1,4 @@
import { Task } from './Task.mjs'
import { Task } from '@vates/task'
export class HealthCheckVmBackup {
#restoredVm
@@ -14,7 +14,7 @@ export class HealthCheckVmBackup {
async run() {
return Task.run(
{
name: 'vmstart',
properties: { name: 'vmstart' },
},
async () => {
let restoredVm = this.#restoredVm


@@ -1,8 +1,8 @@
import assert from 'node:assert'
import { Task } from '@vates/task'
import { formatFilenameDate } from './_filenameDate.mjs'
import { importIncrementalVm } from './_incrementalVm.mjs'
import { Task } from './Task.mjs'
import { watchStreamSize } from './_watchStreamSize.mjs'
import { VhdNegative, VhdSynthetic } from 'vhd-lib'
import { decorateClass } from '@vates/decorate-with'
@@ -191,7 +191,7 @@ export class ImportVmBackup {
async #decorateIncrementalVmMetadata() {
const { additionnalVmTag, mapVdisSrs, useDifferentialRestore } = this._importIncrementalVmSettings
const ignoredVdis = new Set(
Object.entries(mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
@@ -240,7 +240,7 @@ export class ImportVmBackup {
return Task.run(
{
name: 'transfer',
properties: { name: 'transfer' },
},
async () => {
const xapi = this._xapi


@@ -21,7 +21,7 @@ export class RestoreMetadataBackup {
})
} else {
const metadata = JSON.parse(await handler.readFile(join(backupId, 'metadata.json')))
const dataFileName = resolve(backupId, metadata.data ?? 'data.json')
const dataFileName = resolve('/', backupId, metadata.data ?? 'data.json').slice(1)
const data = await handler.readFile(dataFileName)
// if data is JSON, send it as a plain string; otherwise, consider the data as binary and encode it
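The fix anchors resolution at `/` and then strips the leading slash, so the data path stays relative to the remote root (with any `..` segments normalized) instead of being resolved against the local working directory. A sketch of the idiom with made-up paths:

```javascript
import { resolve } from 'node:path'

// Without an absolute anchor, resolve() falls back to process.cwd(),
// turning a remote-relative backup id into a local filesystem path.

// Anchoring at '/' forces resolution to use only the given segments;
// slice(1) then drops the artificial leading '/' to get back a
// remote-relative path, with any '..' segments already normalized:
const dataPath = resolve('/', 'backups/123', '../123/data.json').slice(1)

console.log(dataPath) // 'backups/123/data.json'
```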


@@ -1,155 +0,0 @@
import CancelToken from 'promise-toolbox/CancelToken'
import Zone from 'node-zone'
const logAfterEnd = log => {
const error = new Error('task has already ended')
error.log = log
throw error
}
const noop = Function.prototype
const serializeErrors = errors => (Array.isArray(errors) ? errors.map(serializeError) : errors)
// Create a serializable object from an error.
//
// Otherwise some fields might be non-enumerable and missing from logs.
const serializeError = error =>
error instanceof Error
? {
...error, // Copy enumerable properties.
code: error.code,
errors: serializeErrors(error.errors), // supports AggregateError
message: error.message,
name: error.name,
stack: error.stack,
}
: error
const $$task = Symbol('@xen-orchestra/backups/Task')
export class Task {
static get cancelToken() {
const task = Zone.current.data[$$task]
return task !== undefined ? task.#cancelToken : CancelToken.none
}
static run(opts, fn) {
return new this(opts).run(fn, true)
}
static wrapFn(opts, fn) {
// compatibility with @decorateWith
if (typeof fn !== 'function') {
;[fn, opts] = [opts, fn]
}
return function () {
return Task.run(typeof opts === 'function' ? opts.apply(this, arguments) : opts, () => fn.apply(this, arguments))
}
}
#cancelToken
#id = Math.random().toString(36).slice(2)
#onLog
#zone
constructor({ name, data, onLog }) {
let parentCancelToken, parentId
if (onLog === undefined) {
const parent = Zone.current.data[$$task]
if (parent === undefined) {
onLog = noop
} else {
onLog = log => parent.#onLog(log)
parentCancelToken = parent.#cancelToken
parentId = parent.#id
}
}
const zone = Zone.current.fork('@xen-orchestra/backups/Task')
zone.data[$$task] = this
this.#zone = zone
const { cancel, token } = CancelToken.source(parentCancelToken && [parentCancelToken])
this.#cancelToken = token
this.cancel = cancel
this.#onLog = onLog
this.#log('start', {
data,
message: name,
parentId,
})
}
failure(error) {
this.#end('failure', serializeError(error))
}
info(message, data) {
this.#log('info', { data, message })
}
/**
* Run a function in the context of this task
*
* In case of error, the task will be failed.
*
* @typedef Result
* @param {() => Result} fn
* @param {boolean} last - Whether the task should succeed if there is no error
* @returns Result
*/
run(fn, last = false) {
return this.#zone.run(() => {
try {
const result = fn()
let then
if (result != null && typeof (then = result.then) === 'function') {
then.call(result, last && (value => this.success(value)), error => this.failure(error))
} else if (last) {
this.success(result)
}
return result
} catch (error) {
this.failure(error)
throw error
}
})
}
success(value) {
this.#end('success', value)
}
warning(message, data) {
this.#log('warning', { data, message })
}
wrapFn(fn, last) {
const task = this
return function () {
return task.run(() => fn.apply(this, arguments), last)
}
}
#end(status, result) {
this.#log('end', { result, status })
this.#onLog = logAfterEnd
}
#log(event, props) {
this.#onLog({
...props,
event,
taskId: this.#id,
timestamp: Date.now(),
})
}
}
for (const method of ['info', 'warning']) {
Task[method] = (...args) => Zone.current.data[$$task]?.[method](...args)
}


@@ -11,10 +11,10 @@ import { decorateMethodsWith } from '@vates/decorate-with'
import { deduped } from '@vates/disposable/deduped.js'
import { getHandler } from '@xen-orchestra/fs'
import { parseDuration } from '@vates/parse-duration'
import { Task } from '@vates/task'
import { Xapi } from '@xen-orchestra/xapi'
import { RemoteAdapter } from './RemoteAdapter.mjs'
import { Task } from './Task.mjs'
createCachedLookup().patchGlobal()
@@ -154,8 +154,8 @@ process.on('message', async message => {
const result = message.runWithLogs
? await Task.run(
{
name: 'backup run',
onLog: data =>
properties: { name: 'backup run' },
onProgress: data =>
emitMessage({
data,
type: 'log',


@@ -36,32 +36,34 @@ const computeVhdsSize = (handler, vhdPaths) =>
)
// chain is [ ancestor, child_1, ..., child_n ]
async function _mergeVhdChain(handler, chain, { logInfo, remove, mergeBlockConcurrency }) {
logInfo(`merging VHD chain`, { chain })
async function _mergeVhdChain(handler, chain, { logInfo, remove, merge, mergeBlockConcurrency }) {
if (merge) {
logInfo(`merging VHD chain`, { chain })
let done, total
const handle = setInterval(() => {
if (done !== undefined) {
logInfo('merge in progress', {
done,
parent: chain[0],
progress: Math.round((100 * done) / total),
total,
let done, total
const handle = setInterval(() => {
if (done !== undefined) {
logInfo('merge in progress', {
done,
parent: chain[0],
progress: Math.round((100 * done) / total),
total,
})
}
}, 10e3)
try {
return await mergeVhdChain(handler, chain, {
logInfo,
mergeBlockConcurrency,
onProgress({ done: d, total: t }) {
done = d
total = t
},
removeUnused: remove,
})
} finally {
clearInterval(handle)
}
}, 10e3)
try {
return await mergeVhdChain(handler, chain, {
logInfo,
mergeBlockConcurrency,
onProgress({ done: d, total: t }) {
done = d
total = t
},
removeUnused: remove,
})
} finally {
clearInterval(handle)
}
}
@@ -469,20 +471,23 @@ export async function cleanVm(
const metadataWithMergedVhd = {}
const doMerge = async () => {
await asyncMap(toMerge, async chain => {
const { finalVhdSize } = await limitedMergeVhdChain(handler, chain, {
const merged = await limitedMergeVhdChain(handler, chain, {
logInfo,
logWarn,
remove,
merge,
mergeBlockConcurrency,
})
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // all the chain should have the same metadata file
metadataWithMergedVhd[metadataPath] = (metadataWithMergedVhd[metadataPath] ?? 0) + finalVhdSize
if (merged !== undefined) {
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // all the chain should have the same metadata file
metadataWithMergedVhd[metadataPath] = true
}
})
}
await Promise.all([
...unusedVhdsDeletion,
toMerge.length !== 0 && (merge ? Task.run({ name: 'merge' }, doMerge) : () => Promise.resolve()),
toMerge.length !== 0 && (merge ? Task.run({ properties: { name: 'merge' } }, doMerge) : () => Promise.resolve()),
asyncMap(unusedXvas, path => {
logWarn('unused XVA', { path })
if (remove) {
@@ -504,11 +509,12 @@ export async function cleanVm(
// update size for delta metadata with merged VHD
// check for the other that the size is the same as the real file size
await asyncMap(jsons, async metadataPath => {
const metadata = backups.get(metadataPath)
let fileSystemSize
const mergedSize = metadataWithMergedVhd[metadataPath]
const merged = metadataWithMergedVhd[metadataPath] !== undefined
const { mode, size, vhds, xva } = metadata
@@ -518,29 +524,26 @@ export async function cleanVm(
const linkedXva = resolve('/', vmDir, xva)
try {
fileSystemSize = await handler.getSize(linkedXva)
if (fileSystemSize !== size && fileSystemSize !== undefined) {
logWarn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
} catch (error) {
// can fail with encrypted remote
}
} else if (mode === 'delta') {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)
// the size is not computed in some cases (e.g. VhdDirectory)
if (fileSystemSize === undefined) {
return
}
// don't warn if the size has changed after a merge
if (mergedSize === undefined) {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)
// the size is not computed in some cases (e.g. VhdDirectory)
if (fileSystemSize !== undefined && fileSystemSize !== size) {
logWarn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
if (!merged && fileSystemSize !== size) {
// FIXME: figure out why it occurs so often and, once fixed, log the real problems with `logWarn`
console.warn('cleanVm: incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
}
} catch (error) {
@@ -548,19 +551,9 @@ export async function cleanVm(
return
}
// systematically update size and differentials after a merge
// @todo : after 2024-04-01 remove the fixmetadata options since the size computation is fixed
if (mergedSize || (fixMetadata && fileSystemSize !== size)) {
metadata.size = mergedSize ?? fileSystemSize ?? size
if (mergedSize) {
// all disks are now key disk
metadata.isVhdDifferencing = {}
for (const id of Object.values(metadata.vdis ?? {})) {
metadata.isVhdDifferencing[`${id}.vhd`] = false
}
}
// systematically update size after a merge
if ((merged || fixMetadata) && size !== fileSystemSize) {
metadata.size = fileSystemSize
mustRegenerateCache = true
try {
await handler.writeFile(metadataPath, JSON.stringify(metadata), { flags: 'w' })


@@ -6,9 +6,9 @@ import { CancelToken } from 'promise-toolbox'
import { compareVersions } from 'compare-versions'
import { createVhdStreamWithLength } from 'vhd-lib'
import { defer } from 'golike-defer'
import { Task } from '@vates/task'
import { cancelableMap } from './_cancelableMap.mjs'
import { Task } from './Task.mjs'
import pick from 'lodash/pick.js'
// in `other_config` of an incrementally replicated VM, contains the UUID of the source VM


@@ -1,4 +1,5 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { Task } from '@vates/task'
import Disposable from 'promise-toolbox/Disposable'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
@@ -6,7 +7,6 @@ import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { PoolMetadataBackup } from './_PoolMetadataBackup.mjs'
import { XoMetadataBackup } from './_XoMetadataBackup.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
const DEFAULT_METADATA_SETTINGS = {
@@ -14,6 +14,8 @@ const DEFAULT_METADATA_SETTINGS = {
retentionXoMetadata: 0,
}
const noop = Function.prototype
export const Metadata = class MetadataBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
@@ -55,13 +57,16 @@ export const Metadata = class MetadataBackupRunner extends Abstract {
poolIds.map(id =>
this._getRecord('pool', id).catch(error => {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
new Task(
{
name: 'get pool record',
data: { type: 'pool', id },
properties: {
id,
name: 'get pool record',
type: 'pool',
},
},
() => Promise.reject(error)
)
).catch(noop)
})
)
),
@@ -81,11 +86,11 @@ export const Metadata = class MetadataBackupRunner extends Abstract {
if (pools.length !== 0 && settings.retentionPoolMetadata !== 0) {
promises.push(
asyncMap(pools, async pool =>
runTask(
new Task(
{
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
data: {
properties: {
id: pool.$id,
name: `Starting metadata backup for the pool (${pool.$id}). (${job.id})`,
pool,
poolMaster: await ignoreErrors.call(pool.$xapi.getRecord('host', pool.master)),
type: 'pool',
@@ -100,17 +105,17 @@ export const Metadata = class MetadataBackupRunner extends Abstract {
schedule,
settings,
}).run()
)
).catch(noop)
)
)
}
if (job.xoMetadata !== undefined && settings.retentionXoMetadata !== 0) {
promises.push(
runTask(
new Task(
{
name: `Starting XO metadata backup. (${job.id})`,
data: {
properties: {
name: `Starting XO metadata backup. (${job.id})`,
type: 'xo',
},
},
@@ -122,7 +127,7 @@ export const Metadata = class MetadataBackupRunner extends Abstract {
schedule,
settings,
}).run()
)
).catch(noop)
)
}
await Promise.all(promises)


@@ -1,12 +1,11 @@
import { asyncMapSettled } from '@xen-orchestra/async-map'
import Disposable from 'promise-toolbox/Disposable'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { Task } from '@vates/task'
import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { Task } from '../Task.mjs'
import createStreamThrottle from './_createStreamThrottle.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
import { FullRemote } from './_vmRunners/FullRemote.mjs'
import { IncrementalRemote } from './_vmRunners/IncrementalRemote.mjs'
@@ -25,6 +24,8 @@ const DEFAULT_REMOTE_VM_SETTINGS = {
vmTimeout: 0,
}
const noop = Function.prototype
export const VmsRemote = class RemoteVmsBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
@@ -63,7 +64,13 @@ export const VmsRemote = class RemoteVmsBackupRunner extends Abstract {
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
const taskStart = {
properties: {
id: vmUuid,
name: 'backup VM',
type: 'VM',
},
}
const opts = {
baseSettings,
@@ -86,7 +93,7 @@ export const VmsRemote = class RemoteVmsBackupRunner extends Abstract {
throw new Error(`Job mode ${job.mode} not implemented for mirror backup`)
}
return runTask(taskStart, () => vmBackup.run())
return new Task(taskStart, () => vmBackup.run()).catch(noop)
}
const { concurrency } = settings
await asyncMapSettled(vmsUuids, !concurrency ? handleVm : limitConcurrency(concurrency)(handleVm))


@@ -1,12 +1,11 @@
import { asyncMapSettled } from '@xen-orchestra/async-map'
import Disposable from 'promise-toolbox/Disposable'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { Task } from '@vates/task'
import { extractIdsFromSimplePattern } from '../extractIdsFromSimplePattern.mjs'
import { Task } from '../Task.mjs'
import createStreamThrottle from './_createStreamThrottle.mjs'
import { DEFAULT_SETTINGS, Abstract } from './_Abstract.mjs'
import { runTask } from './_runTask.mjs'
import { getAdaptersByRemote } from './_getAdaptersByRemote.mjs'
import { IncrementalXapi } from './_vmRunners/IncrementalXapi.mjs'
import { FullXapi } from './_vmRunners/FullXapi.mjs'
@@ -34,6 +33,8 @@ const DEFAULT_XAPI_VM_SETTINGS = {
vmTimeout: 0,
}
const noop = Function.prototype
export const VmsXapi = class VmsXapiBackupRunner extends Abstract {
_computeBaseSettings(config, job) {
const baseSettings = { ...DEFAULT_SETTINGS }
@@ -57,13 +58,16 @@ export const VmsXapi = class VmsXapiBackupRunner extends Abstract {
Disposable.all(
extractIdsFromSimplePattern(job.srs).map(id =>
this._getRecord('SR', id).catch(error => {
runTask(
new Task(
{
name: 'get SR record',
data: { type: 'SR', id },
properties: {
id,
name: 'get SR record',
type: 'SR',
},
},
() => Promise.reject(error)
)
).catch(noop)
})
)
),
@@ -90,13 +94,19 @@ export const VmsXapi = class VmsXapiBackupRunner extends Abstract {
const baseSettings = this._baseSettings
const handleVm = vmUuid => {
const taskStart = { name: 'backup VM', data: { type: 'VM', id: vmUuid } }
const taskStart = {
properties: {
id: vmUuid,
name: 'backup VM',
type: 'VM',
},
}
return this._getRecord('VM', vmUuid).then(
disposableVm =>
Disposable.use(disposableVm, vm => {
taskStart.data.name_label = vm.name_label
return runTask(taskStart, () => {
return new Task(taskStart, () => {
const opts = {
baseSettings,
config,
@@ -121,12 +131,12 @@ export const VmsXapi = class VmsXapiBackupRunner extends Abstract {
}
}
return vmBackup.run()
})
}).catch(noop)
}),
error =>
runTask(taskStart, () => {
new Task(taskStart, () => {
throw error
})
}).catch(noop)
)
}
const { concurrency } = settings


@@ -1,9 +1,12 @@
import Disposable from 'promise-toolbox/Disposable'
import pTimeout from 'promise-toolbox/timeout'
import { compileTemplate } from '@xen-orchestra/template'
import { runTask } from './_runTask.mjs'
import { Task } from '@vates/task'
import { RemoteTimeoutError } from './_RemoteTimeoutError.mjs'
const noop = Function.prototype
export const DEFAULT_SETTINGS = {
getRemoteTimeout: 300e3,
reportWhen: 'failure',
@@ -36,13 +39,16 @@ export const Abstract = class AbstractRunner {
})
} catch (error) {
// See https://github.com/vatesfr/xen-orchestra/commit/6aa6cfba8ec939c0288f0fa740f6dfad98c43cbb
runTask(
Task.run(
{
name: 'get remote adapter',
data: { type: 'remote', id: remoteId },
properties: {
id: remoteId,
name: 'get remote adapter',
type: 'remote',
},
},
() => Promise.reject(error)
)
).catch(noop)
}
}
}


@@ -1,9 +1,9 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { Task } from '@vates/task'
import { DIR_XO_POOL_METADATA_BACKUPS } from '../RemoteAdapter.mjs'
import { forkStreamUnpipe } from './_forkStreamUnpipe.mjs'
import { formatFilenameDate } from '../_filenameDate.mjs'
import { Task } from '../Task.mjs'
export const PATH_DB_DUMP = '/pool/xmldbdump'
@@ -54,8 +54,8 @@ export class PoolMetadataBackup {
([remoteId, adapter]) =>
Task.run(
{
name: `Starting metadata backup for the pool (${pool.$id}) for the remote (${remoteId}). (${job.id})`,
data: {
properties: {
name: `Starting metadata backup for the pool (${pool.$id}) for the remote (${remoteId}). (${job.id})`,
id: remoteId,
type: 'remote',
},


@@ -1,9 +1,9 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { join } from '@xen-orchestra/fs/path'
import { Task } from '@vates/task'
import { DIR_XO_CONFIG_BACKUPS } from '../RemoteAdapter.mjs'
import { formatFilenameDate } from '../_filenameDate.mjs'
import { Task } from '../Task.mjs'
export class XoMetadataBackup {
constructor({ config, job, remoteAdapters, schedule, settings }) {
@@ -51,8 +51,8 @@ export class XoMetadataBackup {
([remoteId, adapter]) =>
Task.run(
{
name: `Starting XO metadata backup for the remote (${remoteId}). (${job.id})`,
data: {
properties: {
name: `Starting XO metadata backup for the remote (${remoteId}). (${job.id})`,
id: remoteId,
type: 'remote',
},


@@ -1,5 +0,0 @@
import { Task } from '../Task.mjs'
const noop = Function.prototype
export const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs
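With this helper removed, call sites inline the same pattern as `Task.run(...).catch(noop)`. `Function.prototype` works as the no-op because it is itself callable, ignores its arguments, and returns undefined; the rejection can be swallowed because, as the comment notes, errors are already reported through logs. A small demonstration:

```javascript
// Function.prototype is a callable no-op: any arguments, returns undefined.
const noop = Function.prototype
console.log(noop('anything', 42)) // undefined

// swallow a rejection whose error is assumed to be reported elsewhere:
Promise.reject(new Error('already logged')).catch(noop)
```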


@@ -1,10 +1,11 @@
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { Task } from '@vates/task'
import { AbstractRemote } from './_AbstractRemote.mjs'
import { FullRemoteWriter } from '../_writers/FullRemoteWriter.mjs'
import { forkStreamUnpipe } from '../_forkStreamUnpipe.mjs'
import { watchStreamSize } from '../../_watchStreamSize.mjs'
import { Task } from '../../Task.mjs'
export const FullRemote = class FullRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {

View File

@@ -1,6 +1,7 @@
import { asyncEach } from '@vates/async-each'
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { Task } from '@vates/task'
import assert from 'node:assert'
import isVhdDifferencingDisk from 'vhd-lib/isVhdDifferencingDisk.js'
import mapValues from 'lodash/mapValues.js'
@@ -8,7 +9,6 @@ import mapValues from 'lodash/mapValues.js'
import { AbstractRemote } from './_AbstractRemote.mjs'
import { forkDeltaExport } from './_forkDeltaExport.mjs'
import { IncrementalRemoteWriter } from '../_writers/IncrementalRemoteWriter.mjs'
import { Task } from '../../Task.mjs'
class IncrementalRemoteVmBackupRunner extends AbstractRemote {
_getRemoteWriter() {

View File

@@ -2,6 +2,7 @@ import { asyncEach } from '@vates/async-each'
import { asyncMap } from '@xen-orchestra/async-map'
import { createLogger } from '@xen-orchestra/log'
import { pipeline } from 'node:stream'
import { Task } from '@vates/task'
import findLast from 'lodash/findLast.js'
import isVhdDifferencingDisk from 'vhd-lib/isVhdDifferencingDisk.js'
import keyBy from 'lodash/keyBy.js'
@@ -13,7 +14,6 @@ import { exportIncrementalVm } from '../../_incrementalVm.mjs'
import { forkDeltaExport } from './_forkDeltaExport.mjs'
import { IncrementalRemoteWriter } from '../_writers/IncrementalRemoteWriter.mjs'
import { IncrementalXapiWriter } from '../_writers/IncrementalXapiWriter.mjs'
import { Task } from '../../Task.mjs'
import { watchStreamSize } from '../../_watchStreamSize.mjs'
const { debug } = createLogger('xo:backups:IncrementalXapiVmBackup')

View File

@@ -1,6 +1,6 @@
import { asyncMap } from '@xen-orchestra/async-map'
import { createLogger } from '@xen-orchestra/log'
import { Task } from '../../Task.mjs'
import { Task } from '@vates/task'
const { debug, warn } = createLogger('xo:backups:AbstractVmRunner')
@@ -80,7 +80,7 @@ export const Abstract = class AbstractVmBackupRunner {
// create a task so that an entry appears in the logs and reports
return Task.run(
{
name: 'health check',
properties: { name: 'health check' },
},
() => {
Task.info(`This VM doesn't match the health check's tags for this schedule`)

View File

@@ -5,9 +5,9 @@ import { asyncMap } from '@xen-orchestra/async-map'
import { decorateMethodsWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { formatDateTime } from '@xen-orchestra/xapi'
import { Task } from '@vates/task'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
import { Abstract } from './_Abstract.mjs'
export const AbstractXapi = class AbstractXapiVmBackupRunner extends Abstract {
@@ -142,7 +142,7 @@ export const AbstractXapi = class AbstractXapiVmBackupRunner extends Abstract {
const settings = this._settings
if (this._mustDoSnapshot()) {
await Task.run({ name: 'snapshot' }, async () => {
await Task.run({ properties: { name: 'snapshot' } }, async () => {
if (!settings.bypassVdiChainsCheck) {
await vm.$assertHealthyVdiChains()
}

View File

@@ -1,6 +1,7 @@
import { Task } from '@vates/task'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
import { MixinRemoteWriter } from './_MixinRemoteWriter.mjs'
import { AbstractFullWriter } from './_AbstractFullWriter.mjs'
@@ -9,10 +10,10 @@ export class FullRemoteWriter extends MixinRemoteWriter(AbstractFullWriter) {
constructor(props) {
super(props)
this.run = Task.wrapFn(
this.run = Task.wrap(
{
name: 'export',
data: {
properties: {
name: 'export',
id: props.remoteId,
type: 'remote',
@@ -63,7 +64,7 @@ export class FullRemoteWriter extends MixinRemoteWriter(AbstractFullWriter) {
await deleteOldBackups()
}
await Task.run({ name: 'transfer' }, async () => {
await Task.run({ properties: { name: 'transfer' } }, async () => {
await adapter.outputStream(dataFilename, stream, {
maxStreamLength,
streamLength,

View File

@@ -1,10 +1,10 @@
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { asyncMap, asyncMapSettled } from '@xen-orchestra/async-map'
import { formatDateTime } from '@xen-orchestra/xapi'
import { Task } from '@vates/task'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { Task } from '../../Task.mjs'
import { AbstractFullWriter } from './_AbstractFullWriter.mjs'
import { MixinXapiWriter } from './_MixinXapiWriter.mjs'
@@ -14,10 +14,10 @@ export class FullXapiWriter extends MixinXapiWriter(AbstractFullWriter) {
constructor(props) {
super(props)
this.run = Task.wrapFn(
this.run = Task.wrap(
{
name: 'export',
data: {
properties: {
name: 'export',
id: props.sr.uuid,
name_label: this._sr.name_label,
type: 'SR',
@@ -52,7 +52,7 @@ export class FullXapiWriter extends MixinXapiWriter(AbstractFullWriter) {
}
let targetVmRef
await Task.run({ name: 'transfer' }, async () => {
await Task.run({ properties: { name: 'transfer' } }, async () => {
targetVmRef = await xapi.VM_import(stream, sr.$ref, vm =>
Promise.all([
!_warmMigration && vm.add_tags('Disaster Recovery'),

View File

@@ -8,11 +8,11 @@ import { createLogger } from '@xen-orchestra/log'
import { decorateClass } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { dirname } from 'node:path'
import { Task } from '@vates/task'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { TAG_BASE_DELTA } from '../../_incrementalVm.mjs'
import { Task } from '../../Task.mjs'
import { MixinRemoteWriter } from './_MixinRemoteWriter.mjs'
import { AbstractIncrementalWriter } from './_AbstractIncrementalWriter.mjs'
@@ -71,17 +71,17 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
prepare({ isFull }) {
// create the task related to this export and ensure all methods are called in this context
const task = new Task({
name: 'export',
data: {
properties: {
name: 'export',
id: this._remoteId,
isFull,
type: 'remote',
},
})
this.transfer = task.wrapFn(this.transfer)
this.healthCheck = task.wrapFn(this.healthCheck)
this.cleanup = task.wrapFn(this.cleanup)
this.afterBackup = task.wrapFn(this.afterBackup, true)
this.transfer = task.wrapInside(this.transfer)
this.healthCheck = task.wrapInside(this.healthCheck)
this.cleanup = task.wrapInside(this.cleanup)
this.afterBackup = task.wrap(this.afterBackup)
return task.run(() => this._prepare())
}
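The switch above from `task.wrapFn(fn)` to `task.wrapInside(fn)` (with `task.wrap` reserved for the call that ends the task) hinges on running a function in a task's context. A self-contained toy, not the real `@vates/task` implementation (which propagates context asynchronously), sketches the idea:

```javascript
// Toy sketch of "wrapInside": the returned function runs `fn` with this
// task as the ambient current task, so nested work is attributed to it.
// The real @vates/task uses async context propagation; this toy only
// handles synchronous functions.
const taskStack = []

class ToyTask {
  constructor(name) {
    this.name = name
  }

  // the task currently running, if any
  static get current() {
    return taskStack[taskStack.length - 1]
  }

  wrapInside(fn) {
    const task = this
    return function (...args) {
      taskStack.push(task)
      try {
        return fn.apply(this, args)
      } finally {
        taskStack.pop()
      }
    }
  }
}

const task = new ToyTask('export')
const transfer = task.wrapInside(() => `transferring in ${ToyTask.current.name}`)
console.log(transfer()) // → 'transferring in export'
console.log(ToyTask.current) // → undefined (context restored on exit)
```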
@@ -174,7 +174,7 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
vm,
vmSnapshot,
}
const { size } = await Task.run({ name: 'transfer' }, async () => {
const { size } = await Task.run({ properties: { name: 'transfer' } }, async () => {
let transferSize = 0
await asyncEach(
Object.entries(deltaExport.vdis),
@@ -205,7 +205,7 @@ export class IncrementalRemoteWriter extends MixinRemoteWriter(AbstractIncrement
// TODO remove when this has been done before the export
await checkVhd(handler, parentPath)
}
// don't write this as transferSize += await asyncFn()
// since `i += await asyncFn()` leads to a race condition,
// as explained at https://eslint.org/docs/latest/rules/require-atomic-updates
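The hazard that comment warns about can be made concrete. A minimal, self-contained sketch (the `sumRacy`/`sumSafe` helpers are hypothetical, for illustration) of why `i += await asyncFn()` loses updates under concurrency:

```javascript
// Racy: `total += await job()` reads `total` BEFORE the await resolves,
// so every concurrent iteration adds to the same stale value and all but
// the last write is lost (here: 3 instead of 6).
async function sumRacy(jobs) {
  let total = 0
  await Promise.all(
    jobs.map(async job => {
      total += await job()
    })
  )
  return total
}

// Safe: await first, then perform the read-modify-write synchronously.
async function sumSafe(jobs) {
  let total = 0
  await Promise.all(
    jobs.map(async job => {
      const value = await job()
      total += value
    })
  )
  return total
}
```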

View File

@@ -1,11 +1,11 @@
import { asyncMap, asyncMapSettled } from '@xen-orchestra/async-map'
import ignoreErrors from 'promise-toolbox/ignoreErrors'
import { formatDateTime } from '@xen-orchestra/xapi'
import { Task } from '@vates/task'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getOldEntries } from '../../_getOldEntries.mjs'
import { importIncrementalVm, TAG_BACKUP_SR, TAG_BASE_DELTA, TAG_COPY_SRC } from '../../_incrementalVm.mjs'
import { Task } from '../../Task.mjs'
import { AbstractIncrementalWriter } from './_AbstractIncrementalWriter.mjs'
import { MixinXapiWriter } from './_MixinXapiWriter.mjs'
@@ -40,18 +40,21 @@ export class IncrementalXapiWriter extends MixinXapiWriter(AbstractIncrementalWr
prepare({ isFull }) {
// create the task related to this export and ensure all methods are called in this context
const task = new Task({
name: 'export',
data: {
properties: {
name: 'export',
id: this._sr.uuid,
isFull,
name_label: this._sr.name_label,
type: 'SR',
},
})
const hasHealthCheckSr = this._healthCheckSr !== undefined
this.transfer = task.wrapFn(this.transfer)
this.cleanup = task.wrapFn(this.cleanup, !hasHealthCheckSr)
this.healthCheck = task.wrapFn(this.healthCheck, hasHealthCheckSr)
this.transfer = task.wrapInside(this.transfer)
if (this._healthCheckSr !== undefined) {
this.cleanup = task.wrapInside(this.cleanup)
this.healthCheck = task.wrap(this.healthCheck)
} else {
this.cleanup = task.wrap(this.cleanup)
}
return task.run(() => this._prepare())
}
@@ -139,7 +142,7 @@ export class IncrementalXapiWriter extends MixinXapiWriter(AbstractIncrementalWr
const { uuid: srUuid, $xapi: xapi } = sr
let targetVmRef
await Task.run({ name: 'transfer' }, async () => {
await Task.run({ properties: { name: 'transfer' } }, async () => {
targetVmRef = await importIncrementalVm(this.#decorateVmMetadata(deltaExport), sr)
return {
size: Object.values(sizeContainers).reduce((sum, { size }) => sum + size, 0),

View File

@@ -1,12 +1,12 @@
import { createLogger } from '@xen-orchestra/log'
import { join } from 'node:path'
import { Task } from '@vates/task'
import assert from 'node:assert'
import { formatFilenameDate } from '../../_filenameDate.mjs'
import { getVmBackupDir } from '../../_getVmBackupDir.mjs'
import { HealthCheckVmBackup } from '../../HealthCheckVmBackup.mjs'
import { ImportVmBackup } from '../../ImportVmBackup.mjs'
import { Task } from '../../Task.mjs'
import * as MergeWorker from '../../merge-worker/index.mjs'
const { info, warn } = createLogger('xo:backups:MixinBackupWriter')
@@ -26,7 +26,7 @@ export const MixinRemoteWriter = (BaseClass = Object) =>
async _cleanVm(options) {
try {
return await Task.run({ name: 'clean-vm' }, () => {
return await Task.run({ properties: { name: 'clean-vm' } }, () => {
return this._adapter.cleanVm(this._vmBackupDir, {
...options,
fixMetadata: true,
@@ -84,7 +84,7 @@ export const MixinRemoteWriter = (BaseClass = Object) =>
)
return Task.run(
{
name: 'health check',
properties: { name: 'health check' },
},
async () => {
const xapi = sr.$xapi

View File

@@ -1,8 +1,8 @@
import { extractOpaqueRef } from '@xen-orchestra/xapi'
import { Task } from '@vates/task'
import assert from 'node:assert/strict'
import { HealthCheckVmBackup } from '../../HealthCheckVmBackup.mjs'
import { Task } from '../../Task.mjs'
export const MixinXapiWriter = (BaseClass = Object) =>
class MixinXapiWriter extends BaseClass {
@@ -32,7 +32,7 @@ export const MixinXapiWriter = (BaseClass = Object) =>
// copy VM
return Task.run(
{
name: 'health check',
properties: { name: 'health check' },
},
async () => {
const { $xapi: xapi } = sr
@@ -42,7 +42,7 @@ export const MixinXapiWriter = (BaseClass = Object) =>
if (await this.#isAlreadyOnHealthCheckSr(baseVm)) {
healthCheckVmRef = await Task.run(
{ name: 'cloning-vm' },
{ properties: { name: 'cloning-vm' } },
async () =>
await xapi
.callAsync('VM.clone', this._targetVmRef, `Health Check - ${baseVm.name_label}`)
@@ -50,7 +50,7 @@ export const MixinXapiWriter = (BaseClass = Object) =>
)
} else {
healthCheckVmRef = await Task.run(
{ name: 'copying-vm' },
{ properties: { name: 'copying-vm' } },
async () =>
await xapi
.callAsync('VM.copy', this._targetVmRef, `Health Check - ${baseVm.name_label}`, sr.$ref)
@@ -58,7 +58,7 @@ export const MixinXapiWriter = (BaseClass = Object) =>
)
}
const healthCheckVm = xapi.getObject(healthCheckVmRef) ?? (await xapi.waitObject(healthCheckVmRef))
await healthCheckVm.add_tag('xo:no-bak=Health Check')
await healthCheckVm.add_tags('xo:no-bak=Health Check')
await new HealthCheckVmBackup({
restoredVm: healthCheckVm,
xapi,

View File

@@ -27,6 +27,7 @@
"@vates/fuse-vhd": "^2.0.0",
"@vates/nbd-client": "^3.0.0",
"@vates/parse-duration": "^0.1.1",
"@vates/task": "^0.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^4.1.3",
"@xen-orchestra/log": "^0.6.0",
@@ -44,7 +45,7 @@
"proper-lockfile": "^4.1.2",
"tar": "^6.1.15",
"uuid": "^9.0.0",
"vhd-lib": "^4.8.0",
"vhd-lib": "^4.9.0",
"xen-api": "^2.0.0",
"yazl": "^2.5.1"
},

View File

@@ -2,12 +2,18 @@
## **next**
- Fix Typescript typings errors when running `yarn type-check` command (PR [#7278](https://github.com/vatesfr/xen-orchestra/pull/7278))
## **0.1.7** (2023-12-28)
- [VM/Action] Ability to migrate a VM from its view (PR [#7164](https://github.com/vatesfr/xen-orchestra/pull/7164))
- Ability to override host address with `master` URL query param (PR [#7187](https://github.com/vatesfr/xen-orchestra/pull/7187))
- Added tooltip on CPU provisioning warning icon (PR [#7223](https://github.com/vatesfr/xen-orchestra/pull/7223))
- Add indeterminate state on FormToggle component (PR [#7230](https://github.com/vatesfr/xen-orchestra/pull/7230))
- Add new UiStatusPanel component (PR [#7227](https://github.com/vatesfr/xen-orchestra/pull/7227))
- XOA quick deploy (PR [#7245](https://github.com/vatesfr/xen-orchestra/pull/7245))
- Fix infinite loader when no stats on pool dashboard (PR [#7236](https://github.com/vatesfr/xen-orchestra/pull/7236))
- [Tree view] Display VMs count (PR [#7185](https://github.com/vatesfr/xen-orchestra/pull/7185))
## **0.1.6** (2023-11-30)

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/lite",
"version": "0.1.6",
"version": "0.1.7",
"scripts": {
"dev": "GIT_HEAD=$(git rev-parse HEAD) vite",
"build": "run-p type-check build-only",

View File

@@ -21,7 +21,8 @@ a {
}
code,
code * {
code *,
pre {
font-family: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono",
"Courier New", monospace;
}

File diff suppressed because one or more lines are too long

Binary image changed (after: 43 KiB)

View File

@@ -13,6 +13,9 @@
<slot />
<div class="right">
<PoolOverrideWarning as-tooltip />
<UiButton v-if="isDesktop" :icon="faDownload" @click="openXoaDeploy">
{{ $t("deploy-xoa") }}
</UiButton>
<AccountButton />
</div>
</header>
@@ -22,14 +25,20 @@
import AccountButton from "@/components/AccountButton.vue";
import PoolOverrideWarning from "@/components/PoolOverrideWarning.vue";
import TextLogo from "@/components/TextLogo.vue";
import UiButton from "@/components/ui/UiButton.vue";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import { useNavigationStore } from "@/stores/navigation.store";
import { useRouter } from "vue-router";
import { useUiStore } from "@/stores/ui.store";
import { faBars } from "@fortawesome/free-solid-svg-icons";
import { faBars, faDownload } from "@fortawesome/free-solid-svg-icons";
import { storeToRefs } from "pinia";
const router = useRouter();
const openXoaDeploy = () => router.push({ name: "xoa.deploy" });
const uiStore = useUiStore();
const { isMobile } = storeToRefs(uiStore);
const { isMobile, isDesktop } = storeToRefs(uiStore);
const navigationStore = useNavigationStore();
const { trigger: navigationTrigger } = storeToRefs(navigationStore);
@@ -62,5 +71,6 @@ const { trigger: navigationTrigger } = storeToRefs(navigationStore);
.right {
display: flex;
align-items: center;
gap: 2rem;
}
</style>

View File

@@ -13,6 +13,13 @@
:icon="faStar"
class="master-icon"
/>
<p
class="vm-count"
v-tooltip="$t('vm-running', { count: vmCount })"
v-if="isReady"
>
{{ vmCount }}
</p>
<InfraAction
:icon="isExpanded ? faAngleDown : faAngleUp"
@click="toggle()"
@@ -41,6 +48,7 @@ import {
} from "@fortawesome/free-solid-svg-icons";
import { useToggle } from "@vueuse/core";
import { computed } from "vue";
import { useVmCollection } from "@/stores/xen-api/vm.store";
const props = defineProps<{
hostOpaqueRef: XenApiHost["$ref"];
@@ -58,6 +66,12 @@ const isCurrentHost = computed(
() => props.hostOpaqueRef === uiStore.currentHostOpaqueRef
);
const [isExpanded, toggle] = useToggle(true);
const { recordsByHostRef, isReady } = useVmCollection();
const vmCount = computed(
() => recordsByHostRef.value.get(props.hostOpaqueRef)?.length ?? 0
);
</script>
<style lang="postcss" scoped>
@@ -74,4 +88,18 @@ const [isExpanded, toggle] = useToggle(true);
.master-icon {
color: var(--color-orange-world-base);
}
.vm-count {
font-size: smaller;
font-weight: bold;
display: inline-flex;
align-items: center;
justify-content: center;
width: var(--size);
height: var(--size);
color: var(--color-blue-scale-500);
border-radius: calc(var(--size) / 2);
background-color: var(--color-extra-blue-base);
--size: 2.3rem;
}
</style>

View File

@@ -0,0 +1,34 @@
<template>
<UiModal color="error" @submit="modal.approve()">
<ConfirmModalLayout :icon="faExclamationCircle">
<template #title>{{ $t("invalid-field") }}</template>
<template #default>
{{ message }}
</template>
<template #buttons>
<ModalApproveButton>
{{ $t("ok") }}
</ModalApproveButton>
</template>
</ConfirmModalLayout>
</UiModal>
</template>
<script lang="ts" setup>
import ConfirmModalLayout from "@/components/ui/modals/layouts/ConfirmModalLayout.vue";
import ModalApproveButton from "@/components/ui/modals/ModalApproveButton.vue";
import UiModal from "@/components/ui/modals/UiModal.vue";
import { IK_MODAL } from "@/types/injection-keys";
import { faExclamationCircle } from "@fortawesome/free-solid-svg-icons";
import { inject } from "vue";
defineProps<{
message: string;
}>();
const modal = inject(IK_MODAL)!;
</script>
<style lang="postcss" scoped></style>

View File

@@ -0,0 +1,18 @@
<template>
<pre class="ui-raw"><slot /></pre>
</template>
<script lang="ts" setup></script>
<style lang="postcss" scoped>
.ui-raw {
background-color: var(--color-blue-scale-400);
text-align: left;
overflow: auto;
max-width: 100%;
width: 48em;
padding: 0.5em;
border-radius: 8px;
line-height: 150%;
}
</style>

View File

@@ -7,7 +7,7 @@ import {
} from "@/libs/xapi-stats";
import type { XenApiHost, XenApiVm } from "@/libs/xen-api/xen-api.types";
import { type Pausable, promiseTimeout, useTimeoutPoll } from "@vueuse/core";
import { computed, type ComputedRef, onUnmounted, ref } from "vue";
import { computed, type ComputedRef, onUnmounted, ref, type Ref } from "vue";
export type Stat<T> = {
canBeExpired: boolean;
@@ -42,7 +42,7 @@ export default function useFetchStats<
T extends XenApiHost | XenApiVm,
S extends HostStats | VmStats = T extends XenApiHost ? HostStats : VmStats,
>(getStats: GetStats<T, S>, granularity: GRANULARITY): FetchedStats<T, S> {
const stats = ref<Map<string, Stat<S>>>(new Map());
const stats = ref(new Map()) as Ref<Map<string, Stat<S>>>;
const timestamp = ref<number[]>([0, 0]);
const abortController = new AbortController();

View File

@@ -15,7 +15,7 @@ type HostConfig = {
export const useHostPatches = (hosts: MaybeRefOrGetter<XenApiHost[]>) => {
const hostStore = useHostStore();
const configByHost = reactive(new Map<string, HostConfig>());
const configByHost = reactive(new Map()) as Map<string, HostConfig>;
const fetchHostPatches = async (hostRef: XenApiHost["$ref"]) => {
if (!configByHost.has(hostRef)) {

View File

@@ -1,11 +1,11 @@
import { computed, ref, unref } from "vue";
import type { MaybeRef } from "@vueuse/core";
import { computed, ref, type Ref, unref } from "vue";
export default function useMultiSelect<T>(
usableIds: MaybeRef<T[]>,
selectableIds?: MaybeRef<T[]>
) {
const $selected = ref<Set<T>>(new Set());
const $selected = ref(new Set()) as Ref<Set<T>>;
const selected = computed({
get() {

View File

@@ -54,6 +54,7 @@ type ObjectTypeToRecordMapping = {
host: XenApiHost;
host_metrics: XenApiHostMetrics;
message: XenApiMessage<any>;
network: XenApiNetwork;
pool: XenApiPool;
sr: XenApiSr;
vm: XenApiVm;
@@ -113,9 +114,11 @@ export interface XenApiHost extends XenApiRecord<"host"> {
}
export interface XenApiSr extends XenApiRecord<"sr"> {
content_type: string;
name_label: string;
physical_size: number;
physical_utilisation: number;
shared: boolean;
}
export interface XenApiVm extends XenApiRecord<"vm"> {

View File

@@ -1,9 +1,13 @@
{
"about": "About",
"access-xoa": "Access XOA",
"add": "Add",
"add-filter": "Add filter",
"add-or": "+OR",
"add-sort": "Add sort",
"admin-login": "Admin login",
"admin-password": "Admin password",
"admin-password-confirm": "Confirm admin password",
"alarm-type": {
"cpu_usage": "CPU usage exceeds {n}%",
"disk_usage": "Disk usage exceeds {n}%",
@@ -26,12 +30,14 @@
"backup": "Backup",
"cancel": "Cancel",
"change-state": "Change state",
"check-errors": "Check out the errors:",
"click-to-display-alarms": "Click to display alarms:",
"click-to-return-default-pool": "Click here to return to the default pool",
"close": "Close",
"coming-soon": "Coming soon!",
"community": "Community",
"community-name": "{name} community",
"configuration": "Configuration",
"confirm-cancel": "Are you sure you want to cancel?",
"confirm-delete": "You're about to delete {0}",
"console": "Console",
@@ -43,14 +49,28 @@
"dashboard": "Dashboard",
"delete": "Delete",
"delete-vms": "Delete 1 VM | Delete {n} VMs",
"deploy": "Deploy",
"deploy-xoa": "Deploy XOA",
"deploy-xoa-available-on-desktop": "XOA deployment is available on your desktop interface",
"deploy-xoa-status": {
"configuring": "Configuring XOA…",
"importing": "Importing XOA…",
"not-responding": "XOA is not responding",
"ready": "XOA is ready!",
"starting": "Starting XOA…",
"waiting": "Waiting for XOA to respond…"
},
"descending": "descending",
"description": "Description",
"dhcp": "DHCP",
"disabled": "Disabled",
"display": "Display",
"dns": "DNS",
"do-you-have-needs": "You have needs and/or expectations? Let us know",
"documentation": "Documentation",
"documentation-name": "{name} documentation",
"edit-config": "Edit config",
"enabled": "Enabled",
"error-no-data": "Error, can't collect data.",
"error-occurred": "An error has occurred",
"export": "Export",
@@ -84,11 +104,16 @@
"force-shutdown": "Force shutdown",
"fullscreen": "Fullscreen",
"fullscreen-leave": "Leave fullscreen",
"gateway": "Gateway",
"n-gb-left": "{n} GB left",
"n-gb-required": "{n} GB required",
"go-back": "Go back",
"gzip": "gzip",
"here": "Here",
"hosts": "Hosts",
"invalid-field": "Invalid field",
"keep-me-logged": "Keep me logged in",
"keep-page-open": "Do not refresh or quit tab before end of deployment.",
"language": "Language",
"last-week": "Last week",
"learn-more": "Learn more",
@@ -104,6 +129,7 @@
"n-missing": "{n} missing",
"n-vms": "1 VM | {n} VMs",
"name": "Name",
"netmask": "Netmask",
"network": "Network",
"network-download": "Download",
"network-throughput": "Network throughput",
@@ -119,6 +145,7 @@
"not-found": "Not found",
"object": "Object",
"object-not-found": "Object {id} can't be found…",
"ok": "OK",
"on-object": "on {object}",
"open-console-in-new-tab": "Open console in new tab",
"or": "Or",
@@ -154,14 +181,23 @@
"selected-vms-in-execution": "Some selected VMs are running",
"send-ctrl-alt-del": "Send Ctrl+Alt+Del",
"send-us-feedback": "Send us feedback",
"select": {
"network": "Select a network",
"storage": "Select a storage"
},
"settings": "Settings",
"shutdown": "Shutdown",
"snapshot": "Snapshot",
"sort-by": "Sort by",
"ssh-account": "SSH account",
"ssh-login": "SSH login",
"ssh-password": "SSH password",
"ssh-password-confirm": "Confirm SSH password",
"stacked-cpu-usage": "Stacked CPU usage",
"stacked-ram-usage": "Stacked RAM usage",
"start": "Start",
"start-on-host": "Start on specific host",
"static-ip": "Static IP",
"stats": "Stats",
"status": "Status",
"storage": "Storage",
@@ -191,8 +227,18 @@
"vcpus-used": "vCPUs used",
"version": "Version",
"vm-is-running": "The VM is running",
"vm-running": "VM running | VMs running",
"vms": "VMs",
"xo-lite-under-construction": "XOLite is under construction",
"xoa-admin-account": "XOA admin account",
"xoa-deploy": "XOA deployment",
"xoa-deploy-failed": "Sorry, deployment failed!",
"xoa-deploy-retry": "Try again to deploy XOA",
"xoa-deploy-successful": "XOA deployment successful!",
"xoa-ip": "XOA IP address",
"xoa-password-confirm-different": "XOA password confirmation is different",
"xoa-ssh-account": "XOA SSH account",
"xoa-ssh-password-confirm-different": "SSH password confirmation is different",
"you-are-currently-on": "You are currently on: {0}",
"zstd": "zstd"
}

View File

@@ -1,9 +1,13 @@
{
"about": "À propos",
"access-xoa": "Accéder à la XOA",
"add": "Ajouter",
"add-filter": "Ajouter un filtre",
"add-or": "+OU",
"add-sort": "Ajouter un tri",
"admin-login": "Nom d'utilisateur administrateur",
"admin-password": "Mot de passe administrateur",
"admin-password-confirm": "Confirmer le mot de passe administrateur",
"alarm-type": {
"cpu_usage": "L'utilisation du CPU dépasse {n}%",
"disk_usage": "L'utilisation du disque dépasse {n}%",
@@ -26,12 +30,14 @@
"backup": "Sauvegarde",
"cancel": "Annuler",
"change-state": "Changer l'état",
"check-errors": "Consultez les erreurs :",
"click-to-display-alarms": "Cliquer pour afficher les alarmes :",
"click-to-return-default-pool": "Cliquer ici pour revenir au pool par défaut",
"close": "Fermer",
"coming-soon": "Bientôt disponible !",
"community": "Communauté",
"community-name": "Communauté {name}",
"configuration": "Configuration",
"confirm-cancel": "Êtes-vous sûr de vouloir annuler ?",
"confirm-delete": "Vous êtes sur le point de supprimer {0}",
"console": "Console",
@@ -43,14 +49,28 @@
"dashboard": "Tableau de bord",
"delete": "Supprimer",
"delete-vms": "Supprimer 1 VM | Supprimer {n} VMs",
"deploy": "Déployer",
"deploy-xoa": "Déployer XOA",
"deploy-xoa-available-on-desktop": "Le déploiement de la XOA est disponible sur ordinateur",
"deploy-xoa-status": {
"configuring": "Configuration de la XOA…",
"importing": "Importation de la XOA…",
"not-responding": "La XOA ne répond pas",
"ready": "La XOA est prête !",
"starting": "Démarrage de la XOA…",
"waiting": "En attente de réponse de la XOA…"
},
"descending": "descendant",
"description": "Description",
"dhcp": "DHCP",
"dns": "DNS",
"disabled": "Désactivé",
"display": "Affichage",
"do-you-have-needs": "Vous avez des besoins et/ou des attentes ? Faites le nous savoir",
"documentation": "Documentation",
"documentation-name": "Documentation {name}",
"edit-config": "Modifier config",
"enabled": "Activé",
"error-no-data": "Erreur, impossible de collecter les données.",
"error-occurred": "Une erreur est survenue",
"export": "Exporter",
@@ -84,11 +104,16 @@
"force-shutdown": "Forcer l'arrêt",
"fullscreen": "Plein écran",
"fullscreen-leave": "Quitter plein écran",
"gateway": "Passerelle",
"n-gb-left": "{n} Go libres",
"n-gb-required": "{n} Go requis",
"go-back": "Revenir en arrière",
"gzip": "gzip",
"here": "Ici",
"hosts": "Hôtes",
"invalid-field": "Champ invalide",
"keep-me-logged": "Rester connecté",
"keep-page-open": "Ne pas rafraichir ou quitter cette page avant la fin du déploiement.",
"language": "Langue",
"last-week": "Semaine dernière",
"learn-more": "En savoir plus",
@@ -104,6 +129,7 @@
"n-missing": "{n} manquant | {n} manquants",
"n-vms": "1 VM | {n} VMs",
"name": "Nom",
"netmask": "Masque réseau",
"network": "Réseau",
"network-download": "Descendant",
"network-throughput": "Débit du réseau",
@@ -119,6 +145,7 @@
"not-found": "Non trouvé",
"object": "Objet",
"object-not-found": "L'objet {id} est introuvable…",
"ok": "OK",
"on-object": "sur {object}",
"open-console-in-new-tab": "Ouvrir la console dans un nouvel onglet",
"or": "Ou",
@@ -154,14 +181,23 @@
"selected-vms-in-execution": "Certaines VMs sélectionnées sont en cours d'exécution",
"send-ctrl-alt-del": "Envoyer Ctrl+Alt+Suppr",
"send-us-feedback": "Envoyez-nous vos commentaires",
"select": {
"network": "Sélectionner un réseau",
"storage": "Sélectionner un SR"
},
"settings": "Paramètres",
"shutdown": "Arrêter",
"snapshot": "Instantané",
"sort-by": "Trier par",
"ssh-account": "Compte SSH",
"ssh-login": "Nom d'utilisateur SSH",
"ssh-password": "Mot de passe SSH",
"ssh-password-confirm": "Confirmer le mot de passe SSH",
"stacked-cpu-usage": "Utilisation CPU empilée",
"stacked-ram-usage": "Utilisation RAM empilée",
"start": "Démarrer",
"start-on-host": "Démarrer sur un hôte spécifique",
"static-ip": "IP statique",
"stats": "Stats",
"status": "Statut",
"storage": "Stockage",
@@ -191,8 +227,18 @@
"vcpus-used": "vCPUs utilisés",
"version": "Version",
"vm-is-running": "La VM est en cours d'exécution",
"vm-running": "VM en cours d'exécution | VMs en cours d'exécution",
"vms": "VMs",
"xo-lite-under-construction": "XOLite est en construction",
"xoa-admin-account": "Compte administrateur de la XOA",
"xoa-deploy": "Déploiement de la XOA",
"xoa-deploy-failed": "Erreur lors du déploiement de la XOA !",
"xoa-deploy-retry": "Ré-essayer de déployer une XOA",
"xoa-deploy-successful": "XOA deployée avec succès !",
"xoa-ip": "XOA IP address",
"xoa-password-confirm-different": "La confirmation du mot de passe XOA est différente",
"xoa-ssh-account": "Compte SSH de la XOA",
"xoa-ssh-password-confirm-different": "La confirmation du mot de passe SSH est différente",
"you-are-currently-on": "Vous êtes actuellement sur : {0}",
"zstd": "zstd"
}

View File

@@ -12,6 +12,11 @@ const router = createRouter({
name: "home",
component: HomeView,
},
{
path: "/xoa-deploy",
name: "xoa.deploy",
component: () => import("@/views/xoa-deploy/XoaDeployView.vue"),
},
{
path: "/settings",
name: "settings",

View File

@@ -0,0 +1,9 @@
import { useXenApiStoreSubscribableContext } from "@/composables/xen-api-store-subscribable-context.composable";
import { createUseCollection } from "@/stores/xen-api/create-use-collection";
import { defineStore } from "pinia";
export const useNetworkStore = defineStore("xen-api-network", () => {
return useXenApiStoreSubscribableContext("network");
});
export const useNetworkCollection = createUseCollection(useNetworkStore);

View File

@@ -0,0 +1,674 @@
<template>
<TitleBar :icon="faDownload">{{ $t("deploy-xoa") }}</TitleBar>
<div v-if="deploying" class="status">
<img src="@/assets/xo.svg" width="300" alt="Xen Orchestra" />
<!-- Error -->
<template v-if="error !== undefined">
<div>
<h2>{{ $t("xoa-deploy-failed") }}</h2>
<UiIcon :icon="faExclamationCircle" class="danger" />
</div>
<div class="error">
<strong>{{ $t("check-errors") }}</strong>
<UiRaw>{{ error }}</UiRaw>
</div>
<UiButton :icon="faDownload" @click="resetValues()">
{{ $t("xoa-deploy-retry") }}
</UiButton>
</template>
<!-- Success -->
<template v-else-if="url !== undefined">
<div>
<h2>{{ $t("xoa-deploy-successful") }}</h2>
<UiIcon :icon="faCircleCheck" class="success" />
</div>
<UiButton :icon="faArrowUpRightFromSquare" @click="openXoa">
{{ $t("access-xoa") }}
</UiButton>
</template>
<!-- Deploying -->
<template v-else>
<div>
<h2>{{ $t("xoa-deploy") }}</h2>
<!-- TODO: add progress bar -->
<p>{{ status }}</p>
</div>
<p class="warning">
<UiIcon :icon="faExclamationCircle" />
{{ $t("keep-page-open") }}
</p>
<UiButton
:disabled="vmRef === undefined"
color="error"
outlined
@click="cancel()"
>
{{ $t("cancel") }}
</UiButton>
</template>
</div>
<div v-else-if="isMobile" class="not-available">
<p>{{ $t("deploy-xoa-available-on-desktop") }}</p>
</div>
<div v-else class="card-view">
<UiCard>
<form @submit.prevent="deploy">
<FormSection :label="$t('configuration')">
<div class="row">
<FormInputWrapper
:label="$t('storage')"
:help="$t('n-gb-required', { n: REQUIRED_GB })"
>
<FormSelect v-model="selectedSr" required>
<option disabled :value="undefined">
{{ $t("select.storage") }}
</option>
<option
v-for="sr in filteredSrs"
:value="sr"
:key="sr.uuid"
:class="
sr.physical_size - sr.physical_utilisation <
REQUIRED_GB * 1024 ** 3
? 'warning'
: 'success'
"
>
{{ sr.name_label }} -
{{
$t("n-gb-left", {
n: Math.round(
(sr.physical_size - sr.physical_utilisation) / 1024 ** 3
),
})
}}
<span
v-if="
sr.physical_size - sr.physical_utilisation <
REQUIRED_GB * 1024 ** 3
"
>⚠️</span
>
</option>
</FormSelect>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper :label="$t('network')" required>
<FormSelect v-model="selectedNetwork" required>
<option disabled :value="undefined">
{{ $t("select.network") }}
</option>
<option
v-for="network in filteredNetworks"
:value="network"
:key="network.uuid"
>
{{ network.name_label }}
</option>
</FormSelect>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper>
<div class="radio-group">
<label
><FormRadio value="static" v-model="ipStrategy" />{{
$t("static-ip")
}}</label
>
<label
><FormRadio value="dhcp" v-model="ipStrategy" />{{
$t("dhcp")
}}</label
>
</div>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper
:label="$t('xoa-ip')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#network-configuration"
>
<FormInput
v-model="ip"
:disabled="!requireIpConf"
placeholder="xxx.xxx.xxx.xxx"
/>
</FormInputWrapper>
<FormInputWrapper
:label="$t('netmask')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#network-configuration"
>
<FormInput
v-model="netmask"
:disabled="!requireIpConf"
placeholder="255.255.255.0"
/>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper
:label="$t('dns')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#network-configuration"
>
<FormInput
v-model="dns"
:disabled="!requireIpConf"
placeholder="8.8.8.8"
/>
</FormInputWrapper>
<FormInputWrapper
:label="$t('gateway')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#network-configuration"
>
<FormInput
v-model="gateway"
:disabled="!requireIpConf"
placeholder="xxx.xxx.xxx.xxx"
/>
</FormInputWrapper>
</div>
</FormSection>
<FormSection :label="$t('xoa-admin-account')">
<div class="row">
<FormInputWrapper
:label="$t('admin-login')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#default-xo-account"
>
<FormInput
v-model="xoaUser"
required
placeholder="email@example.com"
/>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper
:label="$t('admin-password')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#default-xo-account"
>
<FormInput
type="password"
v-model="xoaPwd"
required
:placeholder="$t('password')"
/>
</FormInputWrapper>
<FormInputWrapper
:label="$t('admin-password-confirm')"
learnMoreUrl="https://xen-orchestra.com/docs/xoa.html#default-xo-account"
>
<FormInput
type="password"
v-model="xoaPwdConfirm"
required
:placeholder="$t('password')"
/>
</FormInputWrapper>
</div>
</FormSection>
<FormSection :label="$t('xoa-ssh-account')">
<div class="row">
<FormInputWrapper :label="$t('ssh-account')">
<label
><span>{{ $t("disabled") }}</span
><FormToggle v-model="enableSshAccount" /><span>{{
$t("enabled")
}}</span></label
>
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper :label="$t('ssh-login')">
<FormInput value="xoa" placeholder="xoa" disabled />
</FormInputWrapper>
</div>
<div class="row">
<FormInputWrapper :label="$t('ssh-password')">
<FormInput
type="password"
v-model="sshPwd"
:placeholder="$t('password')"
:disabled="!enableSshAccount"
:required="enableSshAccount"
/>
</FormInputWrapper>
<FormInputWrapper :label="$t('ssh-password-confirm')">
<FormInput
type="password"
v-model="sshPwdConfirm"
:placeholder="$t('password')"
:disabled="!enableSshAccount"
:required="enableSshAccount"
/>
</FormInputWrapper>
</div>
</FormSection>
<UiButtonGroup>
<UiButton outlined @click="router.back()">
{{ $t("cancel") }}
</UiButton>
<UiButton type="submit">
{{ $t("deploy") }}
</UiButton>
</UiButtonGroup>
</form>
</UiCard>
</div>
</template>
<script lang="ts" setup>
import { computed, ref } from "vue";
import {
faArrowUpRightFromSquare,
faCircleCheck,
faDownload,
faExclamationCircle,
} from "@fortawesome/free-solid-svg-icons";
import { storeToRefs } from "pinia";
import { useI18n } from "vue-i18n";
import { useModal } from "@/composables/modal.composable";
import { useNetworkCollection } from "@/stores/xen-api/network.store";
import { usePageTitleStore } from "@/stores/page-title.store";
import { useRouter } from "vue-router";
import { useSrCollection } from "@/stores/xen-api/sr.store";
import { useUiStore } from "@/stores/ui.store";
import { useXenApiStore } from "@/stores/xen-api.store";
import type { XenApiNetwork, XenApiSr } from "@/libs/xen-api/xen-api.types";
import FormInput from "@/components/form/FormInput.vue";
import FormInputWrapper from "@/components/form/FormInputWrapper.vue";
import FormRadio from "@/components/form/FormRadio.vue";
import FormSection from "@/components/form/FormSection.vue";
import FormSelect from "@/components/form/FormSelect.vue";
import FormToggle from "@/components/form/FormToggle.vue";
import TitleBar from "@/components/TitleBar.vue";
import UiButton from "@/components/ui/UiButton.vue";
import UiButtonGroup from "@/components/ui/UiButtonGroup.vue";
import UiCard from "@/components/ui/UiCard.vue";
import UiIcon from "@/components/ui/icon/UiIcon.vue";
import UiRaw from "@/components/ui/UiRaw.vue";
const REQUIRED_GB = 20;
const { t } = useI18n();
const router = useRouter();
usePageTitleStore().setTitle(() => t("deploy-xoa"));
const invalidField = (message: string) =>
useModal(() => import("@/components/modals/InvalidFieldModal.vue"), {
message,
});
const uiStore = useUiStore();
const { isMobile } = storeToRefs(uiStore);
const xapi = useXenApiStore().getXapi();
const { records: srs } = useSrCollection();
const filteredSrs = computed(() =>
srs.value
.filter((sr) => sr.content_type !== "iso" && sr.physical_size > 0)
// Sort: shared first then largest free space first
.sort((sr1, sr2) => {
if (sr1.shared === sr2.shared) {
return (
sr2.physical_size -
sr2.physical_utilisation -
(sr1.physical_size - sr1.physical_utilisation)
);
} else {
return sr1.shared ? -1 : 1;
}
})
);
const { records: networks } = useNetworkCollection();
const filteredNetworks = computed(() =>
[...networks.value].sort((network1, network2) =>
network1.name_label < network2.name_label ? -1 : 1
)
);
const deploying = ref(false);
const status = ref<string | undefined>();
const error = ref<string | undefined>();
const url = ref<string | undefined>();
const vmRef = ref<string | undefined>();
const resetValues = () => {
deploying.value = false;
status.value = undefined;
error.value = undefined;
url.value = undefined;
vmRef.value = undefined;
};
const openXoa = () => {
window.open(url.value, "_blank", "noopener");
};
const selectedSr = ref<XenApiSr>();
const selectedNetwork = ref<XenApiNetwork>();
const ipStrategy = ref<"static" | "dhcp">("dhcp");
const requireIpConf = computed(() => ipStrategy.value === "static");
const ip = ref("");
const netmask = ref("");
const dns = ref("");
const gateway = ref("");
const xoaUser = ref("");
const xoaPwd = ref("");
const xoaPwdConfirm = ref("");
const enableSshAccount = ref(true);
const sshPwd = ref("");
const sshPwdConfirm = ref("");
async function deploy() {
if (selectedSr.value === undefined || selectedNetwork.value === undefined) {
// Should not happen
console.error("SR or network is undefined");
return;
}
if (
ipStrategy.value === "static" &&
(ip.value === "" ||
netmask.value === "" ||
dns.value === "" ||
gateway.value === "")
) {
// Should not happen
console.error("Missing IP config");
return;
}
if (xoaUser.value === "" || xoaPwd.value === "") {
// Should not happen
console.error("Missing XOA credentials");
return;
}
if (xoaPwd.value !== xoaPwdConfirm.value) {
// TODO: use formal validation system
invalidField(t("xoa-password-confirm-different"));
return;
}
if (enableSshAccount.value && sshPwd.value === "") {
// Should not happen
console.error("Missing SSH password");
return;
}
if (enableSshAccount.value && sshPwd.value !== sshPwdConfirm.value) {
// TODO: use form validation system
invalidField(t("xoa-ssh-password-confirm-different"));
return;
}
deploying.value = true;
try {
status.value = t("deploy-xoa-status.importing");
vmRef.value = (
(await xapi.call("VM.import", [
"http://xoa.io:8888/",
selectedSr.value.$ref,
false, // full_restore
false, // force
])) as string[]
)[0];
status.value = t("deploy-xoa-status.configuring");
const [vifRef] = (await xapi.call("VM.get_VIFs", [
vmRef.value,
])) as string[];
await xapi.call("VIF.destroy", [vifRef]);
if (!deploying.value) {
return;
}
const [device] = (await xapi.call("VM.get_allowed_VIF_devices", [
vmRef.value,
])) as string[];
await xapi.call("VIF.create", [
{
device,
MAC: "",
MTU: selectedNetwork.value.MTU,
network: selectedNetwork.value.$ref,
other_config: {},
qos_algorithm_params: {},
qos_algorithm_type: "",
VM: vmRef.value,
},
]);
if (!deploying.value) {
return;
}
const promises = [
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/admin-account",
JSON.stringify({ email: xoaUser.value, password: xoaPwd.value }),
]),
];
// TODO: add host to servers with session token?
if (ipStrategy.value === "static") {
promises.push(
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/ip",
ip.value,
]),
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/netmask",
netmask.value,
]),
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/gateway",
gateway.value,
]),
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/dns",
dns.value,
])
);
}
if (enableSshAccount.value) {
promises.push(
xapi.call("VM.add_to_xenstore_data", [
vmRef.value,
"vm-data/system-account-xoa-password",
sshPwd.value,
])
);
}
await Promise.all(promises);
if (!deploying.value) {
return;
}
status.value = t("deploy-xoa-status.starting");
await xapi.call("VM.start", [
vmRef.value,
false, // start_paused
false, // force
]);
if (!deploying.value) {
return;
}
status.value = t("deploy-xoa-status.waiting");
const metricsRef = await xapi.call("VM.get_guest_metrics", [vmRef.value]);
let attempts = 120;
let networks: Record<string, string> | undefined;
await new Promise((resolve) => setTimeout(resolve, 10e3)); // Sleep 10s
do {
await new Promise((resolve) => setTimeout(resolve, 1e3)); // Sleep 1s
networks = await xapi.call("VM_guest_metrics.get_networks", [metricsRef]);
if (!deploying.value) {
return;
}
} while (--attempts > 0 && networks?.["0/ip"] === undefined);
if (networks?.["0/ip"] === undefined) {
status.value = t("deploy-xoa-status.not-responding");
return;
}
await Promise.all(
[
"admin-account",
"dns",
"gateway",
"ip",
"netmask",
"xoa-updater-credentials",
].map((key) =>
xapi.call("VM.remove_from_xenstore_data", [
vmRef.value,
`vm-data/${key}`,
])
)
);
status.value = t("deploy-xoa-status.ready");
// TODO: handle IPv6
url.value = `https://${networks["0/ip"]}`;
} catch (err: any) {
console.error(err);
error.value = err?.message ?? err?.code ?? "Unknown error";
}
}
async function cancel() {
const _vmRef = vmRef.value;
resetValues();
if (_vmRef !== undefined) {
try {
await xapi.call("VM.destroy", [_vmRef]);
} catch (err) {
console.error(err);
}
}
}
</script>
<style lang="postcss" scoped>
.card-view {
flex-direction: column;
}
.row {
width: 100%;
display: flex;
flex-wrap: wrap;
column-gap: 10rem;
}
.form-toggle {
margin: 0 1.5rem;
}
.form-input-wrapper {
flex-grow: 1;
min-width: 60rem;
}
.input-container * {
vertical-align: middle;
}
.radio-group {
display: flex;
flex-direction: row;
margin: 1.67rem 0;
& > * {
min-width: 20rem;
}
}
.form-radio {
margin-right: 1rem;
}
.not-available,
.status {
display: flex;
flex-direction: column;
gap: 42px;
justify-content: center;
align-items: center;
min-height: 76.5vh;
color: var(--color-extra-blue-base);
text-align: center;
padding: 5rem;
margin: auto;
h2 {
margin-bottom: 1rem;
}
* {
max-width: 100%;
}
}
.not-available {
font-size: 2rem;
}
.status {
color: var(--color-blue-scale-100);
}
.success {
color: var(--color-green-infra-base);
}
.danger {
color: var(--color-red-vates-base);
}
.success,
.danger {
&.ui-icon {
font-size: 3rem;
}
}
.error {
display: flex;
flex-direction: column;
text-align: left;
gap: 0.5em;
}
.warning {
color: var(--color-orange-world-base);
}
</style>
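The `filteredSrs` computed above orders candidate SRs with shared SRs first, then by descending free space. A minimal standalone sketch of that comparator (the `Sr` shape is trimmed to just the fields the sort uses):

```typescript
interface Sr {
  shared: boolean;
  physical_size: number;
  physical_utilisation: number;
}

// Shared SRs first; within each group, largest free space first
function compareSrs(sr1: Sr, sr2: Sr): number {
  if (sr1.shared !== sr2.shared) {
    return sr1.shared ? -1 : 1;
  }
  return (
    sr2.physical_size -
    sr2.physical_utilisation -
    (sr1.physical_size - sr1.physical_utilisation)
  );
}

const srs: Sr[] = [
  { shared: false, physical_size: 100, physical_utilisation: 10 }, // 90 free, local
  { shared: true, physical_size: 50, physical_utilisation: 40 }, // 10 free, shared
  { shared: true, physical_size: 80, physical_utilisation: 20 }, // 60 free, shared
];

const sorted = [...srs].sort(compareSrs);
console.log(sorted.map((sr) => sr.physical_size)); // [80, 50, 100]
```

Shared SRs are preferred because a shared SR lets the deployed XOA VM start on any host in the pool.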


@@ -15,7 +15,7 @@ import { Readable } from 'stream'
import { RemoteAdapter } from '@xen-orchestra/backups/RemoteAdapter.mjs'
import { RestoreMetadataBackup } from '@xen-orchestra/backups/RestoreMetadataBackup.mjs'
import { runBackupWorker } from '@xen-orchestra/backups/runBackupWorker.mjs'
import { Task } from '@xen-orchestra/backups/Task.mjs'
import { Task } from '@vates/task'
import { Xapi } from '@xen-orchestra/xapi'
const noop = Function.prototype
@@ -122,15 +122,15 @@ export default class Backups {
try {
await Task.run(
{
name: 'backup run',
data: {
properties: {
jobId: job.id,
jobName: job.name,
mode: job.mode,
name: 'backup run',
reportWhen: job.settings['']?.reportWhen,
scheduleId: schedule.id,
},
onLog,
onProgress: onLog,
},
() => run(params)
)
@@ -205,14 +205,14 @@ export default class Backups {
async (args, onLog) =>
Task.run(
{
data: {
properties: {
backupId,
jobId: metadata.jobId,
name: 'restore',
srId: srUuid,
time: metadata.timestamp,
},
name: 'restore',
onLog,
onProgress: onLog,
},
run
).catch(() => {}), // errors are handled by logs,
@@ -344,12 +344,14 @@ export default class Backups {
({ backupId, remote, xapi: xapiOptions }) =>
Disposable.use(app.remotes.getHandler(remote), xapiOptions && this.getXapi(xapiOptions), (handler, xapi) =>
runWithLogs(
async (args, onLog) =>
async (args, onProgress) =>
Task.run(
{
name: 'metadataRestore',
data: JSON.parse(String(await handler.readFile(`${backupId}/metadata.json`))),
onLog,
properties: {
metadata: JSON.parse(String(await handler.readFile(`${backupId}/metadata.json`))),
name: 'metadataRestore',
},
onProgress,
},
() =>
new RestoreMetadataBackup({


@@ -31,6 +31,7 @@
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/disposable": "^0.1.5",
"@vates/task": "^0.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.44.3",
"@xen-orchestra/fs": "^4.1.3",


@@ -10,7 +10,7 @@
"@xen-orchestra/log": "^0.6.0",
"lodash": "^4.17.21",
"node-fetch": "^3.3.0",
"vhd-lib": "^4.8.0"
"vhd-lib": "^4.9.0"
},
"engines": {
"node": ">=14"


@@ -1,4 +1,5 @@
export { default as host } from './host.mjs'
export { default as pool } from './pool.mjs'
export { default as SR } from './sr.mjs'
export { default as task } from './task.mjs'
export { default as VBD } from './vbd.mjs'


@@ -3,7 +3,6 @@ import { asyncMap } from '@xen-orchestra/async-map'
import { decorateClass } from '@vates/decorate-with'
import { defer } from 'golike-defer'
import { incorrectState, operationFailed } from 'xo-common/api-errors.js'
import pRetry from 'promise-toolbox/retry'
import { getCurrentVmUuid } from './_XenStore.mjs'
@@ -11,7 +10,7 @@ const waitAgentRestart = (xapi, hostRef, prevAgentStartTime) =>
new Promise(resolve => {
// even though the ref could change in case of pool master restart, tests show it stays the same
const stopWatch = xapi.watchObject(hostRef, host => {
if (+host.other_config.agent_start_time > prevAgentStartTime) {
if (+host.other_config.agent_start_time > prevAgentStartTime && host.enabled) {
stopWatch()
resolve()
}
@@ -35,6 +34,11 @@ class Host {
* @param {string} ref - Opaque reference of the host
*/
async smartReboot($defer, ref, bypassBlockedSuspend = false, bypassCurrentVmCheck = false) {
await this.callAsync('host.disable', ref)
// host may have been re-enabled already, this is not a problem
$defer.onFailure(() => this.callAsync('host.enable', ref))
let currentVmRef
try {
currentVmRef = await this.call('VM.get_by_uuid', await getCurrentVmUuid())
@@ -67,19 +71,15 @@ class Host {
})
const suspendedVms = []
if (await this.getField('host', ref, 'enabled')) {
await this.callAsync('host.disable', ref)
$defer(async () => {
await pRetry(() => this.callAsync('host.enable', ref), {
delay: 10e3,
retries: 6,
when: { code: 'HOST_STILL_BOOTING' },
})
// Resuming VMs should occur after host enabling to avoid triggering a 'NO_HOSTS_AVAILABLE' error
return asyncEach(suspendedVms, vmRef => this.callAsync('VM.resume', vmRef, false, false))
})
}
// Resuming VMs should occur after host enabling to avoid triggering a 'NO_HOSTS_AVAILABLE' error
//
// The defers run in reverse order.
$defer(() => asyncEach(suspendedVms, vmRef => this.callAsync('VM.resume', vmRef, false, false)))
$defer.onFailure(() =>
// if the host has not been rebooted, it might still be disabled and need to be enabled manually
this.callAsync('host.enable', ref)
)
await asyncEach(
residentVmRefs,

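The `smartReboot` changes above rely on `golike-defer` running registered defers in reverse (LIFO) order, which determines whether VM resumption happens before or after host re-enabling. A tiny synchronous model of that ordering (not the real library, just an illustration):

```typescript
// Minimal LIFO defer runner modelling golike-defer's ordering
function withDefer<T>(fn: (defer: (cb: () => void) => void) => T): T {
  const defers: Array<() => void> = [];
  try {
    return fn((cb) => defers.push(cb));
  } finally {
    // Run in reverse registration order, like golike-defer
    for (let i = defers.length - 1; i >= 0; i--) {
      defers[i]();
    }
  }
}

const order: string[] = [];
withDefer((defer) => {
  defer(() => order.push("registered first"));
  defer(() => order.push("registered second"));
});
console.log(order); // ["registered second", "registered first"]
```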

@@ -26,6 +26,7 @@
"@vates/async-each": "^1.0.0",
"@vates/decorate-with": "^2.0.0",
"@vates/nbd-client": "^3.0.0",
"@vates/task": "^0.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.6.0",
"d3-time-format": "^4.1.0",
@@ -34,7 +35,7 @@
"json-rpc-protocol": "^0.13.2",
"lodash": "^4.17.15",
"promise-toolbox": "^0.21.0",
"vhd-lib": "^4.8.0",
"vhd-lib": "^4.9.0",
"xo-common": "^0.8.0"
},
"private": false,


@@ -0,0 +1,106 @@
import { asyncEach } from '@vates/async-each'
import { createLogger } from '@xen-orchestra/log'
import { Task } from '@vates/task'
import { getCurrentVmUuid } from './_XenStore.mjs'
const noop = Function.prototype
async function pCatch(p, code, cb) {
try {
return await p
} catch (error) {
if (error.code === code) {
return cb(error)
}
throw error
}
}
const { warn } = createLogger('xo:xapi:pool')
export default class Pool {
async emergencyShutdown() {
const poolMasterRef = this.pool.master
let currentVmRef
try {
currentVmRef = await this.call('VM.get_by_uuid', await getCurrentVmUuid())
// try to move current VM on pool master
const hostRef = await this.call('VM.get_resident_on', currentVmRef)
if (hostRef !== poolMasterRef) {
await Task.run(
{
properties: {
name: 'Migrating current VM to pool master',
currentVm: { $ref: currentVmRef },
poolMaster: { $ref: poolMasterRef },
},
},
async () => {
await this.callAsync('VM.pool_migrate', currentVmRef, poolMasterRef, {})
}
).catch(noop)
}
} catch (error) {
warn(error)
}
await pCatch(this.call('pool.disable_ha'), 'HA_NOT_ENABLED', noop)
const hostRefs = await this.call('host.get_all')
// disable all hosts and suspend all VMs
await asyncEach(hostRefs, async hostRef => {
await this.call('host.disable', hostRef).catch(warn)
const [controlDomainRef, vmRefs] = await Promise.all([
this.call('host.get_control_domain', hostRef),
this.call('host.get_resident_VMs', hostRef),
])
await asyncEach(vmRefs, vmRef => {
// never stop current VM otherwise the emergencyShutdown process would be interrupted
if (vmRef !== currentVmRef && vmRef !== controlDomainRef) {
return Task.run(
{
properties: {
name: 'suspending VM',
host: { $ref: hostRef },
vm: { $ref: vmRef },
},
},
async () => {
await pCatch(this.callAsync('VM.suspend', vmRef), 'VM_BAD_POWER_STATE', noop)
}
).catch(noop)
}
})
})
const shutdownHost = ref =>
Task.run(
{
properties: {
name: 'shutting down host',
host: { $ref: ref },
},
},
async () => {
await this.callAsync('host.shutdown', ref)
}
).catch(noop)
// shutdown all non-pool master hosts
await asyncEach(hostRefs, hostRef => {
// pool master will be shutdown at the end
if (hostRef !== poolMasterRef) {
return shutdownHost(hostRef)
}
})
// shutdown pool master
await shutdownHost(poolMasterRef)
}
}

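The `pCatch` helper above only swallows rejections carrying a specific error `code`, re-throwing everything else; its behaviour can be checked in isolation:

```typescript
// Same shape as the helper in pool.mjs: await `p`, and if it rejects
// with the given error code, delegate to `cb`; otherwise re-throw.
async function pCatch<T>(
  p: Promise<T>,
  code: string,
  cb: (error: unknown) => T
): Promise<T> {
  try {
    return await p;
  } catch (error) {
    if ((error as { code?: string }).code === code) {
      return cb(error);
    }
    throw error;
  }
}

async function demo(): Promise<[string, string]> {
  // Matching code: swallowed and replaced by the callback's value
  const ignored = await pCatch(
    Promise.reject({ code: "HA_NOT_ENABLED" }),
    "HA_NOT_ENABLED",
    () => "ignored"
  );
  // Non-matching code: the rejection propagates
  const propagated = await pCatch(
    Promise.reject({ code: "OTHER_ERROR" }),
    "HA_NOT_ENABLED",
    () => "ignored"
  ).catch(() => "propagated");
  return [ignored, propagated];
}

demo().then((result) => console.log(result)); // ["ignored", "propagated"]
```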

@@ -4,16 +4,36 @@
### Enhancements
- [xo-cli] Supports NDJSON responses for the `rest get` command (it also respects the `--json` flag) [Forum#69326](https://xcp-ng.org/forum/post/69326)
## Released packages
- xo-cli 0.24.0
## **5.90.0** (2023-12-29)
### Highlights
- [VDI] Create XAPI task during NBD export (PR [#7228](https://github.com/vatesfr/xen-orchestra/pull/7228))
- [Backup] Use multiple links to speed up NBD backup (PR [#7216](https://github.com/vatesfr/xen-orchestra/pull/7216))
- [VDI/Export] Expose NBD settings in the XO and REST APIs (PR [#7251](https://github.com/vatesfr/xen-orchestra/pull/7251))
- [Tags] Implement scoped tags (PR [#7270](https://github.com/vatesfr/xen-orchestra/pull/7270))
- [HTTP] `http.useForwardedHeaders` setting can be enabled when XO is behind a reverse proxy to fetch clients IP addresses from `X-Forwarded-*` headers [Forum#67625](https://xcp-ng.org/forum/post/67625) (PR [#7233](https://github.com/vatesfr/xen-orchestra/pull/7233))
- [Plugin/auth-saml] Add _Force re-authentication_ setting [Forum#67764](https://xcp-ng.org/forum/post/67764) (PR [#7232](https://github.com/vatesfr/xen-orchestra/pull/7232))
- [VM] Trying to increase the memory of a running VM will now propose the option to automatically restart it and increase its memory [#7069](https://github.com/vatesfr/xen-orchestra/issues/7069) (PR [#7244](https://github.com/vatesfr/xen-orchestra/pull/7244))
- [xo-cli] Explicit error when attempting to use REST API before being registered
- [REST API] _XO config & Pool metadata Backup_ jobs are available at `/backup/jobs/metadata`
- [REST API] _Mirror Backup_ jobs are available at `/backup/jobs/mirror`
- [Host/Network/PIF] Display and ability to edit IPv6 field [#5400](https://github.com/vatesfr/xen-orchestra/issues/5400) (PR [#7218](https://github.com/vatesfr/xen-orchestra/pull/7218))
- [SR] Show an icon on SRs during VDI coalescing (with XCP-ng 8.3+) (PR [#7241](https://github.com/vatesfr/xen-orchestra/pull/7241))
### Enhancements
- [Forget SR] Changed the modal message and added a confirmation text to be sure the action is understood by the user [#7148](https://github.com/vatesfr/xen-orchestra/issues/7148) (PR [#7155](https://github.com/vatesfr/xen-orchestra/pull/7155))
- [REST API] `/backups` has been renamed to `/backup` (redirections are in place for compatibility)
- [REST API] _VM backup & Replication_ jobs have been moved from `/backup/jobs/:id` to `/backup/jobs/vm/:id` (redirections are in place for compatibility)
- [REST API] _XO config & Pool metadata Backup_ jobs are available at `/backup/jobs/metadata`
- [REST API] _Mirror Backup_ jobs are available at `/backup/jobs/mirror`
- [Plugin/auth-saml] Add _Force re-authentication_ setting [Forum#67764](https://xcp-ng.org/forum/post/67764) (PR [#7232](https://github.com/vatesfr/xen-orchestra/pull/7232))
- [HTTP] `http.useForwardedHeaders` setting can be enabled when XO is behind a reverse proxy to fetch clients IP addresses from `X-Forwarded-*` headers [Forum#67625](https://xcp-ng.org/forum/post/67625) (PR [#7233](https://github.com/vatesfr/xen-orchestra/pull/7233))
- [Backup] Use multiple links to speed up NBD backup (PR [#7216](https://github.com/vatesfr/xen-orchestra/pull/7216))
- [Backup] Show if disk is differential or full in incremental backups (PR [#7222](https://github.com/vatesfr/xen-orchestra/pull/7222))
- [VDI] Create XAPI task during NBD export (PR [#7228](https://github.com/vatesfr/xen-orchestra/pull/7228))
- [Menu/Proxies] Added a warning icon if unable to check proxies upgrade (PR [#7237](https://github.com/vatesfr/xen-orchestra/pull/7237))
### Bug fixes
@@ -24,19 +44,21 @@
- [Backup] Reduce memory consumption when using NBD (PR [#7216](https://github.com/vatesfr/xen-orchestra/pull/7216))
- [Mirror backup] Fix _Report when_ setting being reset to _Failure_ when editing backup job (PR [#7235](https://github.com/vatesfr/xen-orchestra/pull/7235))
- [RPU] VMs are correctly migrated to their original host (PR [#7238](https://github.com/vatesfr/xen-orchestra/pull/7238))
- [Backup/Report] Missing report for Mirror Backup (PR [#7254](https://github.com/vatesfr/xen-orchestra/pull/7254))
### Released packages
- vhd-lib 4.8.0
- @vates/nbd-client 3.0.0
- @xen-orchestra/xapi 4.1.0
- @xen-orchestra/backups 0.44.3
- @xen-orchestra/proxy 0.26.42
- xo-server 5.130.0
- xo-server-auth-saml 0.11.0
- xo-server-transport-email 1.0.0
- xo-server-transport-slack 0.0.1
- xo-web 5.130.1
- xo-cli 0.23.0
- vhd-lib 4.9.0
- xo-server 5.132.0
- xo-web 5.133.0
## **5.89.0** (2023-11-30)


@@ -7,17 +7,23 @@
> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [SR] Show an icon on SRs during VDI coalescing (with XCP-ng 8.3+) (PR [#7241](https://github.com/vatesfr/xen-orchestra/pull/7241))
- [VDI/Export] Expose NBD settings in the XO and REST APIs (PR [#7251](https://github.com/vatesfr/xen-orchestra/pull/7251))
- [Menu/Proxies] Added a warning icon if unable to check proxies upgrade (PR [#7237](https://github.com/vatesfr/xen-orchestra/pull/7237))
- [Settings/Logs] Use GitHub issue form with pre-filled fields when reporting a bug [#7142](https://github.com/vatesfr/xen-orchestra/issues/7142) (PR [#7274](https://github.com/vatesfr/xen-orchestra/pull/7274))
- [REST API] New pool action: `emergency_shutdown`, it suspends all the VMs and then shuts down all the hosts [#7277](https://github.com/vatesfr/xen-orchestra/issues/7277) (PR [#7279](https://github.com/vatesfr/xen-orchestra/pull/7279))
- [Tasks] Hide `/rrd_updates` tasks by default
### Bug fixes
- [Backup/Report] Missing report for Mirror Backup (PR [#7254](https://github.com/vatesfr/xen-orchestra/pull/7254))
> Users must be able to say: “I had this issue, happy to know it's fixed”
- [Proxies] Fix `this.getObject` is not a function during deployment
- [Settings/Logs] Fix `sr.getAllUnhealthyVdiChainsLength: not enough permissions` error with non-admin users (PR [#7265](https://github.com/vatesfr/xen-orchestra/pull/7265))
- [Settings/Logs] Fix `proxy.getAll: not enough permissions` error with non-admin users (PR [#7249](https://github.com/vatesfr/xen-orchestra/pull/7249))
- [Replication/Health Check] Fix `healthCheckVm.add_tag is not a function` error [Forum#69156](https://xcp-ng.org/forum/post/69156)
- [Plugin/load-balancer] Prevent unwanted migrations to hosts with low free memory (PR [#7288](https://github.com/vatesfr/xen-orchestra/pull/7288))
- Avoid unnecessary `pool.add_to_other_config: Duplicate key` error in XAPI log [Forum#68761](https://xcp-ng.org/forum/post/68761)
- [Jobs] Reset parameters when the method is changed to avoid running with invalid parameters [Forum#69299](https://xcp-ng.org/forum/post/69299)
- [Metadata Backup] Fix `ENOENT` error when restoring an _XO Config_ backup [Forum#68999](https://xcp-ng.org/forum/post/68999)
### Packages to release
> When modifying a package, add it here with its release type.
@@ -34,8 +40,11 @@
<!--packages-start-->
- vhd-lib patch
- @xen-orchestra/backups patch
- @xen-orchestra/xapi minor
- xen-api patch
- xo-server minor
- xo-server-load-balancer patch
- xo-web minor
<!--packages-end-->


@@ -1,46 +1,86 @@
# Contributor Covenant Code of Conduct
# Code of Conduct - Xen Orchestra
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
Examples of behavior that contributes to a positive environment for our
community include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior by participants include:
Examples of unacceptable behavior include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
## Enforcement Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at julien.fontanet@vates.fr. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
julien.fontanet@isonoe.net.
All complaints will be reviewed and investigated promptly and fairly.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
at [https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations


@@ -24,7 +24,7 @@ Xen Orchestra itself is built as a modular solution. Each part has its role.
## xo-server (server)
The core is "[xo-server](https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-server/)" - a daemon dealing directly with XCP-ng/XenServer or XAPI capable hosts. This is where users are stored, and it's the center point for talking to your whole Xen infrastructure.
XO-Server is the core of Xen Orchestra. Its central role opens a lot of possibilities versus other solutions - let's see why.


@@ -24,34 +24,34 @@ Nevertheless, there may be some reasons for XO to trigger a key (full) export in
## VDI chain protection
Backup jobs regularly delete snapshots. When a snapshot is deleted, either manually or via a backup job, it triggers the need for XCP-ng/XenServer to coalesce the VDI chain - to merge the remaining VDIs and base copies in the chain. This means generally we cannot take too many new snapshots on said VM until XCP-ng/XenServer has finished running a coalesce job on the VDI chain.
This mechanism and scheduling is handled by XCP-ng/XenServer itself, not Xen Orchestra. But we can check your existing VDI chain and avoid creating more snapshots than your storage can merge. If we don't, this will lead to catastrophic consequences. Xen Orchestra is the **only** XCP-ng/XenServer backup product that takes this into account and offers protection.
Without this detection, you could have 2 potential issues:
- `The Snapshot Chain is too Long`
- `SR_BACKEND_FAILURE_44 (insufficient space)`
The first issue is a chain that contains more than 30 elements (fixed XCP-ng/XenServer limit), and the other one means it's full because the "coalesce" process couldn't keep up the pace and the storage filled up.
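The protection can be pictured as a pre-snapshot check. This is an illustrative sketch only: the function name and the free-space rule are made up for the example, and only the 30-element limit comes from the fixed XCP-ng/XenServer limit described above.

```javascript
// Illustrative sketch of a pre-snapshot VDI chain check.
// Only the 30-element limit is the real, fixed XCP-ng/XenServer limit;
// the function and the free-space rule are hypothetical.
const MAX_CHAIN_LENGTH = 30

function vdiChainProtection(chainLength, srFreeBytes, estimatedSnapshotBytes) {
  if (chainLength >= MAX_CHAIN_LENGTH) {
    return 'The Snapshot Chain is too Long'
  }
  if (srFreeBytes < estimatedSnapshotBytes) {
    return 'SR_BACKEND_FAILURE_44 (insufficient space)'
  }
  return null // safe to take a new snapshot
}
```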
In the end, this message is a **protection mechanism preventing damage to your SR**. The backup job will fail, but XCP-ng/XenServer itself should eventually automatically coalesce the snapshot chain, and the backup job should complete the next time it runs.
Just remember this: **a coalesce should happen every time a snapshot is removed**.
> You can read more on this on our dedicated blog post regarding [XCP-ng/XenServer coalesce detection](https://xen-orchestra.com/blog/xenserver-coalesce-detection-in-xen-orchestra/).
### Troubleshooting a constant VDI Chain Protection message (XCP-ng/XenServer failure to coalesce)
As previously mentioned, this message can be normal and it just means XCP-ng/XenServer needs to perform a coalesce to merge old snapshots. However, if you repeatedly get this message and it seems XCP-ng/XenServer is not coalescing, you can take a few steps to determine why.
First check SMlog on the XCP-ng/XenServer host for messages relating to VDI corruption or coalesce job failure. For example, by running `cat /var/log/SMlog | grep -i exception` or `cat /var/log/SMlog | grep -i error` on the XCP-ng/XenServer host with the affected storage.
Coalesce jobs can also fail to run if the SR does not have enough free space. Check the problematic SR and make sure it has enough free space, generally 30% or more free is recommended depending on VM size. You can check if this is the issue by searching `SMlog` with `grep -i coales /var/log/SMlog` (you may have to look at previous logs such as `SMlog.1`).
You can check if a coalesce job is currently active by running `ps axf | grep vhd` on the XCP-ng/XenServer host and looking for a VHD process in the results (one of the resulting processes will be the grep command you just ran, ignore that one).
If you don't see any running coalesce jobs, and can't find any other reason that XCP-ng/XenServer has not started one, you can attempt to make it start a coalesce job by rescanning the SR. This is harmless to try, but will not always result in a coalesce. Visit the problematic SR in the XOA UI, then click the "Rescan All Disks" button towards the top right: it looks like a refresh circle icon. This should begin the coalesce process - if you click the Advanced tab in the SR view, the "disks needing to be coalesced" list should become smaller and smaller.
As a last resort, migrating the VM (more specifically, its disks) to a new storage repository will also force a coalesce and solve this issue. That means migrating a VM to another host (with its own storage) and back will force the VDI chain for that VM to be coalesced, and get rid of the `VDI Chain Protection` message.


@@ -192,9 +192,9 @@ Any Debian Linux mount point could be supported this way, until we add further o
All your scheduled backups are accessible in the "Restore" view in the backup section of Xen Orchestra.
1. Search the VM Name and click on the blue button with a white arrow
2. Choose the backup you want to restore
3. Select the SR where you want to restore it and click "OK"
:::tip
You can restore your backup even on a brand new host/pool and on brand new hardware.
@@ -311,7 +311,7 @@ The first purely sequential strategy will lead to the fact that: **you can't pre
If you need your backup to be done at a specific time you should consider creating a specific backup task for this VM.
:::
Strategy number 2 is to parallelise: all the snapshots will be taken at 3 AM. However **it's risky without limits**: it means potentially doing 50 snapshots or more at once on the same storage. **Since XCP-ng/XenServer doesn't have a queue**, it will try to do all of them at once. This is also prone to race conditions and could cause crashes on your storage.
By default the _parallel strategy_ is, on paper, the most logical one. But you need to be careful and give it some limits on concurrency.
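What "limits on concurrency" means can be sketched with a minimal limiter (illustrative only, this is not Xen Orchestra's actual scheduler): at most `limit` snapshot operations run at once, and the rest wait in a queue.

```javascript
// Minimal concurrency limiter: at most `limit` tasks run at a time.
// Illustrative sketch only, not Xen Orchestra's actual scheduler.
function createLimiter(limit) {
  let active = 0
  const queue = []
  function next() {
    if (active < limit && queue.length > 0) {
      active++
      const { fn, resolve, reject } = queue.shift()
      fn()
        .then(resolve, reject)
        .finally(() => {
          active--
          next() // a slot freed up: start the next queued task
        })
    }
  }
  return fn =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject })
      next()
    })
}
```

With `limit = 2`, 50 snapshot calls are issued two at a time instead of all at once, which keeps the storage from being hammered at 3 AM.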


@@ -118,6 +118,22 @@ On XOA, the log file for XO-server is in `/var/log/syslog`. It contains all the
If you don't want to expose Xen Orchestra directly to the outside, or you just want to integrate it with your existing infrastructure, you can use a reverse proxy.
First of all you need to allow Xen Orchestra to use `X-Forwarded-*` headers to determine the IP addresses of clients:
```toml
[http]
# Accepted values for this setting:
# - false (default): do not use the headers
# - true: always use the headers
# - a list of trusted addresses: the headers will be used only if the connection
# is coming from one of these addresses
#
# More info about the accepted values: https://www.npmjs.com/package/proxy-addr?activeTab=readme#proxyaddrreq-trust
#
# > Note: X-Forwarded-* headers are easily spoofed and the detected IP addresses are unreliable.
useForwardedHeaders = ['127.0.0.1']
```
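The trust semantics can be sketched as follows (illustrative only; the real resolution is delegated to the `proxy-addr` package linked above):

```javascript
// Illustrative sketch of the trust semantics described above; the real
// check is performed by the `proxy-addr` package.
function clientAddress(socketAddress, forwardedFor, trust) {
  const trusted =
    trust === true || (Array.isArray(trust) && trust.includes(socketAddress))
  if (!trusted || forwardedFor === undefined) {
    return socketAddress
  }
  // X-Forwarded-For may contain a comma-separated chain; the left-most
  // entry is the original client (and the easiest to spoof).
  return forwardedFor.split(',')[0].trim()
}
```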
### Apache
As `xo-web` and `xo-server` communicate with _WebSockets_, you need to have the [`mod_proxy`](http://httpd.apache.org/docs/2.4/mod/mod_proxy.html), [`mod_proxy_http`](http://httpd.apache.org/docs/2.4/mod/mod_proxy_http.html), [`mod_proxy_wstunnel`](http://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html) and [`mod_rewrite`](http://httpd.apache.org/docs/2.4/mod/mod_rewrite.html) modules enabled.


@@ -12,7 +12,7 @@ If you don't have any servers connected, you'll see a panel telling you to add a
### Add a host
Just click on "Add server", enter the IP of a XCP-ng/XenServer host (ideally the pool master if in a pool):
![](./assets/xo5addserver.png)
@@ -69,12 +69,12 @@ All your pools are displayed here:
You can also see missing patches in red.
:::tip
Did you know? Even a single XCP-ng/XenServer host is inside a pool!
:::
## Live filter search
The idea is not just to provide a good search engine, but also a complete solution for managing all your XCP-ng/XenServer infrastructure. Ideally:
- less clicks to see or do what you need
- find a subset of interesting objects
@@ -238,7 +238,7 @@ The next step is to select a template:
![](./assets/xo5createwithtemplate.png)
:::tip
What is a XCP-ng/XenServer template? It can be 2 things: first an "empty" template, meaning it contains only the configuration for your future VM, such as example settings (minimum disk size, RAM and CPU, BIOS settings if HVM etc.) Or it could be a previous VM you converted into a template: in this case, creating a VM will clone the existing disks.
:::
##### Name and description
@@ -289,7 +289,7 @@ Please refer to the [XCP-ng CloudInit section](advanced.md#cloud-init) for more.
#### Interfaces
This is the network section of the VM configuration: in general, MAC field is kept empty (autogenerated from XCP-ng/XenServer). We also select the management network by default, but you can change it to reflect your own network configuration.
#### Disks
@@ -331,7 +331,7 @@ To do so: Access the Xen Orchestra page for your running VM, then enter the Disk
#### Offline VDI migration
Even though it's not currently supported in XCP-ng/XenServer, we can do it in Xen Orchestra. It's exactly the same process as a running VM.
### VM recovery
@@ -347,7 +347,7 @@ Activating "Auto Power on" for a VM will also configure the pool accordingly. If
### VM high availability (HA)
If your pool supports HA (shared storage is required), you can activate "HA". Read our blog post for more details on [VM high availability with XCP-ng/XenServer](https://xen-orchestra.com/blog/xenserver-and-vm-high-availability/).
#### Docker management
@@ -371,7 +371,7 @@ If one VM has for example, "Double", it will have double the priority on the Xen
### VM Copy
VM copy allows you to make an export and an import via streaming. You can target any SR in your whole XCP-ng/XenServer infrastructure (even across different pools!)
### Snapshot management
@@ -387,7 +387,7 @@ By default, XOA will try to make a snapshot with quiesce. If the VM does not sup
## VM import and export
Xen Orchestra can import and export VMs in XVA format (the XCP-ng/XenServer format) or import OVA files (OVF1 format).
:::tip
We support OVA import from VirtualBox. Feel free to report issues with OVA from other virtualization platforms.
@@ -590,7 +590,7 @@ To remove one host from a pool, you can go to the "Advanced" tab of the host pag
## Visualizations
Visualizations can help you to understand your XCP-ng/XenServer infrastructure, as well as correlate events and detect bottlenecks.
:::tip
:construction_worker: This section needs to be completed: screenshots and how-to :construction_worker:
@@ -608,7 +608,7 @@ You can also update all your hosts (install missing patches) from this page.
### Parallel Coordinates
A Parallel Coordinates visualization helps to detect proportions in a hierarchical environment. In a XCP-ng/XenServer environment, it's especially useful if you want to see useful information from a large amount of data.
![](./assets/parralelcoordinates.png)
@@ -687,7 +687,7 @@ This allows you to enjoy Docker containers displayed directly in Xen Orchestra.
### Docker plugin installation
This first step is needed until Docker is supported natively in the XCP-ng/XenServer API (XAPI).
:::tip
The plugin should be installed on every host you will be using, even if they are on the same pool.


@@ -6,7 +6,7 @@ Xen Orchestra is an Open Source project created by [Olivier Lambert](https://www
The idea of Xen Orchestra was originally born in 2009, see the original announcement on the [Xen User mailing list](https://lists.xenproject.org/archives/html/xen-users/2009-09/msg00537.html). It worked on Xen and `xend` (now deprecated).
## XO reboot for XCP-ng/XenServer
The project was rebooted at the end of 2012, and "pushed" thanks to Lars Kurth. It has also been a commercial project since 2016, now with a team of 6 people dedicated to it full-time.


@@ -121,6 +121,16 @@ Content-Type: application/x-ndjson
{"name_label":"Debian 10 Cloudinit self-service","power_state":"Halted","url":"/rest/v0/vms/5019156b-f40d-bc57-835b-4a259b177be1"}
```
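Since an NDJSON body is just one JSON document per line, a client-side parser can be sketched in a few lines (hypothetical helper, not part of the API):

```javascript
// Parse an NDJSON body: one JSON document per non-empty line.
// Hypothetical client-side helper, not part of the REST API itself.
function parseNdjson(body) {
  return body
    .split('\n')
    .filter(line => line.trim() !== '')
    .map(line => JSON.parse(line))
}
```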
## Task monitoring
When fetching a task record, the special `wait` query string can be used. If its value is `result` it will wait for the task to be resolved (either success or failure) before returning, otherwise it will wait for the next change of state.
```sh
curl \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
'https://xo.example.org/rest/v0/tasks/0lr4zljbe?wait=result'
```
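For scripting, the same request can be built programmatically. This is a hypothetical helper; the host, task id, and token are placeholders from the example above:

```javascript
// Build the URL for GET /rest/v0/tasks/<id>?wait=result
// Hypothetical helper: names and values are placeholders.
function taskRequest(baseUrl, taskId, { wait } = {}) {
  const url = new URL(`/rest/v0/tasks/${encodeURIComponent(taskId)}`, baseUrl)
  if (wait !== undefined) {
    url.searchParams.set('wait', wait)
  }
  return url.href
}

// Pass the result to fetch() with the authenticationToken cookie, e.g.:
// fetch(taskRequest('https://xo.example.org', '0lr4zljbe', { wait: 'result' }),
//       { headers: { cookie: 'authenticationToken=...' } })
```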
## Properties update
> This feature is restricted to `name_label`, `name_description` and `tags` at the moment.
@@ -302,9 +312,7 @@ curl \
'https://xo.example.org/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac/actions/snapshot'
```
By default, actions are asynchronous and return the reference of the task associated with the request (see [_Task monitoring_](#task-monitoring)).
The `?sync` flag can be used to run the action synchronously without requiring task monitoring. The result of the action will be returned encoded as JSON:


@@ -14,7 +14,7 @@ It means you don't have a default SR set on the pool you are importing XOA on. T
## Unreachable after boot
XOA uses HVM mode. If your physical host doesn't support virtualization extensions, XOA won't work. To check if your XCP-ng/XenServer supports hardware-assisted virtualization (HVM), you can enter this command on your host: `grep --color vmx /proc/cpuinfo`. If you don't get any result, it means XOA won't work on this hardware.
## Set or recover XOA VM password
@@ -32,7 +32,9 @@ Then you need to restart the VM.
If you have lost your password to log in to the XOA webpage, you can reset it. From the XOA CLI (for login/access info for the CLI, [see here](xoa.md#first-console-connection)), use the following command and insert the email/account you wish to recover:
```sh
sudo xo-server-recover-account youremail@here.com
```
It will prompt you to set a new password. If you provide an email here that does not exist in XOA yet, it will create a new account using it, with admin permissions - you can use that new account to log in as well.
@@ -195,7 +197,7 @@ If you have ghost tasks accumulating in your Xen Orchestra you can try the follo
1. refresh the web page
1. disconnect and reconnect the Xen pool/server owning the tasks
1. restart the XenAPI Toolstack of the XCP-ng/XenServer master
1. restart xo-server
### Redownload and rebuild


@@ -255,7 +255,7 @@ To create a new set of resources to delegate, go to the "Self Service" section i
Only an admin can create a set of resources
:::
To allow people to create VMs as they want, we need to give them a _part_ of your XCP-ng/XenServer resources (disk space, CPUs, RAM). You can call this "general quotas" if you like. But you first need to decide which resources will be used.
In this example below, we'll create a set called **"sandbox"** with:


@@ -58,18 +58,18 @@ Please only use this if you have issues with [the default way to deploy XOA](ins
### Via a bash script
Alternatively, you can deploy it by connecting to your XCP-ng/XenServer host and executing the following:
```sh
bash -c "$(wget -qO- https://xoa.io/deploy)"
```
:::tip
This won't write or modify anything on your XCP-ng/XenServer host: it will just import the XOA VM into your default storage repository.
:::
:::warning
If you are using an old XCP-ng/XenServer version, you may get a `curl` error:
```
curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version


@@ -39,7 +39,7 @@ In order to work, XOSAN need a minimal set of requirements.
### Storage
XOSAN can be deployed on an existing **Local LVM storage**, which XCP-ng/XenServer configures by default during its installation. You need 10GiB for the XOSAN VM (one on each host) and the rest for XOSAN data, i.e. all the space left.
However, if you have unused disks on your host, you can also create a local LVM storage yourself using Xen Orchestra:
@@ -47,7 +47,7 @@ However, if you have unused disks on your host, you can also create yourself a l
- Select the host having the disk you want to use for XOSAN
- Select "Local LVM" and enter the path of this disk (e.g: `/dev/sdf`)
> You can discover disks names by issuing `fdisk -l` command on your XCP-ng/XenServer host.
> **Recommended hardware:** we don't have specific hardware recommendations regarding hard disks. It could be a disk directly, or even a disk exposed via a hardware RAID. Note that the RAID mode will influence the overall speed of XOSAN.
@@ -183,13 +183,13 @@ It's very similar to **RAID 10**. In this example, you'll have 300GiB of data us
#### Examples
Here are some examples depending on the number of XCP-ng/XenServer hosts.
##### 2 hosts
This is a kind of special mode. On a 2-node setup, one node must know what's happening if it can't contact the other node. This is called a **split-brain** scenario. To avoid data loss, it goes read-only. But there is a way to overcome this, with a special node called **the arbiter**. It only requires an extra VM using very little disk space.
Thanks to this arbiter, you'll have 3 nodes running on 2 XenServer hosts:
Thanks to this arbiter, you'll have 3 nodes running on 2 XCP-ng/XenServer hosts:
- if the host with 1 node is down, the other host will continue to provide a working XOSAN
- if the host with 2 nodes (1 normal and 1 arbiter) is down, the other node will go into read-only mode, to avoid a split-brain scenario.
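The rule behind both cases is a plain majority quorum, which can be sketched as (illustrative only, not the actual replication logic):

```javascript
// Majority quorum: a surviving group keeps writing only if it holds a
// strict majority of the nodes, arbiter included. Illustrative only.
function clusterMode(aliveNodes, totalNodes) {
  return aliveNodes * 2 > totalNodes ? 'read-write' : 'read-only'
}
```

With 3 nodes total (2 regular plus the arbiter), losing 1 node leaves a majority of 2 and the cluster keeps writing; losing 2 leaves a single node, which goes read-only.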
@@ -312,13 +312,13 @@ Once you are ready, you can click on `Create`. XOSAN will automatically deploy i
## Try it!
XOSAN is a 100% software defined solution for XCP-ng/XenServer hyperconvergence. You can unlock a free 50GiB cluster to test the solution in your infrastructure and discover all the benefits you can get by using XOSAN.
### Step 1
You will need to be registered on our website in order to use Xen Orchestra. If you are not yet registered, [here is the way](https://xen-orchestra.com/#!/signup)
SSH in your XCP-ng/XenServer and use the command line `bash -c "$(wget -qO- https://xoa.io/deploy)"` - it will deploy Xen Orchestra Appliance on your XCP-ng/XenServer infrastructure which is required to use XOSAN.
> Note: You can also download the XVA file and follow [these instructions](https://xen-orchestra.com/docs/xoa.html#the-alternative).


@@ -31,7 +31,7 @@
"lodash": "^4.17.21",
"promise-toolbox": "^0.21.0",
"uuid": "^9.0.0",
"vhd-lib": "^4.8.0"
"vhd-lib": "^4.9.0"
},
"scripts": {
"postversion": "npm publish",


@@ -45,10 +45,12 @@ exports.createNbdVhdStream = async function createVhdStream(
const bufFooter = await readChunkStrict(sourceStream, FOOTER_SIZE)
const header = unpackHeader(await readChunkStrict(sourceStream, HEADER_SIZE))
// compute BAT in order
const batSize = Math.ceil((header.maxTableEntries * 4) / SECTOR_SIZE) * SECTOR_SIZE
// skip space between header and beginning of the table
await skipStrict(sourceStream, header.tableOffset - (FOOTER_SIZE + HEADER_SIZE))
// new table offset
header.tableOffset = FOOTER_SIZE + HEADER_SIZE
const streamBat = await readChunkStrict(sourceStream, batSize)
let offset = FOOTER_SIZE + HEADER_SIZE + batSize
// check if parentlocator are ordered


@@ -306,7 +306,7 @@ class Merger {
const finalVhdSize = this.#state?.vhdSize ?? 0
const mergedDataSize = this.#state?.mergedDataSize ?? 0
await this.#handler.unlink(this.#statePath).catch(warn)
return { mergedDataSize, finalVhdSize }
}
}


@@ -1,7 +1,7 @@
{
"private": false,
"name": "vhd-lib",
"version": "4.8.0",
"version": "4.9.0",
"license": "AGPL-3.0-or-later",
"description": "Primitives for VHD file handling",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/packages/vhd-lib",


@@ -10,6 +10,6 @@
"readable-stream": "^4.4.2",
"source-map-support": "^0.5.21",
"throttle": "^1.0.3",
"vhd-lib": "^4.8.0"
"vhd-lib": "^4.9.0"
}
}


@@ -361,7 +361,16 @@ export class Xapi extends EventEmitter {
if (value === null) {
return this.call(`${type}.remove_from_${field}`, ref, entry).then(noop)
}
// First, remove any previous value to avoid triggering an unnecessary
// `MAP_DUPLICATE_KEY` error which will appear in the XAPI logs
//
// This is safe because this method does not throw if the entry is missing.
//
// See https://xcp-ng.org/forum/post/68761
await this.call(`${type}.remove_from_${field}`, ref, entry)
try {
await this.call(`${type}.add_to_${field}`, ref, entry, value)
return
@@ -370,7 +379,6 @@ export class Xapi extends EventEmitter {
throw error
}
}
}
}
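The idea of the change above (always remove the entry first, so that `add_to_*` never hits a duplicate key) can be sketched against a plain `Map` standing in for the XAPI calls. This is a hypothetical stand-in, not the real API:

```javascript
// Sketch of the remove-then-add pattern, with a plain Map standing in
// for the XAPI `add_to_*` / `remove_from_*` calls. `add` throws on
// duplicate keys, like XAPI's MAP_DUPLICATE_KEY error.
function makeStore() {
  const map = new Map()
  return {
    add(key, value) {
      if (map.has(key)) throw new Error('MAP_DUPLICATE_KEY')
      map.set(key, value)
    },
    remove(key) {
      map.delete(key) // no error if the key is missing
    },
    get: key => map.get(key),
  }
}

function setEntry(store, key, value) {
  store.remove(key) // safe even if absent, avoids MAP_DUPLICATE_KEY
  store.add(key, value)
}
```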


@@ -44,7 +44,12 @@ async function connect() {
const xo = new Xo({ rejectUnauthorized: !allowUnauthorized, url: server })
await xo.open()
try {
await xo.signIn({ token })
} catch (error) {
await xo.close()
throw error
}
return xo
}
@@ -157,32 +162,33 @@ function extractFlags(args) {
const noop = Function.prototype
function parseValue(value) {
if (value.startsWith('json:')) {
return JSON.parse(value.slice(5))
}
if (value === 'true') {
return true
}
if (value === 'false') {
return false
}
return value
}
const PARAM_RE = /^([^=]+)=([^]*)$/
function parseParameters(args) {
if (args[0] === '--') {
return args.slice(1).map(parseValue)
}
const params = {}
forEach(args, function (arg) {
let matches
if (!(matches = arg.match(PARAM_RE))) {
throw new Error('invalid arg: ' + arg)
}
const name = matches[1]
let value = matches[2]
if (value.startsWith('json:')) {
value = JSON.parse(value.slice(5))
}
if (name === '@') {
params['@'] = value
return
}
if (value === 'true') {
value = true
} else if (value === 'false') {
value = false
}
params[name] = value
})
return params


@@ -1,7 +1,7 @@
{
"private": false,
"name": "xo-cli",
"version": "0.22.0",
"version": "0.24.0",
"license": "AGPL-3.0-or-later",
"description": "Basic CLI for Xen-Orchestra",
"keywords": [
@@ -29,6 +29,7 @@
"node": ">=15.3"
},
"dependencies": {
"@vates/read-chunk": "^1.2.0",
"chalk": "^5.0.1",
"content-type": "^1.0.5",
"fs-extra": "^11.1.0",
@@ -41,6 +42,7 @@
"progress-stream": "^2.0.0",
"promise-toolbox": "^0.21.0",
"pw": "^0.0.4",
"split2": "^4.2.0",
"xdg-basedir": "^5.1.0",
"xo-lib": "^0.11.1"
},


@@ -2,10 +2,13 @@ import { basename, join } from 'node:path'
import { createWriteStream } from 'node:fs'
import { normalize } from 'node:path/posix'
import { parse as parseContentType } from 'content-type'
import { pipeline } from 'node:stream'
import { pipeline as pPipeline } from 'node:stream/promises'
import { readChunk } from '@vates/read-chunk'
import getopts from 'getopts'
import hrp from 'http-request-plus'
import merge from 'lodash/merge.js'
import split2 from 'split2'
import * as config from './config.mjs'
@@ -19,6 +22,8 @@ function addPrefix(suffix) {
return path
}
const noop = Function.prototype
function parseParams(args) {
const params = {}
for (const arg of args) {
@@ -60,7 +65,7 @@ const COMMANDS = {
const response = await this.exec(path, { query: parseParams(rest) })
if (output !== '') {
return pPipeline(
response,
output === '-'
? process.stdout
@@ -84,6 +89,13 @@ const COMMANDS = {
}
return this.json ? JSON.stringify(result, null, 2) : result
} else if (type === 'application/x-ndjson') {
const lines = pipeline(response, split2(), noop)
let line
while ((line = await readChunk(lines)) !== null) {
const data = JSON.parse(line)
console.log(this.json ? JSON.stringify(data, null, 2) : data)
}
} else {
throw new Error('unsupported content-type ' + type)
}
@@ -134,6 +146,12 @@ export async function rest(args) {
const { allowUnauthorized, server, token } = await config.load()
if (server === undefined) {
const errorMessage =
'Please use `xo-cli --register` to associate with an XO instance first.\n\nSee `xo-cli --help` for more info.'
throw errorMessage
}
const baseUrl = server
const baseOpts = {
headers: {


@@ -178,7 +178,7 @@ export default class PerformancePlan extends Plan {
const state = this._getThresholdState(exceededAverages)
if (
destinationAverages.cpu + vmAverages.cpu >= this._thresholds.cpu.low ||
destinationAverages.memoryFree - vmAverages.memory <= this._thresholds.cpu.high ||
destinationAverages.memoryFree - vmAverages.memory <= this._thresholds.memory.high ||
(!state.cpu &&
!state.memory &&
(exceededAverages.cpu - vmAverages.cpu < destinationAverages.cpu + vmAverages.cpu ||

View File

@@ -2,12 +2,17 @@
- [Authentication](#authentication)
- [Collections](#collections)
- [Task monitoring](#task-monitoring)
- [Properties update](#properties-update)
- [Collections](#collections-1)
- [VM destruction](#vm-destruction)
- [VM Export](#vm-export)
- [VM Import](#vm-import)
- [VDI destruction](#vdi-destruction)
- [VDI Export](#vdi-export)
- [VDI Import](#vdi-import)
- [Existing VDI](#existing-vdi)
- [New VDI](#new-vdi)
- [Actions](#actions)
- [Available actions](#available-actions)
- [Start an action](#start-an-action)
@@ -117,6 +122,16 @@ Content-Type: application/x-ndjson
{"name_label":"Debian 10 Cloudinit self-service","power_state":"Halted","url":"/rest/v0/vms/5019156b-f40d-bc57-835b-4a259b177be1"}
```
## Task monitoring
When fetching a task record, the special `wait` query string can be used. If its value is `result` it will wait for the task to be resolved (either success or failure) before returning, otherwise it will wait for the next change of state.
```sh
curl \
-b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
'https://xo.company.lan/rest/v0/tasks/0lr4zljbe?wait=result'
```
## Properties update
> This feature is restricted to `name_label`, `name_description` and `tags` at the moment.
@@ -303,9 +318,7 @@ curl \
'https://xo.company.lan/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac/actions/snapshot'
```
By default, actions are asynchronous and return the reference of the task associated with the request.
> Task monitoring is still under construction and will come in a future release :)
By default, actions are asynchronous and return the reference of the task associated with the request (see [_Task monitoring_](#task-monitoring)).
The `?sync` flag can be used to run the action synchronously without requiring task monitoring. The result of the action will be returned encoded as JSON:
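For instance, reusing the snapshot action from the example above, the request can be made synchronous by appending the flag (sketch; the token and VM UUID are the placeholders used throughout these docs):

```sh
curl \
  -X POST \
  -b authenticationToken=KQxQdm2vMiv7jBIK0hgkmgxKzemd8wSJ7ugFGKFkTbs \
  'https://xo.company.lan/rest/v0/vms/770aa52a-fd42-8faf-f167-8c5c4a237cac/actions/snapshot?sync'
```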

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "xo-server",
"version": "5.130.0",
"version": "5.132.0",
"license": "AGPL-3.0-or-later",
"description": "Server part of Xen-Orchestra",
"keywords": [
@@ -40,6 +40,7 @@
"@vates/parse-duration": "^0.1.1",
"@vates/predicates": "^1.1.0",
"@vates/read-chunk": "^1.2.0",
"@vates/task": "^0.2.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.44.3",
"@xen-orchestra/cron": "^1.0.6",
@@ -128,7 +129,7 @@
"unzipper": "^0.10.5",
"uuid": "^9.0.0",
"value-matcher": "^0.2.0",
"vhd-lib": "^4.8.0",
"vhd-lib": "^4.9.0",
"ws": "^8.2.3",
"xdg-basedir": "^5.1.0",
"xen-api": "^2.0.0",

View File

@@ -33,7 +33,6 @@ html
i.fa.fa-sign-in
| Sign in
else
div.mb-2
each label, id in strategies
div: a(href = 'signin/' + id).btn.btn-block.btn-primary.mb-1 Sign in with #{label}
form(action = 'signin/local' method = 'post')

View File

@@ -2,6 +2,7 @@
import filter from 'lodash/filter.js'
import find from 'lodash/find.js'
import { Task } from '@xen-orchestra/mixins/Tasks.mjs'
import { IPV4_CONFIG_MODES, IPV6_CONFIG_MODES } from '../xapi/index.mjs'
@@ -73,12 +74,34 @@ connect.resolve = {
// ===================================================================
// Reconfigure IP
export async function reconfigureIp({ pif, mode = 'DHCP', ip = '', netmask = '', gateway = '', dns = '' }) {
const xapi = this.getXapi(pif)
await xapi.call('PIF.reconfigure_ip', pif._xapiRef, mode, ip, netmask, gateway, dns)
if (pif.management) {
await xapi.call('host.management_reconfigure', pif._xapiRef)
}
export async function reconfigureIp({ pif, mode, ip = '', netmask = '', gateway = '', dns = '', ipv6, ipv6Mode }) {
const task = this.tasks.create({
name: `reconfigure ip of: ${pif.device}`,
objectId: pif.uuid,
type: 'xo:pif:reconfigureIp',
})
await task.run(async () => {
const xapi = this.getXapi(pif)
if ((ipv6 !== '' && pif.ipv6?.[0] !== ipv6) || (ipv6Mode !== undefined && ipv6Mode !== pif.ipv6Mode)) {
await Task.run(
{ properties: { name: 'reconfigure IPv6', mode: ipv6Mode, ipv6, gateway, dns, objectId: pif.uuid } },
() => xapi.call('PIF.reconfigure_ipv6', pif._xapiRef, ipv6Mode, ipv6, gateway, dns)
)
}
if (mode !== undefined && mode !== pif.mode) {
await Task.run(
{ properties: { name: 'reconfigure IPv4', mode, ip, netmask, gateway, dns, objectId: pif.uuid } },
() => xapi.call('PIF.reconfigure_ip', pif._xapiRef, mode, ip, netmask, gateway, dns)
)
}
if (pif.management) {
await Task.run({ properties: { name: 'reconfigure PIF management', objectId: pif.uuid } }, () =>
xapi.call('host.management_reconfigure', pif._xapiRef)
)
}
})
}
reconfigureIp.params = {
@@ -88,6 +111,8 @@ reconfigureIp.params = {
netmask: { type: 'string', minLength: 0, optional: true },
gateway: { type: 'string', minLength: 0, optional: true },
dns: { type: 'string', minLength: 0, optional: true },
ipv6: { type: 'string', minLength: 0, default: '' },
ipv6Mode: { enum: getIpv6ConfigurationModes(), optional: true },
}
reconfigureIp.resolve = {
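Given the new `ipv6` and `ipv6Mode` parameters declared above, an xo-cli invocation exercising the IPv6 path might look like this (sketch; the PIF UUID is a placeholder, `Static` is assumed to be one of the XAPI IPv6 configuration modes, and the address is in the CIDR form XAPI expects for `PIF.reconfigure_ipv6`):

```sh
xo-cli pif.reconfigureIp pif=<pif-uuid> ipv6Mode=Static ipv6=2001:db8::10/64 gateway=2001:db8::1
```

Note that the method only calls `PIF.reconfigure_ipv6` when the requested address or mode differs from the current one, so repeating the command is effectively a no-op.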

View File

@@ -774,6 +774,29 @@ set.resolve = {
// -------------------------------------------------------------------
export const setAndRestart = defer(async function ($defer, params) {
const vm = params.VM
const force = extract(params, 'force')
await stop.bind(this)({ vm, force })
$defer(start.bind(this), { vm, force })
return set.bind(this)(params)
})
setAndRestart.params = {
// Restart options
force: { type: 'boolean', optional: true },
// Set params
...set.params,
}
setAndRestart.resolve = set.resolve
// -------------------------------------------------------------------
export const restart = defer(async function ($defer, { vm, force = false, bypassBlockedOperation = force }) {
const xapi = this.getXapi(vm)
if (bypassBlockedOperation) {

View File

@@ -965,7 +965,9 @@ async function _importGlusterVM(xapi, template, lvmsrId) {
namespace: 'xosan',
version: template.version,
})
const newVM = await xapi.VM_import(templateStream, this.getObject(lvmsrId, 'SR')._xapiRef)
const newVM = await xapi._getOrWaitObject(
await xapi.VM_import(templateStream, this.getObject(lvmsrId, 'SR')._xapiRef)
)
await xapi.editVm(newVM, {
autoPoweron: true,
name_label: 'XOSAN imported VM',

View File

@@ -116,8 +116,7 @@ export default class Redis extends Collection {
return Promise.all(
map(ids, id => {
return this.#get(prefix + id).then(model => {
// If empty, consider it a no match.
if (isEmpty(model)) {
if (model === undefined) {
return
}
@@ -197,12 +196,21 @@ export default class Redis extends Collection {
)
}
/**
* Fetches the record in the database
*
* Returns undefined if not present.
*/
async #get(key) {
const { redis } = this
let model
try {
model = await redis.get(key).then(JSON.parse)
const json = await redis.get(key)
if (json !== null) {
model = JSON.parse(json)
}
} catch (error) {
if (!error.message.startsWith('WRONGTYPE')) {
throw error

View File

@@ -562,15 +562,18 @@ const TRANSFORMS = {
disallowUnplug: Boolean(obj.disallow_unplug),
gateway: obj.gateway,
ip: obj.IP,
ipv6: obj.IPv6,
mac: obj.MAC,
management: Boolean(obj.management), // TODO: find a better name.
carrier: Boolean(metrics && metrics.carrier),
mode: obj.ip_configuration_mode,
ipv6Mode: obj.ipv6_configuration_mode,
mtu: +obj.MTU,
netmask: obj.netmask,
// A non physical PIF is a "copy" of an existing physical PIF (same device)
// A physical PIF cannot be unplugged
physical: Boolean(obj.physical),
primaryAddressType: obj.primary_address_type,
vlan: +obj.VLAN,
speed: metrics && +metrics.speed,
$host: link(obj, 'host'),

View File

@@ -165,14 +165,16 @@ export default class Xapi extends XapiBase {
async emergencyShutdownHost(hostId) {
const host = this.getObject(hostId)
const vms = host.$resident_VMs
log.debug(`Emergency shutdown: ${host.name_label}`)
await asyncMap(vms, vm => {
await this.call('host.disable', host.$ref)
await asyncMap(host.$resident_VMs, vm => {
if (!vm.is_control_domain) {
return ignoreErrors.call(this.callAsync('VM.suspend', vm.$ref))
}
})
await this.call('host.disable', host.$ref)
await this.callAsync('host.shutdown', host.$ref)
}

View File

@@ -251,19 +251,9 @@ export default class Api {
constructor(app) {
this._logger = null
this._methods = { __proto__: null }
this._app = app
const seq = async methods => {
for (const method of methods) {
await this.#callApiMethod(method[0], method[1])
}
}
seq.validate = ajv.compile({ type: 'array', minLength: 1, items: { type: ['array', 'string'] } })
this._methods = { __proto__: null, seq }
this.addApiMethods(methods)
app.hooks.on('start', async () => {
this._logger = await app.getLogger('api')
@@ -377,7 +367,8 @@ export default class Api {
}
async callApiMethod(connection, name, params = {}) {
if (!Object.hasOwn(this._methods, name)) {
const method = this._methods[name]
if (!method) {
throw new MethodNotFound(name)
}
@@ -392,12 +383,11 @@ export default class Api {
apiContext.permission = 'none'
}
return this.#apiContext.run(apiContext, () => this.#callApiMethod(name, params))
return this.#apiContext.run(apiContext, () => this.#callApiMethod(name, method, params))
}
async #callApiMethod(name, params) {
async #callApiMethod(name, method, params) {
const app = this._app
const method = this._methods[name]
const startTime = Date.now()
const { connection, user } = this.apiContext

View File

@@ -31,6 +31,7 @@ const AUTHORIZATIONS = {
XVA: STARTER, // @todo handleExport in xen-orchestra/packages/xo-server/src/api/vm.mjs
},
LIST_MISSING_PATCHES: STARTER,
POOL_EMERGENCY_SHUTDOWN: ENTERPRISE,
ROLLING_POOL_UPDATE: ENTERPRISE,
}

View File

@@ -5,7 +5,7 @@ import { createLogger } from '@xen-orchestra/log'
import { createRunner } from '@xen-orchestra/backups/Backup.mjs'
import { parseMetadataBackupId } from '@xen-orchestra/backups/parseMetadataBackupId.mjs'
import { RestoreMetadataBackup } from '@xen-orchestra/backups/RestoreMetadataBackup.mjs'
import { Task } from '@xen-orchestra/backups/Task.mjs'
import { Task } from '@vates/task'
import { debounceWithKey, REMOVE_CACHE_ENTRY } from '../_pDebounceWithKey.mjs'
import { handleBackupLog } from '../_handleBackupLog.mjs'
@@ -124,8 +124,8 @@ export default class metadataBackup {
const localTaskIds = { __proto__: null }
return Task.run(
{
name: 'backup run',
onLog: log =>
properties: { name: 'backup run' },
onProgress: log =>
handleBackupLog(log, {
localTaskIds,
logger,

View File

@@ -271,13 +271,15 @@ export default class Proxy {
[namespace]: { xva },
} = await app.getResourceCatalog()
const xapi = app.getXapi(srId)
const vm = await xapi.VM_import(
await app.requestResource({
id: xva.id,
namespace,
version: xva.version,
}),
srId && this.getObject(srId, 'SR')._xapiRef
const vm = await xapi.getOrWaitObject(
await xapi.VM_import(
await app.requestResource({
id: xva.id,
namespace,
version: xva.version,
}),
srId && app.getObject(srId, 'SR')._xapiRef
)
)
$defer.onFailure(() => xapi.VM_destroy(vm.$ref))

View File

@@ -230,6 +230,11 @@ export default class RestApi {
collections.pools.actions = {
__proto__: null,
emergency_shutdown: async ({ xapiObject }) => {
await app.checkFeatureAuthorization('POOL_EMERGENCY_SHUTDOWN')
await xapiObject.$xapi.pool_emergencyShutdown()
},
rolling_update: async ({ xoObject }) => {
await app.checkFeatureAuthorization('ROLLING_POOL_UPDATE')

View File

@@ -26,7 +26,7 @@
"pako": "^2.0.4",
"promise-toolbox": "^0.21.0",
"tar-stream": "^3.1.6",
"vhd-lib": "^4.8.0",
"vhd-lib": "^4.9.0",
"xml2js": "^0.4.23"
},
"devDependencies": {

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "xo-web",
"version": "5.130.1",
"version": "5.133.0",
"license": "AGPL-3.0-or-later",
"description": "Web interface client for Xen-Orchestra",
"keywords": [
@@ -120,6 +120,7 @@
"readable-stream": "^3.0.2",
"redux": "^4.0.0",
"redux-thunk": "^2.0.1",
"relative-luminance": "^2.0.1",
"reselect": "^2.5.4",
"rimraf": "^5.0.1",
"sass": "^1.38.1",

View File

@@ -1541,9 +1541,6 @@ export default {
// Original text: 'Invalid parameters'
configIpErrorTitle: undefined,
// Original text: 'IP address and netmask required'
configIpErrorMessage: undefined,
// Original text: 'Static IP address'
staticIp: undefined,

View File

@@ -1596,9 +1596,6 @@ export default {
// Original text: "Invalid parameters"
configIpErrorTitle: 'Paramètres invalides',
// Original text: "IP address and netmask required"
configIpErrorMessage: 'Adresse IP et masque de réseau requis',
// Original text: "Static IP address"
staticIp: 'Adresse IP statique',

View File

@@ -1295,9 +1295,6 @@ export default {
// Original text: 'Invalid parameters'
configIpErrorTitle: undefined,
// Original text: 'IP address and netmask required'
configIpErrorMessage: undefined,
// Original text: 'Static IP address'
staticIp: undefined,

View File

@@ -1491,9 +1491,6 @@ export default {
// Original text: "Invalid parameters"
configIpErrorTitle: 'Invalid parameters',
// Original text: "IP address and netmask required"
configIpErrorMessage: 'IP cím and netmask required',
// Original text: "Static IP address"
staticIp: 'Static IP cím',

View File

@@ -2387,9 +2387,6 @@ export default {
// Original text: 'Invalid parameters'
configIpErrorTitle: 'Parametri non validi',
// Original text: 'IP address and netmask required'
configIpErrorMessage: 'Indirizzo IP e maschera di rete richiesti',
// Original text: 'Static IP address'
staticIp: 'Indirizzo IP statico',

View File

@@ -1298,9 +1298,6 @@ export default {
// Original text: 'Invalid parameters'
configIpErrorTitle: undefined,
// Original text: 'IP address and netmask required'
configIpErrorMessage: undefined,
// Original text: 'Static IP address'
staticIp: undefined,

Some files were not shown because too many files have changed in this diff.