Compare commits

..

79 Commits

Author SHA1 Message Date
Mohamedox
301f2f5d72 update changelog 2019-11-29 14:45:45 +01:00
Mohamedox
a3be8dc6fa fix(xo-web/host): netdata host url 2019-11-29 14:41:28 +01:00
Julien Fontanet
06f596adc6 feat(xo-web): 5.53.1 2019-11-29 13:53:42 +01:00
HamadaBrest
1f3b54e0c4 fix(xo-web/host): recheck telemetry state after install (#4686) 2019-11-29 13:52:27 +01:00
Rajaa.BARHTAOUI
2ddfbe8566 feat: technical release (#4685) 2019-11-28 16:58:30 +01:00
HamadaBrest
c61a118e4f feat(xo-web/host): advanced live telemetry (#4680) 2019-11-28 14:52:01 +01:00
Nicolas Raynaud
d69e61a634 feat(xo-web,xo-server): ability to import a VHD/VMDK disk (#4138) 2019-11-28 11:35:31 +01:00
Julien Fontanet
14f0cbaec6 fix(docs/from sources): no need to configure / mount point 2019-11-27 20:25:00 +01:00
Bill Gertz
b313eb14ee fix(docs/configuration): host → hostname (#4681)
The attribute `hostname` is incorrectly documented as `host`. Updated all occurrences of attribute `host` to `hostname`.
2019-11-26 14:38:33 +01:00
Julien Fontanet
7b47e40244 fix(docs/from sources): dont use .xo-server.toml 2019-11-26 14:12:10 +01:00
Julien Fontanet
b52204817d feat(xo-server): configurable sign in page (#4678)
See xoa-support#1940
2019-11-26 13:27:13 +01:00
Julien Fontanet
377552103e fix(xo-server/config): maxMergedDeltasPerRun position
Introduced by 688b65ccd
2019-11-26 11:55:25 +01:00
Julien Fontanet
688b65ccde feat(xo-server/backups-ng): limit number of gc-ed deltas (#4674)
Merging multiple VHDs is currently a slow process which could be optimized by doing a single merge of a synthetic delta.

Until this is implemented, the number of garbage-collected deltas should be limited to avoid taking too much time (and possibly interrupting jobs) in case of an important retention change.
2019-11-26 11:29:55 +01:00
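The idea behind this commit can be sketched as follows: cap how many obsolete deltas a single garbage-collection run merges, deferring the rest to later runs. The real xo-server setting is `maxMergedDeltasPerRun`; the helper name below is illustrative, not the actual implementation.

```javascript
// Hypothetical sketch: split the obsolete deltas into the batch merged this
// run and the remainder handled by a subsequent run.
function planDeltaMerges(obsoleteDeltas, maxMergedDeltasPerRun) {
  return {
    toMerge: obsoleteDeltas.slice(0, maxMergedDeltasPerRun), // merged this run
    deferred: obsoleteDeltas.slice(maxMergedDeltasPerRun), // handled later
  }
}

const { toMerge, deferred } = planDeltaMerges(['d1', 'd2', 'd3', 'd4', 'd5'], 2)
console.log(toMerge.length, deferred.length) // 2 3
```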
badrAZ
6cb4faf33d feat(xo-web/log): add log schedule to bug report (#4676) 2019-11-26 11:23:24 +01:00
badrAZ
78b83bb901 feat(xo-server/metadata-backups): clear backup listing cache on change (#4672)
See 471f39741
2019-11-26 10:34:26 +01:00
badrAZ
9ff6f60b66 fix(xo-server/metadata-backups): add 10m timeout to avoid stuck jobs (#4666)
See #4657
2019-11-25 16:26:11 +01:00
Julien Fontanet
624e10ed15 feat(xo-server-auth-saml): disableRequestedAuthnContext (#4675)
Fixes xoa-support#1940
2019-11-25 15:05:48 +01:00
badrAZ
19e10bbb53 feat(backup): report recipients configurable in backup settings (#4646)
Fixes #4581
2019-11-25 14:49:17 +01:00
Julien Fontanet
cca945e05b fix(xo-server): remove overzealous changes
Bug introduced in 21901f2a7

As usual, thanks @Danp2 for your report :-)
2019-11-25 14:10:12 +01:00
Julien Fontanet
21901f2a75 chore: dont wrap unnecessary with fromCallback 2019-11-25 12:34:21 +01:00
Julien Fontanet
ef7f943eee chore(xo-server/index): use Array#forEach lodash.forOwn and ensureArray 2019-11-25 12:10:52 +01:00
Julien Fontanet
ec1062f9f2 chore(xo-server-auth-ldap): centralize default settings 2019-11-25 09:25:05 +01:00
Pierre Donias
2f67ed3138 fix(fs/Local#getInfo): remove NaN values (#4671)
Introduced by e34a0a6e33

NaNs were turned into nulls when sent to the client, which made
human-format throw a "value must be a number" error.
2019-11-22 17:16:23 +01:00
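Background for this fix (a sketch, not the actual `fs/Local#getInfo` code): JSON has no representation for NaN, so `JSON.stringify` emits `null`, which is what the client received and then fed to human-format.

```javascript
// JSON has no NaN: JSON.stringify turns it into null.
const info = { used: 1024, available: NaN }
console.log(JSON.stringify(info)) // {"used":1024,"available":null}

// minimal sketch of the fix: drop non-finite numeric values before sending
const cleaned = Object.fromEntries(
  Object.entries(info).filter(
    ([, value]) => typeof value !== 'number' || Number.isFinite(value)
  )
)
console.log(JSON.stringify(cleaned)) // {"used":1024}
```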
Rajaa.BARHTAOUI
ce912db30e feat(CHANGELOG): 5.40.2 (#4669) 2019-11-22 11:40:40 +01:00
Julien Fontanet
41d790f346 feat(xo-server): 5.52.1 2019-11-22 11:33:38 +01:00
Julien Fontanet
bf426e15ec fix(xo-server/proxies): fix WS error handling
Fixes #4670

Introduced by 56d63b10e4
2019-11-22 11:22:12 +01:00
badrAZ
e4403baeb9 fix(xo-server/backup-ng): fix pool metadata backups not debounced (#4668) 2019-11-22 10:04:13 +01:00
Julien Fontanet
61101b00a1 feat(xo-server-auth-ldap): add group filter example for AD 2019-11-20 21:37:22 +01:00
Julien Fontanet
69f8ffcfeb feat(xen-api/examples/import-vdi): guess VHD size if necessary 2019-11-20 12:12:12 +01:00
Rajaa.BARHTAOUI
6b8042291c feat: technical release (#4667) 2019-11-20 12:04:53 +01:00
Julien Fontanet
ffc0c83b50 fix: revert http-request-plus to 0.8
The new version breaks some features like VDI import.

This will be fixed directly in `http-request-plus`, but in the meantime 0.8 will be used.

Fixes #4640
2019-11-19 18:15:23 +01:00
badrAZ
8ccd4c269a feat(xo-web/logs): open bug even if support panel doesnt respond (#4654) 2019-11-19 16:10:29 +01:00
Julien Fontanet
934ec86f93 feat(vhd-lib/_readChunk): explicit error when read too much 2019-11-19 15:51:21 +01:00
Julien Fontanet
23be38b5fa chore(vhd-lib/_readChunk): dont clean twice on end 2019-11-19 15:49:56 +01:00
Rajaa.BARHTAOUI
fe7f74e46b feat(xo-web): store SortedTable param in URL (#4637)
Fixes #4542
2019-11-19 15:25:57 +01:00
Pierre Donias
a3fd78f8e2 fix(doc): web hooks menu link (#4665) 2019-11-19 10:30:44 +01:00
Pierre Donias
137bad6f7b feat(xo-server-web-hooks): web hooks on XO API calls (#3155)
Fixes #1946
2019-11-15 16:17:17 +01:00
Julien Fontanet
17df6fc764 fix(xen-api): Node warning promise rejected with non-error (#4659)
Not a real problem, but it triggered a lot of warnings, which could be very verbose in application logs.
2019-11-15 15:42:03 +01:00
badrAZ
2e51c8a124 fix(xo-server/backup-ng): handle timeouts larger than 596h (#4663)
Fixes #4662
2019-11-15 15:06:57 +01:00
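The 596-hour limit comes from Node's timer implementation: `setTimeout` stores its delay as a signed 32-bit integer, i.e. at most 2^31 - 1 ms (about 596.5 hours); larger values overflow and the callback fires almost immediately. A common workaround, shown here as a sketch (the actual xo-server fix may differ), is to chain shorter timeouts:

```javascript
// 2**31 - 1 ms is the largest delay Node's setTimeout handles correctly;
// that is roughly 596.5 hours, hence the limit mentioned in the commit.
const MAX_DELAY = 2 ** 31 - 1 // 2147483647 ms

// sketch: chain timeouts until the full requested delay has elapsed
function setLongTimeout(fn, delay) {
  return delay <= MAX_DELAY
    ? setTimeout(fn, delay)
    : setTimeout(() => setLongTimeout(fn, delay - MAX_DELAY), MAX_DELAY)
}

console.log(Math.floor(MAX_DELAY / 3600000)) // 596 (hours)
```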
Julien Fontanet
5588a46366 feat(xo-server/proxies): use proxy config (#4655) 2019-11-15 14:44:03 +01:00
Julien Fontanet
a8122f9add feat(normalize-packages): replace vars in READMEs 2019-11-15 11:25:51 +01:00
Julien Fontanet
5568be91d2 chore: format with Prettier
Related to 2ec964178
2019-11-15 11:05:30 +01:00
Julien Fontanet
a04bd6f93c fix(docs/configuration): convert redirectToHttps to TOML 2019-11-14 15:32:24 +01:00
Julien Fontanet
56d63b10e4 fix(xo-server/proxies): correctly handle errors 2019-11-14 15:31:55 +01:00
Julien Fontanet
2c97643b10 feat(xo-server/Xapi#_assertHealthyVdiChains): configurable tolerance (#4651)
Fixes #4124

Related to xoa-support#1921

For the moment it is only configurable globally (not per backup job), via the `xapiOptions.maxUncoalescedVdis` setting.
2019-11-14 14:34:32 +01:00
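The setting goes under `xapiOptions` in the xo-server configuration file (TOML); a sketch with an illustrative value, the exact key placement and a suitable threshold depend on your install:

```toml
# xo-server configuration (global only, not per backup job)
[xapiOptions]
# tolerated length of uncoalesced VDI chains before a chain is
# considered unhealthy; the value below is illustrative
maxUncoalescedVdis = 10
```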
marcpezin
679f403648 feat(docs/xoa): UI support page (#4650) 2019-11-14 13:10:00 +01:00
Julien Fontanet
d482c707f6 chore(docs/xoa): various improvements 2019-11-14 12:30:47 +01:00
Julien Fontanet
2ec9641783 chore: update dependencies 2019-11-13 17:15:42 +01:00
badrAZ
dab1788a3b feat(logs/backup-ng): support ticket with attached log (#4201)
Related to support-panel#24
2019-11-13 15:57:52 +01:00
Julien Fontanet
47bb79cce1 feat(server/Xo): remove #httpProxy in favor of `#hasOwnHttpProxy` 2019-11-12 16:51:50 +01:00
badrAZ
41dbc20be9 fix(xo-server/plugins): try empty config if none provided (#4647)
Required by #4581

Plugins with optional configuration should be loadable without explicit user configuration.
2019-11-08 15:27:22 +01:00
Julien Fontanet
10a631ec96 fix(log/README): catchGlobalErrors expects a logger 2019-11-08 11:07:42 +01:00
Julien Fontanet
830e5aed96 feat(log/README): { createLogger }
Related to 0b90befda
2019-11-08 09:58:15 +01:00
Julien Fontanet
7db573885b fix(cron/tests): mock Date.now() (#4628)
Related to changes in #4623 (and 07526efe6).
2019-11-07 11:37:53 +01:00
Julien Fontanet
a74d56ebc6 feat: always specify required Node version 2019-11-07 10:18:05 +01:00
Julien Fontanet
ff7d84297e fix(server/plugins): prevent duplicate loading (#4645)
Related to xoa-support#1905
2019-11-06 15:15:13 +01:00
Julien Fontanet
3a76509fe9 chore(xo-server/xapi): use promise-toolbox/retry (#4635)
Instead of custom implementation.
2019-11-06 10:23:06 +01:00
Julien Fontanet
ac4de9ab0f chore(backups-cli): move clean-vms in dedicated module 2019-11-05 15:29:49 +01:00
badrAZ
471f397418 feat(xo-server/backup-ng): clear "_listVmBackupsOnRemote" cache on change (#4580)
See #4539

This PR makes it possible to clear the backup list cache via the API, and it ensures the cache is cleared after a backup is created or deleted on a remote.
2019-11-05 09:44:50 +01:00
badrAZ
73bbdf6d4e fix(xo-web): missing "noopener noreferrer" (#4643)
Anchors with `target='_blank'` need `rel='noopener noreferrer'`
2019-11-04 15:24:01 +01:00
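The reason: without `rel="noopener"`, a page opened via `target='_blank'` receives a `window.opener` handle back to the opener and can navigate it (reverse tabnabbing); `noreferrer` additionally suppresses the Referer header. A minimal illustrative helper, not the actual xo-web code:

```javascript
// External links opened in a new tab should always carry
// rel="noopener noreferrer" (helper name is hypothetical).
const externalLink = (href, text) =>
  `<a href="${href}" target="_blank" rel="noopener noreferrer">${text}</a>`

console.log(externalLink('https://xen-orchestra.com', 'docs'))
```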
Julien Fontanet
7f26aea585 feat(backups-cli): dont report compressed XVAs as broken (#4642)
Detect compressed XVAs and don't try to validate them with the tar heuristic.
2019-11-04 14:33:50 +01:00
Julien Fontanet
1c767b709f chore(xo-server/setUpApi): fix void lint 2019-10-31 16:45:17 +01:00
badrAZ
0ced82c885 fix(xo-web/sorted-table): missing "noopener noreferrer" (#4636) 2019-10-31 11:05:50 +01:00
BenjiReis
21dd195b0d fix(xo-web): prevent private network creation on bond slave PIFs (#4633)
Fixes xcp-ng/xcp#300
2019-10-30 17:17:18 +01:00
badrAZ
6aa6cfba8e fix(xo-server): failed metadata backup reported as successful (#4598)
Fixes #4596
2019-10-30 17:08:56 +01:00
Julien Fontanet
fd7d52d38b chore: update dependencies 2019-10-30 14:06:50 +01:00
Julien Fontanet
a47bb14364 chore(vhd-lib): prefix private methods with _ (#4621)
So they are clearly identified, which will help us with a future refactoring.
2019-10-30 10:18:29 +01:00
Julien Fontanet
d6e6fa5735 chore: use Function#bind instead of lodash/bind (#4602) (#4614)
Support is good enough: https://kangax.github.io/compat-table/es5/#test-Function.prototype.bind
2019-10-30 09:50:15 +01:00
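The compat-table link backs the claim: `Function.prototype.bind` is part of ES5 and supported everywhere XO runs, so the lodash wrapper adds nothing. A minimal before/after sketch:

```javascript
// Partial application with native Function#bind, replacing lodash/bind:
function greet(greeting, name) {
  return `${greeting}, ${name}!`
}

// before (lodash): const hello = bind(greet, null, 'Hello')
// after (native):
const hello = greet.bind(null, 'Hello')
console.log(hello('world')) // Hello, world!
```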
Rajaa.BARHTAOUI
46da11a52e chore(CHANGELOG): 5.40.1 (#4632) 2019-10-29 15:24:05 +01:00
Pierre Donias
68e3dc21e4 fix(xo-web): update checks for Cloud plugin (#4631)
See #4615
Introduced by fd06374365

- Everywhere we checked whether the Cloud plugin was installed, we now need to
  check whether the XOA plugin is installed
- Update the warning messages mentioning the Cloud plugin
2019-10-29 14:25:49 +01:00
Julien Fontanet
7232cc45b4 feat(scripts/bump-pkg): dont create version tags 2019-10-29 13:59:27 +01:00
Julien Fontanet
be5a297248 feat(xo-server/Xo#httpProxy): expose 2019-10-29 12:01:16 +01:00
Rajaa.BARHTAOUI
257031b1bc chore(CHANGELOG): 5.40.0 (#4630) 2019-10-29 10:36:25 +01:00
Julien Fontanet
c9db9fa17a fix(vhd-lib/Vhd#readHeaderAndFooter): always throw AssertionErrors 2019-10-28 15:58:22 +01:00
Julien Fontanet
13f961a422 feat(server): new setHttpProxy internal method
Related to xoa#38
2019-10-28 14:20:07 +01:00
Rajaa.BARHTAOUI
3b38e0c4e1 chore: patch release (#4629) 2019-10-25 16:40:36 +02:00
Julien Fontanet
07526efe61 fix(cron): dont forget to compute next date
Introduced by #4626
2019-10-25 15:49:05 +02:00
badrAZ
8753c02adb chore: technical release (#4627) 2019-10-25 15:47:40 +02:00
Julien Fontanet
6a0bbfa447 chore(log): always build on install
Necessary because used by other modules during their tests.
2019-10-25 13:29:21 +02:00
204 changed files with 4537 additions and 2821 deletions


@@ -8,5 +8,8 @@
"directory": "@xen-orchestra/babel-config",
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"engines": {
"node": ">=6"
}
}


@@ -0,0 +1,32 @@
const getopts = require('getopts')
const { version } = require('./package.json')
module.exports = commands =>
async function(args, prefix) {
const opts = getopts(args, {
alias: {
help: 'h',
},
boolean: ['help'],
stopEarly: true,
})
const commandName = opts.help || args.length === 0 ? 'help' : args[0]
const command = commands[commandName]
if (command === undefined) {
process.stdout.write(`Usage:
${Object.keys(commands)
.filter(command => command !== 'help')
.map(command => ` ${prefix} ${command} ${commands[command].usage || ''}`)
.join('\n\n')}
xo-backups v${version}
`)
process.exitCode = commandName === 'help' ? 0 : 1
return
}
return command.main(args.slice(1), prefix + ' ' + commandName)
}


@@ -0,0 +1,393 @@
#!/usr/bin/env node
// assigned when options are parsed by the main function
let force
// -----------------------------------------------------------------------------
const assert = require('assert')
const getopts = require('getopts')
const lockfile = require('proper-lockfile')
const { default: Vhd } = require('vhd-lib')
const { curryRight, flatten } = require('lodash')
const { dirname, resolve } = require('path')
const { DISK_TYPE_DIFFERENCING } = require('vhd-lib/dist/_constants')
const { pipe, promisifyAll } = require('promise-toolbox')
const fs = promisifyAll(require('fs'))
const handler = require('@xen-orchestra/fs').getHandler({ url: 'file://' })
// -----------------------------------------------------------------------------
const asyncMap = curryRight((iterable, fn) =>
Promise.all(
Array.isArray(iterable) ? iterable.map(fn) : Array.from(iterable, fn)
)
)
const filter = (...args) => thisArg => thisArg.filter(...args)
const isGzipFile = async fd => {
// https://tools.ietf.org/html/rfc1952.html#page-5
const magicNumber = Buffer.allocUnsafe(2)
assert.strictEqual(
await fs.read(fd, magicNumber, 0, magicNumber.length, 0),
magicNumber.length
)
return magicNumber[0] === 31 && magicNumber[1] === 139
}
// TODO: better check?
//
// our heuristic is not good enough, there have been some false positives
// (detected as invalid by us but valid by `tar` and imported with success),
// either (though they may have been compressed files):
// - these files were normal but the check is incorrect
// - these files were invalid but without data loss
// - these files were invalid but with silent data loss
//
// maybe reading the end of the file looking for a file named
// /^Ref:\d+/\d+\.checksum$/ and then validating the tar structure from it
//
// https://github.com/npm/node-tar/issues/234#issuecomment-538190295
const isValidTar = async (size, fd) => {
if (size <= 1024 || size % 512 !== 0) {
return false
}
const buf = Buffer.allocUnsafe(1024)
assert.strictEqual(
await fs.read(fd, buf, 0, buf.length, size - buf.length),
buf.length
)
return buf.every(_ => _ === 0)
}
// TODO: find a heuristic for compressed files
const isValidXva = async path => {
try {
const fd = await fs.open(path, 'r')
try {
const { size } = await fs.fstat(fd)
if (size < 20) {
// neither a valid gzip nor a valid tar
return false
}
return (await isGzipFile(fd))
? true // gzip files cannot be validated at this time
: await isValidTar(size, fd)
} finally {
fs.close(fd).catch(noop)
}
} catch (error) {
// never throw, log and report as valid to avoid side effects
console.error('isValidXva', path, error)
return true
}
}
const noop = Function.prototype
const readDir = path =>
fs.readdir(path).then(
entries => {
entries.forEach((entry, i) => {
entries[i] = `${path}/${entry}`
})
return entries
},
error => {
// a missing dir is by definition empty
if (error != null && error.code === 'ENOENT') {
return []
}
throw error
}
)
// -----------------------------------------------------------------------------
// chain is an array of VHDs from child to parent
//
// the whole chain will be merged into parent, parent will be renamed to child
// and all the others will be deleted
async function mergeVhdChain(chain) {
assert(chain.length >= 2)
const child = chain[0]
const parent = chain[chain.length - 1]
const children = chain.slice(0, -1).reverse()
console.warn('Unused parents of VHD', child)
chain
.slice(1)
.reverse()
.forEach(parent => {
console.warn(' ', parent)
})
force && console.warn(' merging…')
console.warn('')
if (force) {
// `mergeVhd` does not work with a stream, either
// - make it accept a stream
// - or create synthetic VHD which is not a stream
return console.warn('TODO: implement merge')
// await mergeVhd(
// handler,
// parent,
// handler,
// children.length === 1
// ? child
// : await createSyntheticStream(handler, children)
// )
}
await Promise.all([
force && fs.rename(parent, child),
asyncMap(children.slice(0, -1), child => {
console.warn('Unused VHD', child)
force && console.warn(' deleting…')
console.warn('')
return force && handler.unlink(child)
}),
])
}
const listVhds = pipe([
vmDir => vmDir + '/vdis',
readDir,
asyncMap(readDir),
flatten,
asyncMap(readDir),
flatten,
filter(_ => _.endsWith('.vhd')),
])
async function handleVm(vmDir) {
const vhds = new Set()
const vhdParents = { __proto__: null }
const vhdChildren = { __proto__: null }
// remove broken VHDs
await asyncMap(await listVhds(vmDir), async path => {
try {
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
vhds.add(path)
if (vhd.footer.diskType === DISK_TYPE_DIFFERENCING) {
const parent = resolve(dirname(path), vhd.header.parentUnicodeName)
vhdParents[path] = parent
if (parent in vhdChildren) {
const error = new Error(
'this script does not support multiple VHD children'
)
error.parent = parent
error.child1 = vhdChildren[parent]
error.child2 = path
throw error // should we throw?
}
vhdChildren[parent] = path
}
} catch (error) {
console.warn('Error while checking VHD', path)
console.warn(' ', error)
if (error != null && error.code === 'ERR_ASSERTION') {
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(path))
}
}
})
// remove VHDs with missing ancestors
{
const deletions = []
// return true if the VHD has been deleted or is missing
const deleteIfOrphan = vhd => {
const parent = vhdParents[vhd]
if (parent === undefined) {
return
}
// no longer needs to be checked
delete vhdParents[vhd]
deleteIfOrphan(parent)
if (!vhds.has(parent)) {
vhds.delete(vhd)
console.warn('Error while checking VHD', vhd)
console.warn(' missing parent', parent)
force && console.warn(' deleting…')
console.warn('')
force && deletions.push(handler.unlink(vhd))
}
}
// > A property that is deleted before it has been visited will not be
// > visited later.
// >
// > -- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in#Deleted_added_or_modified_properties
for (const child in vhdParents) {
deleteIfOrphan(child)
}
await Promise.all(deletions)
}
const [jsons, xvas] = await readDir(vmDir).then(entries => [
entries.filter(_ => _.endsWith('.json')),
new Set(entries.filter(_ => _.endsWith('.xva'))),
])
await asyncMap(xvas, async path => {
// check is not good enough to delete the file, the best we can do is report
// it
if (!(await isValidXva(path))) {
console.warn('Potential broken XVA', path)
console.warn('')
}
})
const unusedVhds = new Set(vhds)
const unusedXvas = new Set(xvas)
// compile the list of unused XVAs and VHDs, and remove backup metadata which
// reference a missing XVA/VHD
await asyncMap(jsons, async json => {
const metadata = JSON.parse(await fs.readFile(json))
const { mode } = metadata
if (mode === 'full') {
const linkedXva = resolve(vmDir, metadata.xva)
if (xvas.has(linkedXva)) {
unusedXvas.delete(linkedXva)
} else {
console.warn('Error while checking backup', json)
console.warn(' missing file', linkedXva)
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(json))
}
} else if (mode === 'delta') {
const linkedVhds = (() => {
const { vhds } = metadata
return Object.keys(vhds).map(key => resolve(vmDir, vhds[key]))
})()
// FIXME: find better approach by keeping as much of the backup as
// possible (existing disks) even if one disk is missing
if (linkedVhds.every(_ => vhds.has(_))) {
linkedVhds.forEach(_ => unusedVhds.delete(_))
} else {
console.warn('Error while checking backup', json)
const missingVhds = linkedVhds.filter(_ => !vhds.has(_))
console.warn(
' %i/%i missing VHDs',
missingVhds.length,
linkedVhds.length
)
missingVhds.forEach(vhd => {
console.warn(' ', vhd)
})
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(json))
}
}
})
// TODO: parallelize by vm/job/vdi
const unusedVhdsDeletion = []
{
// VHD chains (as list from child to ancestor) to merge indexed by last
// ancestor
const vhdChainsToMerge = { __proto__: null }
const toCheck = new Set(unusedVhds)
const getUsedChildChainOrDelete = vhd => {
if (vhd in vhdChainsToMerge) {
const chain = vhdChainsToMerge[vhd]
delete vhdChainsToMerge[vhd]
return chain
}
if (!unusedVhds.has(vhd)) {
return [vhd]
}
// no longer needs to be checked
toCheck.delete(vhd)
const child = vhdChildren[vhd]
if (child !== undefined) {
const chain = getUsedChildChainOrDelete(child)
if (chain !== undefined) {
chain.push(vhd)
return chain
}
}
console.warn('Unused VHD', vhd)
force && console.warn(' deleting…')
console.warn('')
force && unusedVhdsDeletion.push(handler.unlink(vhd))
}
toCheck.forEach(vhd => {
vhdChainsToMerge[vhd] = getUsedChildChainOrDelete(vhd)
})
Object.keys(vhdChainsToMerge).forEach(key => {
const chain = vhdChainsToMerge[key]
if (chain !== undefined) {
unusedVhdsDeletion.push(mergeVhdChain(chain))
}
})
}
await Promise.all([
unusedVhdsDeletion,
asyncMap(unusedXvas, path => {
console.warn('Unused XVA', path)
force && console.warn(' deleting…')
console.warn('')
return force && handler.unlink(path)
}),
])
}
// -----------------------------------------------------------------------------
module.exports = async function main(args) {
const opts = getopts(args, {
alias: {
force: 'f',
},
boolean: ['force'],
default: {
force: false,
},
})
;({ force } = opts)
await asyncMap(opts._, async vmDir => {
vmDir = resolve(vmDir)
// TODO: implement this in `xo-server`, not easy because not compatible with
// `@xen-orchestra/fs`.
const release = await lockfile.lock(vmDir)
try {
await handleVm(vmDir)
} catch (error) {
console.error('handleVm', vmDir, error)
} finally {
await release()
}
})
}


@@ -1,378 +1,13 @@
#!/usr/bin/env node
const args = process.argv.slice(2)
if (
args.length === 0 ||
/^(?:-h|--help)$/.test(args[0]) ||
args[0] !== 'clean-vms'
) {
console.log('Usage: xo-backups clean-vms [--force] xo-vm-backups/*')
// eslint-disable-next-line no-process-exit
return process.exit(1)
}
// remove `clean-vms` arg which is the only available command ATM
args.splice(0, 1)
// only act (ie delete files) if `--force` is present
const force = args[0] === '--force'
if (force) {
args.splice(0, 1)
}
// -----------------------------------------------------------------------------
const assert = require('assert')
const lockfile = require('proper-lockfile')
const { default: Vhd } = require('vhd-lib')
const { curryRight, flatten } = require('lodash')
const { dirname, resolve } = require('path')
const { DISK_TYPE_DIFFERENCING } = require('vhd-lib/dist/_constants')
const { pipe, promisifyAll } = require('promise-toolbox')
const fs = promisifyAll(require('fs'))
const handler = require('@xen-orchestra/fs').getHandler({ url: 'file://' })
// -----------------------------------------------------------------------------
const asyncMap = curryRight((iterable, fn) =>
Promise.all(
Array.isArray(iterable) ? iterable.map(fn) : Array.from(iterable, fn)
)
)
const filter = (...args) => thisArg => thisArg.filter(...args)
// TODO: better check?
// our heuristic is not good enough, there has been some false positives
// (detected as invalid by us but valid by `tar` and imported with success),
// either:
// - these files were normal but the check is incorrect
// - these files were invalid but without data loss
// - these files were invalid but with silent data loss
//
// FIXME: the heuristic does not work if the XVA is compressed, we need to
// implement a specific test for it
//
// maybe reading the end of the file looking for a file named
// /^Ref:\d+/\d+\.checksum$/ and then validating the tar structure from it
//
// https://github.com/npm/node-tar/issues/234#issuecomment-538190295
const isValidTar = async path => {
try {
const fd = await fs.open(path, 'r')
try {
const { size } = await fs.fstat(fd)
if (size <= 1024 || size % 512 !== 0) {
return false
}
const buf = Buffer.allocUnsafe(1024)
assert.strictEqual(
await fs.read(fd, buf, 0, buf.length, size - buf.length),
buf.length
)
return buf.every(_ => _ === 0)
} finally {
fs.close(fd).catch(noop)
}
} catch (error) {
// never throw, log and report as valid to avoid side effects
console.error('isValidTar', path, error)
return true
}
}
const noop = Function.prototype
const readDir = path =>
fs.readdir(path).then(
entries => {
entries.forEach((entry, i) => {
entries[i] = `${path}/${entry}`
})
return entries
require('./_composeCommands')({
'clean-vms': {
get main() {
return require('./commands/clean-vms')
},
error => {
// a missing dir is by definition empty
if (error != null && error.code === 'ENOENT') {
return []
}
throw error
}
)
// -----------------------------------------------------------------------------
// chain is an array of VHDs from child to parent
//
// the whole chain will be merged into parent, parent will be renamed to child
// and all the others will deleted
async function mergeVhdChain(chain) {
assert(chain.length >= 2)
const child = chain[0]
const parent = chain[chain.length - 1]
const children = chain.slice(0, -1).reverse()
console.warn('Unused parents of VHD', child)
chain
.slice(1)
.reverse()
.forEach(parent => {
console.warn(' ', parent)
})
force && console.warn(' merging…')
console.warn('')
if (force) {
// `mergeVhd` does not work with a stream, either
// - make it accept a stream
// - or create synthetic VHD which is not a stream
return console.warn('TODO: implement merge')
// await mergeVhd(
// handler,
// parent,
// handler,
// children.length === 1
// ? child
// : await createSyntheticStream(handler, children)
// )
}
await Promise.all([
force && fs.rename(parent, child),
asyncMap(children.slice(0, -1), child => {
console.warn('Unused VHD', child)
force && console.warn(' deleting…')
console.warn('')
return force && handler.unlink(child)
}),
])
}
const listVhds = pipe([
vmDir => vmDir + '/vdis',
readDir,
asyncMap(readDir),
flatten,
asyncMap(readDir),
flatten,
filter(_ => _.endsWith('.vhd')),
])
async function handleVm(vmDir) {
const vhds = new Set()
const vhdParents = { __proto__: null }
const vhdChildren = { __proto__: null }
// remove broken VHDs
await asyncMap(await listVhds(vmDir), async path => {
try {
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
vhds.add(path)
if (vhd.footer.diskType === DISK_TYPE_DIFFERENCING) {
const parent = resolve(dirname(path), vhd.header.parentUnicodeName)
vhdParents[path] = parent
if (parent in vhdChildren) {
const error = new Error(
'this script does not support multiple VHD children'
)
error.parent = parent
error.child1 = vhdChildren[parent]
error.child2 = path
throw error // should we throw?
}
vhdChildren[parent] = path
}
} catch (error) {
console.warn('Error while checking VHD', path)
console.warn(' ', error)
if (error != null && error.code === 'ERR_ASSERTION') {
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(path))
}
}
})
// remove VHDs with missing ancestors
{
const deletions = []
// return true if the VHD has been deleted or is missing
const deleteIfOrphan = vhd => {
const parent = vhdParents[vhd]
if (parent === undefined) {
return
}
// no longer needs to be checked
delete vhdParents[vhd]
deleteIfOrphan(parent)
if (!vhds.has(parent)) {
vhds.delete(vhd)
console.warn('Error while checking VHD', vhd)
console.warn(' missing parent', parent)
force && console.warn(' deleting…')
console.warn('')
force && deletions.push(handler.unlink(vhd))
}
}
// > A property that is deleted before it has been visited will not be
// > visited later.
// >
// > -- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in#Deleted_added_or_modified_properties
for (const child in vhdParents) {
deleteIfOrphan(child)
}
await Promise.all(deletions)
}
const [jsons, xvas] = await readDir(vmDir).then(entries => [
entries.filter(_ => _.endsWith('.json')),
new Set(entries.filter(_ => _.endsWith('.xva'))),
])
await asyncMap(xvas, async path => {
// check is not good enough to delete the file, the best we can do is report
// it
if (!(await isValidTar(path))) {
console.warn('Potential broken XVA', path)
console.warn('')
}
})
const unusedVhds = new Set(vhds)
const unusedXvas = new Set(xvas)
// compile the list of unused XVAs and VHDs, and remove backup metadata which
// reference a missing XVA/VHD
await asyncMap(jsons, async json => {
const metadata = JSON.parse(await fs.readFile(json))
const { mode } = metadata
if (mode === 'full') {
const linkedXva = resolve(vmDir, metadata.xva)
if (xvas.has(linkedXva)) {
unusedXvas.delete(linkedXva)
} else {
console.warn('Error while checking backup', json)
console.warn(' missing file', linkedXva)
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(json))
}
} else if (mode === 'delta') {
const linkedVhds = (() => {
const { vhds } = metadata
return Object.keys(vhds).map(key => resolve(vmDir, vhds[key]))
})()
// FIXME: find better approach by keeping as much of the backup as
// possible (existing disks) even if one disk is missing
if (linkedVhds.every(_ => vhds.has(_))) {
linkedVhds.forEach(_ => unusedVhds.delete(_))
} else {
console.warn('Error while checking backup', json)
const missingVhds = linkedVhds.filter(_ => !vhds.has(_))
console.warn(
' %i/%i missing VHDs',
missingVhds.length,
linkedVhds.length
)
missingVhds.forEach(vhd => {
console.warn(' ', vhd)
})
force && console.warn(' deleting…')
console.warn('')
force && (await handler.unlink(json))
}
}
})
// TODO: parallelize by vm/job/vdi
const unusedVhdsDeletion = []
{
// VHD chains (as list from child to ancestor) to merge indexed by last
// ancestor
const vhdChainsToMerge = { __proto__: null }
const toCheck = new Set(unusedVhds)
const getUsedChildChainOrDelete = vhd => {
if (vhd in vhdChainsToMerge) {
const chain = vhdChainsToMerge[vhd]
delete vhdChainsToMerge[vhd]
return chain
}
if (!unusedVhds.has(vhd)) {
return [vhd]
}
// no longer needs to be checked
toCheck.delete(vhd)
const child = vhdChildren[vhd]
if (child !== undefined) {
const chain = getUsedChildChainOrDelete(child)
if (chain !== undefined) {
chain.push(vhd)
return chain
}
}
console.warn('Unused VHD', vhd)
force && console.warn(' deleting…')
console.warn('')
force && unusedVhdsDeletion.push(handler.unlink(vhd))
}
toCheck.forEach(vhd => {
vhdChainsToMerge[vhd] = getUsedChildChainOrDelete(vhd)
})
Object.keys(vhdChainsToMerge).forEach(key => {
const chain = vhdChainsToMerge[key]
if (chain !== undefined) {
unusedVhdsDeletion.push(mergeVhdChain(chain))
}
})
}
await Promise.all([
unusedVhdsDeletion,
asyncMap(unusedXvas, path => {
console.warn('Unused XVA', path)
force && console.warn(' deleting…')
console.warn('')
return force && handler.unlink(path)
}),
])
}
// -----------------------------------------------------------------------------
asyncMap(args, async vmDir => {
vmDir = resolve(vmDir)
// TODO: implement this in `xo-server`, not easy because not compatible with
// `@xen-orchestra/fs`.
const release = await lockfile.lock(vmDir)
try {
await handleVm(vmDir)
} catch (error) {
console.error('handleVm', vmDir, error)
} finally {
await release()
}
}).catch(error => console.error('main', error))
usage: '[--force] xo-vm-backups/*',
},
})(process.argv.slice(2), 'xo-backups').catch(error => {
console.error('main', error)
process.exitCode = 1
})


@@ -4,11 +4,12 @@
},
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/fs": "^0.10.1",
"@xen-orchestra/fs": "^0.10.2",
"getopts": "^2.2.5",
"lodash": "^4.17.15",
"promise-toolbox": "^0.14.0",
"proper-lockfile": "^4.1.1",
"vhd-lib": "^0.7.0"
"vhd-lib": "^0.7.2"
},
"engines": {
"node": ">=7.10.1"


@@ -16,7 +16,7 @@
},
"dependencies": {
"golike-defer": "^0.4.1",
"xen-api": "^0.27.2"
"xen-api": "^0.27.3"
},
"scripts": {
"postversion": "npm publish"


@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/cron",
"version": "1.0.5",
"version": "1.0.6",
"license": "ISC",
"description": "Focused, well maintained, cron parser/scheduler",
"keywords": [


@@ -39,8 +39,8 @@ class Job {
this._isRunning = false
if (this._isEnabled) {
const now = Date.now()
scheduledDate = +schedule._createDate()
const now = schedule._createDate()
scheduledDate = +next(schedule._schedule, now)
const delay = scheduledDate - now
this._timeout =
delay < MAX_DELAY


@@ -2,12 +2,24 @@
import { createSchedule } from './'
const wrap = value => () => value
describe('issues', () => {
let originalDateNow
beforeAll(() => {
originalDateNow = Date.now
})
afterAll(() => {
Date.now = originalDateNow
originalDateNow = undefined
})
test('stop during async execution', async () => {
let nCalls = 0
let resolve, promise
const job = createSchedule('* * * * *').createJob(() => {
const schedule = createSchedule('* * * * *')
const job = schedule.createJob(() => {
++nCalls
// eslint-disable-next-line promise/param-names
@@ -18,6 +30,7 @@ describe('issues', () => {
})
job.start()
Date.now = wrap(+schedule.next(1)[0])
jest.runAllTimers()
expect(nCalls).toBe(1)
@@ -35,7 +48,8 @@ describe('issues', () => {
let nCalls = 0
let resolve, promise
const job = createSchedule('* * * * *').createJob(() => {
const schedule = createSchedule('* * * * *')
const job = schedule.createJob(() => {
++nCalls
// eslint-disable-next-line promise/param-names
@@ -46,6 +60,7 @@ describe('issues', () => {
})
job.start()
Date.now = wrap(+schedule.next(1)[0])
jest.runAllTimers()
expect(nCalls).toBe(1)
@@ -56,6 +71,7 @@ describe('issues', () => {
resolve()
await promise
Date.now = wrap(+schedule.next(1)[0])
jest.runAllTimers()
expect(nCalls).toBe(2)
})

View File

@@ -1,13 +1,13 @@
# ${pkg.name} [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
# @xen-orchestra/defined [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
> ${pkg.description}
## Install
Installation of the [npm package](https://npmjs.org/package/${pkg.name}):
Installation of the [npm package](https://npmjs.org/package/@xen-orchestra/defined):
```
> npm install --save ${pkg.name}
> npm install --save @xen-orchestra/defined
```
## Usage
@@ -40,10 +40,10 @@ the code.
You may:
- report any [issue](${pkg.bugs})
- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
you've encountered;
- fork and create a pull request.
## License
${pkg.license} © [${pkg.author.name}](${pkg.author.url})
ISC © [Vates SAS](https://vates.fr)

View File

@@ -62,10 +62,10 @@ the code.
You may:
- report any [issue](${pkg.bugs})
- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
you've encountered;
- fork and create a pull request.
## License
${pkg.license} © [${pkg.author.name}](${pkg.author.url})
ISC © [Vates SAS](https://vates.fr)

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/fs",
"version": "0.10.1",
"version": "0.10.2",
"license": "AGPL-3.0",
"description": "The File System for Xen Orchestra backups.",
"keywords": [],
@@ -18,16 +18,16 @@
"dist/"
],
"engines": {
"node": ">=6"
"node": ">=8.10"
},
"dependencies": {
"@marsaud/smb2": "^0.14.0",
"@sindresorhus/df": "^2.1.0",
"@sindresorhus/df": "^3.1.1",
"@xen-orchestra/async-map": "^0.0.0",
"decorator-synchronized": "^0.5.0",
"execa": "^1.0.0",
"execa": "^3.2.0",
"fs-extra": "^8.0.1",
"get-stream": "^4.0.0",
"get-stream": "^5.1.0",
"limit-concurrency-decorator": "^0.4.0",
"lodash": "^4.17.4",
"promise-toolbox": "^0.14.0",

View File

@@ -389,7 +389,7 @@ export default class RemoteHandlerAbstract {
async test(): Promise<Object> {
const SIZE = 1024 * 1024 * 10
const testFileName = normalizePath(`${Date.now()}.test`)
const data = await fromCallback(cb => randomBytes(SIZE, cb))
const data = await fromCallback(randomBytes, SIZE)
let step = 'write'
try {
const writeStart = process.hrtime()

View File

@@ -86,7 +86,7 @@ handlers.forEach(url => {
describe('#createOutputStream()', () => {
it('creates parent dir if missing', async () => {
const stream = await handler.createOutputStream('dir/file')
await fromCallback(cb => pipeline(createTestDataStream(), stream, cb))
await fromCallback(pipeline, createTestDataStream(), stream)
await expect(await handler.readFile('dir/file')).toEqual(TEST_DATA)
})
})
@@ -106,7 +106,7 @@ handlers.forEach(url => {
describe('#createWriteStream()', () => {
testWithFileDescriptor('file', 'wx', async ({ file, flags }) => {
const stream = await handler.createWriteStream(file, { flags })
await fromCallback(cb => pipeline(createTestDataStream(), stream, cb))
await fromCallback(pipeline, createTestDataStream(), stream)
await expect(await handler.readFile('file')).toEqual(TEST_DATA)
})

View File

@@ -47,8 +47,19 @@ export default class LocalHandler extends RemoteHandlerAbstract {
})
}
_getInfo() {
return df.file(this._getFilePath('/'))
async _getInfo() {
// df.file() resolves with an object with the following properties:
// filesystem, type, size, used, available, capacity and mountpoint.
// size, used, available and capacity may be `NaN` so we remove any `NaN`
// value from the object.
const info = await df.file(this._getFilePath('/'))
Object.keys(info).forEach(key => {
if (Number.isNaN(info[key])) {
delete info[key]
}
})
return info
}
async _getSize(file) {

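The NaN filtering above can be exercised on its own; this sketch assumes only the object shape described in the comment (no `@sindresorhus/df` needed):

```javascript
// Drop any NaN-valued properties, mirroring the guard added to
// LocalHandler#_getInfo for df results where size/used/available/capacity
// may come back as NaN.
function stripNaN(info) {
  Object.keys(info).forEach(key => {
    if (Number.isNaN(info[key])) {
      delete info[key]
    }
  })
  return info
}

console.log(stripNaN({ size: 100, used: NaN, available: NaN, mountpoint: '/' }))
// { size: 100, mountpoint: '/' }
```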
View File

@@ -15,7 +15,7 @@ Installation of the [npm package](https://npmjs.org/package/@xen-orchestra/log):
Everywhere something should be logged:
```js
import createLogger from '@xen-orchestra/log'
import { createLogger } from '@xen-orchestra/log'
const log = createLogger('my-module')
@@ -42,6 +42,7 @@ log.error('could not join server', {
Then, at the application level, configure how the logs are handled:
```js
import { createLogger } from '@xen-orchestra/log'
import { configure, catchGlobalErrors } from '@xen-orchestra/log/configure'
import transportConsole from '@xen-orchestra/log/transports/console'
import transportEmail from '@xen-orchestra/log/transports/email'
@@ -77,8 +78,8 @@ configure([
])
// send all global errors (uncaught exceptions, warnings, unhandled rejections)
// to this transport
catchGlobalErrors(transport)
// to this logger
catchGlobalErrors(createLogger('app'))
```
### Transports

View File

@@ -48,7 +48,7 @@
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"prebuild": "yarn run clean",
"predev": "yarn run prebuild",
"prepublishOnly": "yarn run build",
"prepare": "yarn run build",
"postversion": "npm publish"
}
}

View File

@@ -1,13 +1,13 @@
# ${pkg.name} [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
# @xen-orchestra/mixin [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
> ${pkg.description}
## Install
Installation of the [npm package](https://npmjs.org/package/${pkg.name}):
Installation of the [npm package](https://npmjs.org/package/@xen-orchestra/mixin):
```
> npm install --save ${pkg.name}
> npm install --save @xen-orchestra/mixin
```
## Usage
@@ -40,10 +40,10 @@ the code.
You may:
- report any [issue](${pkg.bugs})
- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
you've encountered;
- fork and create a pull request.
## License
${pkg.license} © [${pkg.author.name}](${pkg.author.url})
ISC © [Vates SAS](https://vates.fr)

View File

@@ -4,14 +4,107 @@
### Enhancements
- [Backup NG] Make report recipients configurable in the backup settings [#4581](https://github.com/vatesfr/xen-orchestra/issues/4581) (PR [#4646](https://github.com/vatesfr/xen-orchestra/pull/4646))
- [SAML] Setting to disable requested authentication context (helps with _Active Directory_) (PR [#4675](https://github.com/vatesfr/xen-orchestra/pull/4675))
- The default sign-in page can be configured via `authentication.defaultSignInPage` (PR [#4678](https://github.com/vatesfr/xen-orchestra/pull/4678))
- [SR] Allow import of VHD and VMDK disks [#4137](https://github.com/vatesfr/xen-orchestra/issues/4137) (PR [#4138](https://github.com/vatesfr/xen-orchestra/pull/4138))
- [Host] Advanced Live Telemetry (PR [#4680](https://github.com/vatesfr/xen-orchestra/pull/4680))
### Bug fixes
- [Metadata backup] Add 10 minutes timeout to avoid stuck jobs [#4657](https://github.com/vatesfr/xen-orchestra/issues/4657) (PR [#4666](https://github.com/vatesfr/xen-orchestra/pull/4666))
- [Metadata backups] Fix out-of-date listing for 1 minute due to cache (PR [#4672](https://github.com/vatesfr/xen-orchestra/pull/4672))
- [Delta backup] Limit the number of merged deltas per run to avoid interrupted jobs (PR [#4674](https://github.com/vatesfr/xen-orchestra/pull/4674))
### Released packages
- vhd-lib v0.7.2
- xo-vmdk-to-vhd v0.1.8
- xo-server-auth-ldap v0.6.6
- xo-server-auth-saml v0.7.0
- xo-server-backup-reports v0.16.4
- @xen-orchestra/fs v0.10.2
- xo-server v5.53.0
- xo-web v5.53.1
## **5.40.2** (2019-11-22)
![Channel: latest](https://badgen.net/badge/channel/latest/yellow)
### Enhancements
- [Logs] Ability to report a bug with attached log (PR [#4201](https://github.com/vatesfr/xen-orchestra/pull/4201))
- [Backup] Reduce _VDI chain protection error_ occurrence by being more tolerant (configurable via `xo-server`'s `xapiOptions.maxUncoalescedVdis` setting) [#4124](https://github.com/vatesfr/xen-orchestra/issues/4124) (PR [#4651](https://github.com/vatesfr/xen-orchestra/pull/4651))
- [Plugin] [Web hooks](https://xen-orchestra.com/docs/web-hooks.html) [#1946](https://github.com/vatesfr/xen-orchestra/issues/1946) (PR [#3155](https://github.com/vatesfr/xen-orchestra/pull/3155))
- [Tables] Always put the tables' search in the URL [#4542](https://github.com/vatesfr/xen-orchestra/issues/4542) (PR [#4637](https://github.com/vatesfr/xen-orchestra/pull/4637))
### Bug fixes
- [SDN controller] Prevent private network creation on bond slave PIF (Fixes https://github.com/xcp-ng/xcp/issues/300) (PR [#4633](https://github.com/vatesfr/xen-orchestra/pull/4633))
- [Metadata backup] Fix failed backup reported as successful [#4596](https://github.com/vatesfr/xen-orchestra/issues/4596) (PR [#4598](https://github.com/vatesfr/xen-orchestra/pull/4598))
- [Backup NG] Fix "task cancelled" error when the backup job timeout exceeds 596 hours [#4662](https://github.com/vatesfr/xen-orchestra/issues/4662) (PR [#4663](https://github.com/vatesfr/xen-orchestra/pull/4663))
- Fix `promise rejected with non-error` warnings in logs (PR [#4659](https://github.com/vatesfr/xen-orchestra/pull/4659))
### Released packages
- xo-server-web-hooks v0.1.0
- xen-api v0.27.3
- xo-server-backup-reports v0.16.3
- vhd-lib v0.7.1
- xo-server v5.52.1
- xo-web v5.52.0
## **5.40.1** (2019-10-29)
### Bug fixes
- [XOSAN] Fix "Install Cloud plugin" warning (PR [#4631](https://github.com/vatesfr/xen-orchestra/pull/4631))
### Released packages
- xo-web v5.51.1
## **5.40.0** (2019-10-29)
### Breaking changes
- `xo-server` requires Node 8.
### Highlights
- [Backup NG] Offline backup feature [#3449](https://github.com/vatesfr/xen-orchestra/issues/3449) (PR [#4470](https://github.com/vatesfr/xen-orchestra/pull/4470))
- [Menu] Remove legacy backup entry [#4467](https://github.com/vatesfr/xen-orchestra/issues/4467) (PR [#4476](https://github.com/vatesfr/xen-orchestra/pull/4476))
- [Hub] Ability to update existing template (PR [#4613](https://github.com/vatesfr/xen-orchestra/pull/4613))
- [Support] Ability to open and close support tunnel from the user interface [#4513](https://github.com/vatesfr/xen-orchestra/issues/4513) (PR [#4616](https://github.com/vatesfr/xen-orchestra/pull/4616))
### Enhancements
- [Hub] Ability to select SR in hub VM installation (PR [#4571](https://github.com/vatesfr/xen-orchestra/pull/4571))
- [Hub] Display more info about downloadable templates (PR [#4593](https://github.com/vatesfr/xen-orchestra/pull/4593))
- [xo-server-transport-icinga2] Add support of [icinga2](https://icinga.com/docs/icinga2/latest/doc/12-icinga2-api/) for reporting services status [#4563](https://github.com/vatesfr/xen-orchestra/issues/4563) (PR [#4573](https://github.com/vatesfr/xen-orchestra/pull/4573))
### Bug fixes
- [SR] Fix `[object HTMLInputElement]` name after re-attaching a SR [#4546](https://github.com/vatesfr/xen-orchestra/issues/4546) (PR [#4550](https://github.com/vatesfr/xen-orchestra/pull/4550))
- [Schedules] Prevent double runs [#4625](https://github.com/vatesfr/xen-orchestra/issues/4625) (PR [#4626](https://github.com/vatesfr/xen-orchestra/pull/4626))
- [Schedules] Properly enable/disable on config import (PR [#4624](https://github.com/vatesfr/xen-orchestra/pull/4624))
### Released packages
- @xen-orchestra/cron v1.0.6
- xo-server-transport-icinga2 v0.1.0
- xo-server-sdn-controller v0.3.1
- xo-server v5.51.1
- xo-web v5.51.0
### Dropped packages
- xo-server-cloud: this package was useless for OpenSource installations because it required a complete XOA environment
## **5.39.1** (2019-10-11)
![Channel: latest](https://badgen.net/badge/channel/latest/yellow)
![Channel: stable](https://badgen.net/badge/channel/stable/green)
### Enhancements
@@ -82,8 +175,6 @@
## **5.38.0** (2019-08-29)
![Channel: stable](https://badgen.net/badge/channel/stable/green)
### Enhancements
- [VM/Attach disk] Display confirmation modal when VDI is already attached [#3381](https://github.com/vatesfr/xen-orchestra/issues/3381) (PR [#4366](https://github.com/vatesfr/xen-orchestra/pull/4366))

View File

@@ -3,29 +3,16 @@
> Keep in mind the changelog is addressed to **users** and should be
> understandable by them.
### Breaking changes
- `xo-server` requires Node 8.
### Enhancements
> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [Hub] Ability to select SR in hub VM installation (PR [#4571](https://github.com/vatesfr/xen-orchestra/pull/4571))
- [Hub] Display more info about downloadable templates (PR [#4593](https://github.com/vatesfr/xen-orchestra/pull/4593))
- [Support] Ability to open and close support tunnel from the user interface [#4513](https://github.com/vatesfr/xen-orchestra/issues/4513) (PR [#4616](https://github.com/vatesfr/xen-orchestra/pull/4616))
- [xo-server-transport-icinga2] Add support of [icinga2](https://icinga.com/docs/icinga2/latest/doc/12-icinga2-api/) for reporting services status [#4563](https://github.com/vatesfr/xen-orchestra/issues/4563) (PR [#4573](https://github.com/vatesfr/xen-orchestra/pull/4573))
- [Hub] Ability to update existing template (PR [#4613](https://github.com/vatesfr/xen-orchestra/pull/4613))
- [Menu] Remove legacy backup entry [#4467](https://github.com/vatesfr/xen-orchestra/issues/4467) (PR [#4476](https://github.com/vatesfr/xen-orchestra/pull/4476))
- [Backup NG] Offline backup feature [#3449](https://github.com/vatesfr/xen-orchestra/issues/3449) (PR [#4470](https://github.com/vatesfr/xen-orchestra/pull/4470))
### Bug fixes
> Users must be able to say: “I had this issue, happy to know it's fixed”
- [SR] Fix `[object HTMLInputElement]` name after re-attaching a SR [#4546](https://github.com/vatesfr/xen-orchestra/issues/4546) (PR [#4550](https://github.com/vatesfr/xen-orchestra/pull/4550))
- [Schedules] Prevent double runs [#4625](https://github.com/vatesfr/xen-orchestra/issues/4625) (PR [#4626](https://github.com/vatesfr/xen-orchestra/pull/4626))
- [Schedules] Properly enable/disable on config import (PR [#4624](https://github.com/vatesfr/xen-orchestra/pull/4624))
- [Host] Fix Enable Live Telemetry button state (PR [#4686](https://github.com/vatesfr/xen-orchestra/pull/4686))
- [Host] Fix Advanced Live Telemetry URL (PR [#4687](https://github.com/vatesfr/xen-orchestra/pull/4687))
### Released packages
@@ -34,12 +21,5 @@
>
> Rule of thumb: add packages on top.
- @xen-orchestra/cron v1.0.5
- xo-server-transport-icinga2 v0.1.0
- xo-server-sdn-controller v0.3.1
- xo-server v5.51.0
- xo-web v5.51.0
### Dropped packages
- xo-server-cloud : this package was useless for OpenSource installations because it required a complete XOA environment
- xo-server v5.54.0
- xo-web v5.54.0

View File

@@ -51,6 +51,7 @@
* [Health](health.md)
* [Job manager](scheduler.md)
* [Alerts](alerts.md)
* [Web hooks](web-hooks.md)
* [Load balancing](load_balancing.md)
* [Emergency Shutdown](emergency_shutdown.md)
* [Auto scalability](auto_scalability.md)

View File

@@ -22,7 +22,7 @@ group = 'nogroup'
By default, XO-server listens on all addresses (0.0.0.0) and runs on port 80. If you need to, you can change this in the `# Basic HTTP` section:
```toml
host = '0.0.0.0'
hostname = '0.0.0.0'
port = 80
```
@@ -31,7 +31,7 @@ port = 80
XO-server can also run in HTTPS (you can run HTTP and HTTPS at the same time) - just modify what's needed in the `# Basic HTTPS` section, this time with the certificates/keys you need and their path:
```toml
host = '0.0.0.0'
hostname = '0.0.0.0'
port = 443
certificate = './certificate.pem'
key = './key.pem'
@@ -43,10 +43,10 @@ key = './key.pem'
If you want to redirect everything to HTTPS, you can modify the configuration like this:
```
```toml
# If set to true, all HTTP traffic will be redirected to the first HTTPs configuration.
redirectToHttps: true
redirectToHttps = true
```
This should be written just before the `mount` option, inside the `http` section.
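As a rough sketch of that placement (assuming an `[http]` table as in `sample.config.toml`; check your own file for the exact layout and mount paths):

```toml
[http]
# If set to true, all HTTP traffic will be redirected to the first HTTPs configuration.
redirectToHttps = true

[http.mounts]
'/' = '../xo-web/dist/'
```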

View File

@@ -65,17 +65,13 @@ Now you have to create a config file for `xo-server`:
```
$ cd packages/xo-server
$ cp sample.config.toml .xo-server.toml
$ mkdir -p ~/.config/xo-server
$ cp sample.config.toml ~/.config/xo-server/config.toml
```
Edit the file and uncomment the mount section so that `xo-server`, which embeds an HTTP server, serves `xo-web` from the right path (we assume that `xen-orchestra` and `xo-web` are in the same directory):
> Note: If you're installing `xo-server` as a global service, you may want to copy the file to `/etc/xo-server/config.toml` instead.
```toml
[http.mounts]
'/' = '../xo-web/dist/'
```
In this config file, you can also change default ports (80 and 443) for xo-server. If you are running the server as a non-root user, you will need to set the port to 1024 or higher.
In this config file, you can change default ports (80 and 443) for xo-server. If you are running the server as a non-root user, you will need to set the port to 1024 or higher.
You can try to start xo-server to see if it works. You should have something like this:
@@ -186,7 +182,7 @@ service redis start
## SUDO
If you are running `xo-server` as a non-root user, you need to use `sudo` to be able to mount NFS remotes. You can do this by editing `xo-server/.xo-server.toml` and setting `useSudo = true`. It's near the end of the file:
If you are running `xo-server` as a non-root user, you need to use `sudo` to be able to mount NFS remotes. You can do this by editing `xo-server` configuration file and setting `useSudo = true`. It's near the end of the file:
```
useSudo = true

72
docs/web-hooks.md Normal file
View File

@@ -0,0 +1,72 @@
# Web hooks
⚠ This feature is experimental!
## Configuration
The plugin "web-hooks" needs to be installed and loaded for this feature to work.
You can trigger an HTTP POST request to a URL when a Xen Orchestra API method is called.
* Go to Settings > Plugins > Web hooks
* Add new hooks
* For each hook, configure:
* Method: the XO API method that will trigger the HTTP request when called
* Type:
* pre: the request will be sent just before the method is executed
* post: the request will be sent after the method action is completed
* pre/post: both
* URL: the full URL to which the requests will be sent
* Save the plugin configuration
From now on, a request will be sent to the corresponding URLs when a configured method is called by an XO client.
## Request content
```
POST / HTTP/1.1
Content-Type: application/json
```
The request's body is a JSON string representing an object with the following properties:
- `type`: `"pre"` or `"post"`
- `callId`: unique ID for this call to help match a pre-call and a post-call
- `userId`: unique internal ID of the user who performed the call
- `userName`: login/e-mail address of the user who performed the call
- `method`: name of the method that was called (e.g. `"vm.start"`)
- `params`: call parameters (object)
- `timestamp`: epoch timestamp of the beginning ("pre") or end ("post") of the call in ms
- `duration`: duration of the call in ms ("post" hooks only)
- `result`: call result on success ("post" hooks only)
- `error`: call result on error ("post" hooks only)
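For illustration, a hypothetical `post` payload for a successful `vm.start` call could look like this (all values made up):

```
{
  "type": "post",
  "callId": "0m7dqxvnq",
  "userId": "b3d1b6a0-3c2f-4f3a-9a0b-1c2d3e4f5a6b",
  "userName": "admin@admin.net",
  "method": "vm.start",
  "params": { "id": "7a3c9f1e-0570-11ea-8d71-362b9e155667" },
  "timestamp": 1574860800000,
  "duration": 3210,
  "result": true
}
```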
## Request handling
*Quick Node.js example of how you may want to handle the requests*
```js
const http = require('http')
const { exec } = require('child_process')
http
.createServer((req, res) => {
let body = ''
req.on('data', chunk => {
body += chunk
})
req.on('end', () => handleHook(body))
res.end()
})
.listen(3000)
const handleHook = data => {
const { method, params, type, result, error, timestamp } = JSON.parse(data)
// Log it
console.log(`${new Date(timestamp).toISOString()} [${method}|${type}] ${JSON.stringify(params)} ${JSON.stringify(result || error)}`)
// Run scripts
exec(`./hook-scripts/${method}-${type}.sh`)
}
```

View File

@@ -22,9 +22,9 @@ For use on huge infrastructure (more than 500+ VMs), feel free to increase the R
### The quickest way
The **fastest and most secure way** to install Xen Orchestra is to use our web deploy page. Go on https://xen-orchestra.com/#!/xoa and follow instructions.
The **fastest and most secure way** to install Xen Orchestra is to use our web deploy page. Go to https://xen-orchestra.com/#!/xoa and follow the instructions.
> **Note:** no data will be sent to our servers, it's running only between your browser and your host!
> **Note:** no data will be sent to our servers, the deployment only runs between your browser and your host!
![](./assets/deploy_form.png)
@@ -41,12 +41,12 @@ bash -c "$(curl -s http://xoa.io/deploy)"
Follow the instructions:
* Your IP configuration will be requested: it's set to **DHCP by default**, otherwise you can enter a fixed IP address (eg `192.168.0.10`)
* If DHCP is selected, the script will continue automatically. Otherwise a netmask, gateway, and DNS should be provided.
* If DHCP is selected, the script will continue automatically. Otherwise a netmask, gateway, and DNS server should be provided.
* XOA will be deployed on your default storage repository. You can move it elsewhere anytime after.
### Via download the XVA
### Via a manual XVA download
Download XOA from xen-orchestra.com. Once you've got the XVA file, you can import it with `xe vm-import filename=xoa_unified.xva` or via XenCenter.
You can also download XOA from xen-orchestra.com in an XVA file. Once you've got the XVA file, you can import it with `xe vm-import filename=xoa_unified.xva` or via XenCenter.
After the VM is imported, you just need to start it with `xe vm-start vm="XOA"` or with XenCenter.
@@ -64,6 +64,35 @@ Once you have started the VM, you can access the web UI by putting the IP you co
**The first thing** you need to do with your XOA is register. [Read the documentation on the page dedicated to the updater/register interface](updater.md).
## Technical Support
In your appliance, you can access the support section in the XOA menu. In this section you can:
* Launch an `xoa check` command
![](https://xen-orchestra.com/blog/content/images/2019/10/xoacheck.png)
* Open a secure support tunnel so our team can remotely investigate
![](https://user-images.githubusercontent.com/10992860/67384755-10f47f80-f592-11e9-974d-bbdefd0bf353.gif)
<a id="ssh-pro-support"></a>
If your web UI is not working, you can also open the secure support tunnel from the CLI. To open a private tunnel (we are the only ones with the private key), you can use the `xoa support tunnel` command as shown below:
```
$ xoa support tunnel
The support tunnel has been created.
Do not stop this command before the intervention is over!
Give this id to the support: 40713
```
Give us this number, and we'll be able to access your XOA in a secure manner. Then, close the tunnel with `Ctrl+C` after your issue has been solved by support.
> The tunnel utilizes the user `xoa-support`. If you want to deactivate this bundled user, you can run `chage -E 0 xoa-support`. To re-activate this account, run `chage -E -1 xoa-support`.
### First console connection
If you connect via SSH or console, the default credentials are:
@@ -156,21 +185,6 @@ You can access the VM console through XenCenter or using VNC through a SSH tunne
If you want to go back in DHCP, just run `xoa network dhcp`
### SSH Pro Support
By default, if you need support, there is a dedicated user named `xoa-support`. We are the only one with the private key. If you want our assistance on your XOA, you can open a private tunnel with the command `xoa support tunnel` like below:
```
$ xoa support tunnel
The support tunnel has been created.
Do not stop this command before the intervention is over!
Give this id to the support: 40713
```
Give us this number, we'll be able to access your XOA in a secure manner. Then, close the tunnel with `Ctrl+C` after your issue has been solved by support.
> If you want to deactivate this bundled user, you can type `chage -E 0 xoa-support`. To re-activate this account, you must use the `chage -E 1 xoa-support`.
### Firewall

View File

@@ -17,7 +17,7 @@
"eslint-plugin-react": "^7.6.1",
"eslint-plugin-standard": "^4.0.0",
"exec-promise": "^0.7.0",
"flow-bin": "^0.109.0",
"flow-bin": "^0.112.0",
"globby": "^10.0.0",
"husky": "^3.0.0",
"jest": "^24.1.0",
@@ -60,6 +60,7 @@
"posttest": "scripts/run-script test",
"prepare": "scripts/run-script prepare",
"pretest": "eslint --ignore-path .gitignore .",
"prettify": "prettier --ignore-path .gitignore --write '**/*.{js,jsx,md,mjs,ts,tsx}'",
"test": "jest \"^(?!.*\\.integ\\.spec\\.js$)\"",
"test-integration": "jest \".integ\\.spec\\.js$\"",
"travis-tests": "scripts/travis-tests"

View File

@@ -24,15 +24,15 @@
"dist/"
],
"engines": {
"node": ">=6"
"node": ">=8.10"
},
"dependencies": {
"@xen-orchestra/fs": "^0.10.1",
"@xen-orchestra/fs": "^0.10.2",
"cli-progress": "^3.1.0",
"exec-promise": "^0.7.0",
"getopts": "^2.2.3",
"struct-fu": "^1.2.0",
"vhd-lib": "^0.7.0"
"vhd-lib": "^0.7.2"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
@@ -40,7 +40,7 @@
"@babel/preset-env": "^7.0.0",
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^6.0.3",
"execa": "^2.0.2",
"execa": "^3.2.0",
"index-modules": "^0.3.0",
"promise-toolbox": "^0.14.0",
"rimraf": "^3.0.0",

View File

@@ -1,6 +1,6 @@
{
"name": "vhd-lib",
"version": "0.7.0",
"version": "0.7.2",
"license": "AGPL-3.0",
"description": "Primitives for VHD file handling",
"keywords": [],
@@ -18,7 +18,7 @@
"dist/"
],
"engines": {
"node": ">=6"
"node": ">=8.10"
},
"dependencies": {
"@xen-orchestra/log": "^0.2.0",
@@ -28,6 +28,7 @@
"fs-extra": "^8.0.1",
"limit-concurrency-decorator": "^0.4.0",
"promise-toolbox": "^0.14.0",
"lodash": "^4.17.4",
"struct-fu": "^1.2.0",
"uuid": "^3.0.1"
},
@@ -36,10 +37,10 @@
"@babel/core": "^7.0.0",
"@babel/preset-env": "^7.0.0",
"@babel/preset-flow": "^7.0.0",
"@xen-orchestra/fs": "^0.10.1",
"@xen-orchestra/fs": "^0.10.2",
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^6.0.3",
"execa": "^2.0.2",
"execa": "^3.2.0",
"fs-promise": "^2.0.0",
"get-stream": "^5.1.0",
"index-modules": "^0.3.0",

View File

@@ -17,10 +17,7 @@ export default async function readChunk(stream, n) {
resolve(Buffer.concat(chunks, i))
}
function onEnd() {
resolve2()
clean()
}
const onEnd = resolve2
function onError(error) {
reject(error)
@@ -34,8 +31,11 @@ export default async function readChunk(stream, n) {
}
i += chunk.length
chunks.push(chunk)
if (i >= n) {
if (i === n) {
resolve2()
} else if (i > n) {
throw new RangeError(`read (${i}) more than expected (${n})`)
}
}

View File

@@ -29,13 +29,13 @@ export default asyncIteratorToStream(async function*(size, blockParser) {
let next
while ((next = await blockParser.next()) !== null) {
const paddingLength = next.offsetBytes - position
const paddingLength = next.logicalAddressBytes - position
if (paddingLength < 0) {
throw new Error('Received out of order blocks')
}
yield* filePadding(paddingLength)
yield next.data
position = next.offsetBytes + next.data.length
position = next.logicalAddressBytes + next.data.length
}
yield* filePadding(actualSize - position)
yield footer

View File

@@ -1,5 +1,6 @@
import assert from 'assert'
import asyncIteratorToStream from 'async-iterator-to-stream'
import { forEachRight } from 'lodash'
import computeGeometryForSize from './_computeGeometryForSize'
import { createFooter, createHeader } from './_createFooterHeader'
@@ -17,38 +18,65 @@ import { set as setBitmap } from './_bitmap'
const VHD_BLOCK_SIZE_SECTORS = VHD_BLOCK_SIZE_BYTES / SECTOR_SIZE
/**
* Makes a single backward pass to collect the last fragment of each VHD block (fragments may be interleaved),
* then allocates the blocks in a forward pass.
* @returns [currentVhdPositionSector, lastFragmentPerBlock], where currentVhdPositionSector is the first free sector after the data
*/
function createBAT(
firstBlockPosition,
blockAddressList,
fragmentLogicAddressList,
ratio,
bat,
bitmapSize
) {
let currentVhdPositionSector = firstBlockPosition / SECTOR_SIZE
blockAddressList.forEach(blockPosition => {
assert.strictEqual(blockPosition % SECTOR_SIZE, 0)
const vhdTableIndex = Math.floor(blockPosition / VHD_BLOCK_SIZE_BYTES)
if (bat.readUInt32BE(vhdTableIndex * 4) === BLOCK_UNUSED) {
bat.writeUInt32BE(currentVhdPositionSector, vhdTableIndex * 4)
currentVhdPositionSector +=
(bitmapSize + VHD_BLOCK_SIZE_BYTES) / SECTOR_SIZE
const lastFragmentPerBlock = new Map()
forEachRight(fragmentLogicAddressList, fragmentLogicAddress => {
assert.strictEqual(fragmentLogicAddress % SECTOR_SIZE, 0)
const vhdTableIndex = Math.floor(
fragmentLogicAddress / VHD_BLOCK_SIZE_BYTES
)
if (!lastFragmentPerBlock.has(vhdTableIndex)) {
lastFragmentPerBlock.set(vhdTableIndex, fragmentLogicAddress)
}
})
return currentVhdPositionSector
const lastFragmentPerBlockArray = [...lastFragmentPerBlock]
// lastFragmentPerBlock is from last to first, so we go the other way around
forEachRight(
lastFragmentPerBlockArray,
([vhdTableIndex, _fragmentVirtualAddress]) => {
if (bat.readUInt32BE(vhdTableIndex * 4) === BLOCK_UNUSED) {
bat.writeUInt32BE(currentVhdPositionSector, vhdTableIndex * 4)
currentVhdPositionSector +=
(bitmapSize + VHD_BLOCK_SIZE_BYTES) / SECTOR_SIZE
}
}
)
return [currentVhdPositionSector, lastFragmentPerBlock]
}
/**
* Receives an iterator of constant-size fragments and a list of their addresses in virtual space, and returns
* a stream representing the VHD file of this disk.
* The fragment size should be an integer divisor of the VHD block size.
* "fragment" designates a chunk of incoming data (i.e. probably a VMDK grain), and "block" is a VHD block.
* @param diskSize
* @param fragmentSize
* @param fragmentLogicalAddressList
* @param fragmentIterator
* @returns {Promise<Function>}
*/
export default async function createReadableStream(
diskSize,
incomingBlockSize,
blockAddressList,
blockIterator
fragmentSize,
fragmentLogicalAddressList,
fragmentIterator
) {
const ratio = VHD_BLOCK_SIZE_BYTES / incomingBlockSize
const ratio = VHD_BLOCK_SIZE_BYTES / fragmentSize
if (ratio % 1 !== 0) {
throw new Error(
`Can't import file, grain size (${incomingBlockSize}) is not a divider of VHD block size ${VHD_BLOCK_SIZE_BYTES}`
`Can't import file, grain size (${fragmentSize}) is not a divider of VHD block size ${VHD_BLOCK_SIZE_BYTES}`
)
}
if (ratio > 53) {
@@ -80,60 +108,72 @@ export default async function createReadableStream(
const bitmapSize =
Math.ceil(VHD_BLOCK_SIZE_SECTORS / 8 / SECTOR_SIZE) * SECTOR_SIZE
const bat = Buffer.alloc(tablePhysicalSizeBytes, 0xff)
const endOfData = createBAT(
const [endOfData, lastFragmentPerBlock] = createBAT(
firstBlockPosition,
blockAddressList,
fragmentLogicalAddressList,
ratio,
bat,
bitmapSize
)
const fileSize = endOfData * SECTOR_SIZE + FOOTER_SIZE
let position = 0
function* yieldAndTrack(buffer, expectedPosition) {
function* yieldAndTrack(buffer, expectedPosition, reason) {
if (expectedPosition !== undefined) {
assert.strictEqual(position, expectedPosition)
assert.strictEqual(position, expectedPosition, reason)
}
if (buffer.length > 0) {
yield buffer
position += buffer.length
}
}
async function* generateFileContent(blockIterator, bitmapSize, ratio) {
let currentBlock = -1
let currentVhdBlockIndex = -1
let currentBlockWithBitmap = Buffer.alloc(0)
for await (const next of blockIterator) {
currentBlock++
assert.strictEqual(blockAddressList[currentBlock], next.offsetBytes)
const batIndex = Math.floor(next.offsetBytes / VHD_BLOCK_SIZE_BYTES)
if (batIndex !== currentVhdBlockIndex) {
if (currentVhdBlockIndex >= 0) {
yield* yieldAndTrack(
currentBlockWithBitmap,
bat.readUInt32BE(currentVhdBlockIndex * 4) * SECTOR_SIZE
)
}
currentBlockWithBitmap = Buffer.alloc(bitmapSize + VHD_BLOCK_SIZE_BYTES)
currentVhdBlockIndex = batIndex
}
const blockOffset =
(next.offsetBytes / SECTOR_SIZE) % VHD_BLOCK_SIZE_SECTORS
for (let bitPos = 0; bitPos < VHD_BLOCK_SIZE_SECTORS / ratio; bitPos++) {
setBitmap(currentBlockWithBitmap, blockOffset + bitPos)
}
next.data.copy(
currentBlockWithBitmap,
bitmapSize + (next.offsetBytes % VHD_BLOCK_SIZE_BYTES)
)
function insertFragmentInBlock(fragment, blockWithBitmap) {
const fragmentOffsetInBlock =
(fragment.logicalAddressBytes / SECTOR_SIZE) % VHD_BLOCK_SIZE_SECTORS
for (let bitPos = 0; bitPos < VHD_BLOCK_SIZE_SECTORS / ratio; bitPos++) {
setBitmap(blockWithBitmap, fragmentOffsetInBlock + bitPos)
}
fragment.data.copy(
blockWithBitmap,
bitmapSize + (fragment.logicalAddressBytes % VHD_BLOCK_SIZE_BYTES)
)
}
async function* generateBlocks(fragmentIterator, bitmapSize) {
let currentFragmentIndex = -1
// store blocks waiting for some of their fragments.
const batIndexToBlockMap = new Map()
for await (const fragment of fragmentIterator) {
currentFragmentIndex++
const batIndex = Math.floor(
fragment.logicalAddressBytes / VHD_BLOCK_SIZE_BYTES
)
let currentBlockWithBitmap = batIndexToBlockMap.get(batIndex)
if (currentBlockWithBitmap === undefined) {
currentBlockWithBitmap = Buffer.alloc(bitmapSize + VHD_BLOCK_SIZE_BYTES)
batIndexToBlockMap.set(batIndex, currentBlockWithBitmap)
}
insertFragmentInBlock(fragment, currentBlockWithBitmap)
const batEntry = bat.readUInt32BE(batIndex * 4)
assert.notStrictEqual(batEntry, BLOCK_UNUSED)
const batPosition = batEntry * SECTOR_SIZE
if (lastFragmentPerBlock.get(batIndex) === fragment.logicalAddressBytes) {
batIndexToBlockMap.delete(batIndex)
yield* yieldAndTrack(
currentBlockWithBitmap,
batPosition,
`VHD block start index: ${currentFragmentIndex}`
)
}
}
yield* yieldAndTrack(currentBlockWithBitmap)
}
async function* iterator() {
yield* yieldAndTrack(footer, 0)
yield* yieldAndTrack(header, FOOTER_SIZE)
yield* yieldAndTrack(bat, FOOTER_SIZE + HEADER_SIZE)
yield* generateFileContent(blockIterator, bitmapSize, ratio)
yield* generateBlocks(fragmentIterator, bitmapSize)
yield* yieldAndTrack(footer)
}
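The grain-size guard at the top of this function can be sketched in isolation. This is a minimal standalone version, assuming the usual 2 MiB VHD block size for the constant:

```javascript
// Standalone sketch of the grain-size check above: a VHD block must
// contain a whole number of incoming fragments. The constant value is
// an assumption (2 MiB is the customary VHD block size).
const VHD_BLOCK_SIZE_BYTES = 2 * 1024 * 1024

function computeRatio(fragmentSize) {
  const ratio = VHD_BLOCK_SIZE_BYTES / fragmentSize
  if (ratio % 1 !== 0) {
    throw new Error(
      `Can't import file, grain size (${fragmentSize}) is not a divider of VHD block size ${VHD_BLOCK_SIZE_BYTES}`
    )
  }
  return ratio
}
```

With a 64 KiB grain the ratio is 32, i.e. 32 fragments per VHD block.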


@@ -6,11 +6,8 @@ export { default as chainVhd } from './chain'
export { default as checkVhdChain } from './checkChain'
export { default as createContentStream } from './createContentStream'
export { default as createReadableRawStream } from './createReadableRawStream'
export {
default as createReadableSparseStream,
} from './createReadableSparseStream'
export { default as createReadableSparseStream } from './createReadableSparseStream'
export { default as createSyntheticStream } from './createSyntheticStream'
export { default as mergeVhd } from './merge'
export {
default as createVhdStreamWithLength,
} from './createVhdStreamWithLength'
export { default as createVhdStreamWithLength } from './createVhdStreamWithLength'
export { default as peekFooterFromVhdStream } from './peekFooterFromVhdStream'


@@ -0,0 +1,10 @@
import readChunk from './_readChunk'
import { FOOTER_SIZE } from './_constants'
import { fuFooter } from './_structs'
export default async function peekFooterFromStream(stream) {
const footerBuffer = await readChunk(stream, FOOTER_SIZE)
const footer = fuFooter.unpack(footerBuffer)
stream.unshift(footerBuffer)
return footer
}
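`peekFooterFromStream` relies on `stream.unshift()` to push the inspected bytes back so downstream consumers still see the full stream. The generic pattern, sketched with Node's `stream` module (the helper name is illustrative, not the repo's `_readChunk`):

```javascript
// Generic peek helper in the spirit of peekFooterFromStream: read a
// chunk, then put it back with unshift() so the stream is unchanged
// for whoever consumes it next.
function peekChunk(stream, size) {
  return new Promise((resolve, reject) => {
    stream.once('error', reject)
    stream.once('readable', () => {
      const chunk = stream.read(size)
      if (chunk !== null) {
        // push the bytes back onto the internal buffer
        stream.unshift(chunk)
      }
      resolve(chunk)
    })
  })
}
```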


@@ -38,9 +38,11 @@ const sectorsToBytes = sectors => sectors * SECTOR_SIZE
const assertChecksum = (name, buf, struct) => {
const actual = unpackField(struct.fields.checksum, buf)
const expected = checksumStruct(buf, struct)
if (actual !== expected) {
throw new Error(`invalid ${name} checksum ${actual}, expected ${expected}`)
}
assert.strictEqual(
actual,
expected,
`invalid ${name} checksum ${actual}, expected ${expected}`
)
}
// unused block as buffer containing a uint32BE
@@ -100,7 +102,7 @@ export default class Vhd {
}
// Returns the first address after metadata. (In bytes)
getEndOfHeaders() {
_getEndOfHeaders() {
const { header } = this
let end = FOOTER_SIZE + HEADER_SIZE
@@ -125,8 +127,8 @@ export default class Vhd {
}
// Returns the first sector after data.
getEndOfData() {
let end = Math.ceil(this.getEndOfHeaders() / SECTOR_SIZE)
_getEndOfData() {
let end = Math.ceil(this._getEndOfHeaders() / SECTOR_SIZE)
const fullBlockSize = this.sectorsOfBitmap + this.sectorsPerBlock
const { maxTableEntries } = this.header
@@ -307,8 +309,8 @@ export default class Vhd {
// Make a new empty block at vhd end.
// Update block allocation table in context and in file.
async createBlock(blockId) {
const blockAddr = Math.ceil(this.getEndOfData() / SECTOR_SIZE)
async _createBlock(blockId) {
const blockAddr = Math.ceil(this._getEndOfData() / SECTOR_SIZE)
debug(`create block ${blockId} at ${blockAddr}`)
@@ -323,7 +325,7 @@ export default class Vhd {
}
// Write a bitmap at a block address.
async writeBlockBitmap(blockAddr, bitmap) {
async _writeBlockBitmap(blockAddr, bitmap) {
const { bitmapSize } = this
if (bitmap.length !== bitmapSize) {
@@ -340,20 +342,20 @@ export default class Vhd {
await this._write(bitmap, sectorsToBytes(blockAddr))
}
async writeEntireBlock(block) {
async _writeEntireBlock(block) {
let blockAddr = this._getBatEntry(block.id)
if (blockAddr === BLOCK_UNUSED) {
blockAddr = await this.createBlock(block.id)
blockAddr = await this._createBlock(block.id)
}
await this._write(block.buffer, sectorsToBytes(blockAddr))
}
async writeBlockSectors(block, beginSectorId, endSectorId, parentBitmap) {
async _writeBlockSectors(block, beginSectorId, endSectorId, parentBitmap) {
let blockAddr = this._getBatEntry(block.id)
if (blockAddr === BLOCK_UNUSED) {
blockAddr = await this.createBlock(block.id)
blockAddr = await this._createBlock(block.id)
parentBitmap = Buffer.alloc(this.bitmapSize, 0)
} else if (parentBitmap === undefined) {
parentBitmap = (await this._readBlock(block.id, true)).bitmap
@@ -362,14 +364,14 @@ export default class Vhd {
const offset = blockAddr + this.sectorsOfBitmap + beginSectorId
debug(
`writeBlockSectors at ${offset} block=${block.id}, sectors=${beginSectorId}...${endSectorId}`
`_writeBlockSectors at ${offset} block=${block.id}, sectors=${beginSectorId}...${endSectorId}`
)
for (let i = beginSectorId; i < endSectorId; ++i) {
mapSetBit(parentBitmap, i)
}
await this.writeBlockBitmap(blockAddr, parentBitmap)
await this._writeBlockBitmap(blockAddr, parentBitmap)
await this._write(
block.data.slice(
sectorsToBytes(beginSectorId),
@@ -405,12 +407,12 @@ export default class Vhd {
const isFullBlock = i === 0 && endSector === sectorsPerBlock
if (isFullBlock) {
await this.writeEntireBlock(block)
await this._writeEntireBlock(block)
} else {
if (parentBitmap === null) {
parentBitmap = (await this._readBlock(blockId, true)).bitmap
}
await this.writeBlockSectors(block, i, endSector, parentBitmap)
await this._writeBlockSectors(block, i, endSector, parentBitmap)
}
i = endSector
@@ -427,7 +429,7 @@ export default class Vhd {
const rawFooter = fuFooter.pack(footer)
const eof = await this._handler.getSize(this._path)
// sometimes the file is longer than anticipated, we still need to put the footer at the end
const offset = Math.max(this.getEndOfData(), eof - rawFooter.length)
const offset = Math.max(this._getEndOfData(), eof - rawFooter.length)
footer.checksum = checksumStruct(rawFooter, fuFooter)
debug(
@@ -498,7 +500,7 @@ export default class Vhd {
endInBuffer
)
}
await this.writeBlockSectors(
await this._writeBlockSectors(
{ id: currentBlock, data: inputBuffer },
offsetInBlockSectors,
endInBlockSectors
@@ -507,7 +509,7 @@ export default class Vhd {
await this.writeFooter()
}
async ensureSpaceForParentLocators(neededSectors) {
async _ensureSpaceForParentLocators(neededSectors) {
const firstLocatorOffset = FOOTER_SIZE + HEADER_SIZE
const currentSpace =
Math.floor(this.header.tableOffset / SECTOR_SIZE) -
@@ -526,7 +528,7 @@ export default class Vhd {
header.parentLocatorEntry[0].platformCode = PLATFORM_W2KU
const encodedFilename = Buffer.from(fileNameString, 'utf16le')
const dataSpaceSectors = Math.ceil(encodedFilename.length / SECTOR_SIZE)
const position = await this.ensureSpaceForParentLocators(dataSpaceSectors)
const position = await this._ensureSpaceForParentLocators(dataSpaceSectors)
await this._write(encodedFilename, position)
header.parentLocatorEntry[0].platformDataSpace =
dataSpaceSectors * SECTOR_SIZE


@@ -31,11 +31,11 @@ test('createFooter() does not crash', () => {
test('ReadableRawVHDStream does not crash', async () => {
const data = [
{
offsetBytes: 100,
logicalAddressBytes: 100,
data: Buffer.from('azerzaerazeraze', 'ascii'),
},
{
offsetBytes: 700,
logicalAddressBytes: 700,
data: Buffer.from('gdfslkdfguer', 'ascii'),
},
]
@@ -62,11 +62,11 @@ test('ReadableRawVHDStream does not crash', async () => {
test('ReadableRawVHDStream detects when blocks are out of order', async () => {
const data = [
{
offsetBytes: 700,
logicalAddressBytes: 700,
data: Buffer.from('azerzaerazeraze', 'ascii'),
},
{
offsetBytes: 100,
logicalAddressBytes: 100,
data: Buffer.from('gdfslkdfguer', 'ascii'),
},
]
@@ -97,11 +97,11 @@ test('ReadableSparseVHDStream can handle a sparse file', async () => {
const blockSize = Math.pow(2, 16)
const blocks = [
{
offsetBytes: blockSize * 3,
logicalAddressBytes: blockSize * 3,
data: Buffer.alloc(blockSize, 'azerzaerazeraze', 'ascii'),
},
{
offsetBytes: blockSize * 100,
logicalAddressBytes: blockSize * 100,
data: Buffer.alloc(blockSize, 'gdfslkdfguer', 'ascii'),
},
]
@@ -109,7 +109,7 @@ test('ReadableSparseVHDStream can handle a sparse file', async () => {
const stream = await createReadableSparseStream(
fileSize,
blockSize,
blocks.map(b => b.offsetBytes),
blocks.map(b => b.logicalAddressBytes),
blocks
)
expect(stream.length).toEqual(4197888)
@@ -128,7 +128,7 @@ test('ReadableSparseVHDStream can handle a sparse file', async () => {
const out1 = await readFile(`${tempDir}/out1.raw`)
const expected = Buffer.alloc(fileSize)
blocks.forEach(b => {
b.data.copy(expected, b.offsetBytes)
b.data.copy(expected, b.logicalAddressBytes)
})
await expect(out1.slice(0, expected.length)).toEqual(expected)
})


@@ -36,12 +36,12 @@
},
"dependencies": {
"archy": "^1.0.0",
"chalk": "^2.3.2",
"chalk": "^3.0.0",
"exec-promise": "^0.7.0",
"human-format": "^0.10.0",
"lodash": "^4.17.4",
"pw": "^0.0.4",
"xen-api": "^0.27.2"
"xen-api": "^0.27.3"
},
"devDependencies": {
"@babel/cli": "^7.1.5",


@@ -4,6 +4,7 @@ process.env.DEBUG = '*'
const defer = require('golike-defer').default
const { CancelToken } = require('promise-toolbox')
const { createVhdStreamWithLength } = require('vhd-lib')
const { createClient } = require('../')
@@ -32,8 +33,13 @@ defer(async ($defer, args) => {
const { cancel, token } = CancelToken.source()
process.on('SIGINT', cancel)
let input = createInputStream(args[2])
if (!raw && input.length === undefined) {
input = await createVhdStreamWithLength(input)
}
// https://xapi-project.github.io/xen-api/snapshots.html#uploading-a-disk-or-snapshot
await xapi.putResource(token, createInputStream(args[2]), '/import_raw_vdi/', {
await xapi.putResource(token, input, '/import_raw_vdi/', {
query: {
format: raw ? 'raw' : 'vhd',
vdi: await resolveRef(xapi, 'VDI', args[1])


@@ -2,6 +2,28 @@
"requires": true,
"lockfileVersion": 1,
"dependencies": {
"@xen-orchestra/log": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/@xen-orchestra/log/-/log-0.2.0.tgz",
"integrity": "sha512-xNseJ/TIUdASm9uxr0zVvg8qDG+Xw6ycJy4dag+e1yl6pEr77GdPJD2R0JbE1BbZwup/Skh3TEh6L0GV+9NRdQ==",
"requires": {
"lodash": "^4.17.4",
"promise-toolbox": "^0.13.0"
}
},
"async-iterator-to-stream": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/async-iterator-to-stream/-/async-iterator-to-stream-1.1.0.tgz",
"integrity": "sha512-ddF3u7ipixenFJsYCKqVR9tNdkIzd2j7JVg8QarqkfUl7UTR7nhJgc1Q+3ebP/5DNFhV9Co9F47FJjGpdc0PjQ==",
"requires": {
"readable-stream": "^3.0.5"
}
},
"core-js": {
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/core-js/-/core-js-3.4.1.tgz",
"integrity": "sha512-KX/dnuY/J8FtEwbnrzmAjUYgLqtk+cxM86hfG60LGiW3MmltIc2yAmDgBgEkfm0blZhUrdr1Zd84J2Y14mLxzg=="
},
"core-util-is": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz",
@@ -24,6 +46,41 @@
"node-gyp-build": "^3.7.0"
}
},
"from2": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/from2/-/from2-2.3.0.tgz",
"integrity": "sha1-i/tVAr3kpNNs/e6gB/zKIdfjgq8=",
"requires": {
"inherits": "^2.0.1",
"readable-stream": "^2.0.0"
},
"dependencies": {
"readable-stream": {
"version": "2.3.6",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.6.tgz",
"integrity": "sha512-tQtKA9WIAhBF3+VLAseyMqZeBjW0AHJoxOtYqSUZNJxauErmLbVm2FW1y+J/YA9dUrAC39ITejlZWhVIwawkKw==",
"requires": {
"core-util-is": "~1.0.0",
"inherits": "~2.0.3",
"isarray": "~1.0.0",
"process-nextick-args": "~2.0.0",
"safe-buffer": "~5.1.1",
"string_decoder": "~1.1.1",
"util-deprecate": "~1.0.1"
}
}
}
},
"fs-extra": {
"version": "8.1.0",
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
"integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==",
"requires": {
"graceful-fs": "^4.2.0",
"jsonfile": "^4.0.0",
"universalify": "^0.1.0"
}
},
"getopts": {
"version": "2.2.5",
"resolved": "https://registry.npmjs.org/getopts/-/getopts-2.2.5.tgz",
@@ -34,6 +91,11 @@
"resolved": "https://registry.npmjs.org/golike-defer/-/golike-defer-0.4.1.tgz",
"integrity": "sha512-x8cq/Fvu32T8cnco3CBDRF+/M2LFmfSIysKfecX09uIK3cFdHcEKBTPlPnEO6lwrdxfjkOIU6dIw3EIlEJeS1A=="
},
"graceful-fs": {
"version": "4.2.3",
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.3.tgz",
"integrity": "sha512-a30VEBm4PEdx1dRB7MFK7BejejvCvBronbLjht+sHuGYj8PHs7M/5Z+rt5lw551vZ7yfTCj4Vuyy3mSJytDWRQ=="
},
"human-format": {
"version": "0.10.1",
"resolved": "https://registry.npmjs.org/human-format/-/human-format-0.10.1.tgz",
@@ -49,6 +111,24 @@
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
"integrity": "sha1-u5NdSFgsuhaMBoNJV6VKPgcSTxE="
},
"jsonfile": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-4.0.0.tgz",
"integrity": "sha1-h3Gq4HmbZAdrdmQPygWPnBDjPss=",
"requires": {
"graceful-fs": "^4.1.6"
}
},
"limit-concurrency-decorator": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/limit-concurrency-decorator/-/limit-concurrency-decorator-0.4.0.tgz",
"integrity": "sha512-hXGTuCkYjosfHT1D7dcPKzPHSGwBtZfN0wummzDwxi5A3ZUNBB75qM8phKEjQGlQGAfYrMW/JqhbaljO3xOH0A=="
},
"lodash": {
"version": "4.17.15",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
"integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
},
"make-error": {
"version": "1.3.5",
"resolved": "https://registry.npmjs.org/make-error/-/make-error-1.3.5.tgz",
@@ -141,6 +221,11 @@
"safe-buffer": "~5.1.0"
}
},
"struct-fu": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/struct-fu/-/struct-fu-1.2.1.tgz",
"integrity": "sha512-QrtfoBRe+RixlBJl852/Gu7tLLTdx3kWs3MFzY1OHNrSsYYK7aIAnzqsncYRWrKGG/QSItDmOTlELMxehw4Gjw=="
},
"throttle": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/throttle/-/throttle-1.0.3.tgz",
@@ -175,11 +260,47 @@
}
}
},
"universalify": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/universalify/-/universalify-0.1.2.tgz",
"integrity": "sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="
},
"util-deprecate": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
"integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8="
},
"uuid": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-3.3.3.tgz",
"integrity": "sha512-pW0No1RGHgzlpHJO1nsVrHKpOEIxkGg1xB+v0ZmdNH5OAeAwzAVrCnI2/6Mtx+Uys6iaylxa+D3g4j63IKKjSQ=="
},
"vhd-lib": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/vhd-lib/-/vhd-lib-0.7.1.tgz",
"integrity": "sha512-TODzo7KjtNzYF/NuJjE5bPeGyXZIUzAOVJvED1dcPXr8iSnS6/U5aNdtKahBVwukEzf0/x+Cu3GMYutV4/cxsQ==",
"requires": {
"@xen-orchestra/log": "^0.2.0",
"async-iterator-to-stream": "^1.0.2",
"core-js": "^3.0.0",
"from2": "^2.3.0",
"fs-extra": "^8.0.1",
"limit-concurrency-decorator": "^0.4.0",
"promise-toolbox": "^0.14.0",
"struct-fu": "^1.2.0",
"uuid": "^3.0.1"
},
"dependencies": {
"promise-toolbox": {
"version": "0.14.0",
"resolved": "https://registry.npmjs.org/promise-toolbox/-/promise-toolbox-0.14.0.tgz",
"integrity": "sha512-VV5lXK4lXaPB9oBO50ope1qd0AKN8N3nK14jYvV9/qFmfZW2Px/bJjPZBniGjXcIJf6J5Y/coNgJtPHDyiUV/g==",
"requires": {
"make-error": "^1.3.2"
}
}
}
},
"xtend": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz",


@@ -7,6 +7,7 @@
"progress-stream": "^2.0.0",
"promise-toolbox": "^0.13.0",
"readable-stream": "^3.1.1",
"throttle": "^1.0.3"
"throttle": "^1.0.3",
"vhd-lib": "^0.7.2"
}
}


@@ -1,6 +1,6 @@
{
"name": "xen-api",
"version": "0.27.2",
"version": "0.27.3",
"license": "ISC",
"description": "Connector to the Xen API",
"keywords": [


@@ -25,7 +25,6 @@ import isReadOnlyCall from './_isReadOnlyCall'
import makeCallSetting from './_makeCallSetting'
import parseUrl from './_parseUrl'
import replaceSensitiveValues from './_replaceSensitiveValues'
import XapiError from './_XapiError'
// ===================================================================
@@ -626,9 +625,7 @@ export class Xapi extends EventEmitter {
kindOf(result)
)
return result
} catch (e) {
const error = e instanceof Error ? e : XapiError.wrap(e)
} catch (error) {
// do not log the session ID
//
// TODO: should log at the session level to avoid logging sensitive
@@ -743,9 +740,9 @@ export class Xapi extends EventEmitter {
// the event loop in that case
if (this._pool.$ref !== oldPoolRef) {
// Uses introspection to list available types.
const types = (this._types = (await this._interruptOnDisconnect(
this._call('system.listMethods')
))
const types = (this._types = (
await this._interruptOnDisconnect(this._call('system.listMethods'))
)
.filter(isGetAllRecordsMethod)
.map(method => method.slice(0, method.indexOf('.'))))
this._lcToTypes = { __proto__: null }


@@ -1,6 +1,8 @@
import httpRequestPlus from 'http-request-plus'
import { format, parse } from 'json-rpc-protocol'
import XapiError from '../_XapiError'
import UnsupportedTransport from './_UnsupportedTransport'
// https://github.com/xenserver/xenadmin/blob/0df39a9d83cd82713f32d24704852a0fd57b8a64/XenModel/XenAPI/Session.cs#L403-L433
@@ -30,7 +32,7 @@ export default ({ allowUnauthorized, url }) => {
return response.result
}
throw response.error
throw XapiError.wrap(response.error)
},
error => {
if (error.response !== undefined) {


@@ -1,6 +1,8 @@
import { createClient, createSecureClient } from 'xmlrpc'
import { promisify } from 'promise-toolbox'
import XapiError from '../_XapiError'
import prepareXmlRpcParams from './_prepareXmlRpcParams'
import UnsupportedTransport from './_UnsupportedTransport'
@@ -33,7 +35,7 @@ const parseResult = result => {
}
if (status !== 'Success') {
throw result.ErrorDescription
throw XapiError.wrap(result.ErrorDescription)
}
const value = result.Value


@@ -1,6 +1,8 @@
import { createClient, createSecureClient } from 'xmlrpc'
import { promisify } from 'promise-toolbox'
import XapiError from '../_XapiError'
import prepareXmlRpcParams from './_prepareXmlRpcParams'
const logError = error => {
@@ -26,7 +28,7 @@ const parseResult = result => {
}
if (status !== 'Success') {
throw result.ErrorDescription
throw XapiError.wrap(result.ErrorDescription)
}
return result.Value
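The transport changes above wrap XAPI's raw `ErrorDescription` (a plain array such as `['SESSION_INVALID', ...]`) into a real error object. The actual `XapiError` lives in xen-api's `_XapiError` module and is not shown in the diff; this is a plausible sketch of what its `wrap()` must do, with names and fields assumed:

```javascript
// Hypothetical sketch of XapiError.wrap(): keep genuine Error
// instances as-is, turn XAPI ErrorDescription arrays into a proper
// Error subclass with a code and parameters.
class XapiError extends Error {
  constructor(code, params) {
    super(code)
    this.code = code
    this.params = params
  }

  static wrap(error) {
    if (error instanceof Error) {
      return error
    }
    return Array.isArray(error)
      ? new XapiError(error[0], error.slice(1))
      : new XapiError(String(error), [])
  }
}
```

Throwing `XapiError.wrap(...)` instead of the bare array gives callers a stack trace and a stable `code` property to match on.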


@@ -26,12 +26,12 @@
"dist/"
],
"engines": {
"node": ">=6"
"node": ">=8.10"
},
"dependencies": {
"@babel/polyfill": "^7.0.0",
"bluebird": "^3.5.1",
"chalk": "^2.2.0",
"chalk": "^3.0.0",
"exec-promise": "^0.7.0",
"fs-promise": "^2.0.3",
"http-request-plus": "^0.8.0",


@@ -386,7 +386,7 @@ async function call(args) {
printProgress
)
return fromCallback(cb => pump(response, progress, output, cb))
return fromCallback(pump, response, progress, output)
}
if (key === '$sendTo') {
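The `fromCallback` change above uses promise-toolbox's variadic form: extra arguments are forwarded to the function and the node-style callback is appended automatically, removing the need for a wrapper closure. A minimal re-implementation of that behavior:

```javascript
// Minimal re-implementation of promise-toolbox's variadic fromCallback:
// fromCallback(fn, ...args) calls fn(...args, cb) and resolves/rejects
// from the node-style callback.
const fromCallback = (fn, ...args) =>
  new Promise((resolve, reject) =>
    fn(...args, (error, result) =>
      error != null ? reject(error) : resolve(result)
    )
  )

// toy node-style function for demonstration
const add = (a, b, cb) => cb(null, a + b)
```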


@@ -260,7 +260,10 @@ describe('Collection', function() {
forEach(
{
'add & update → add': [
[['add', 'foo', 0], ['update', 'foo', 1]],
[
['add', 'foo', 0],
['update', 'foo', 1],
],
{
add: {
foo: 1,
@@ -268,10 +271,19 @@ describe('Collection', function() {
},
],
'add & remove → ∅': [[['add', 'foo', 0], ['remove', 'foo']], {}],
'add & remove → ∅': [
[
['add', 'foo', 0],
['remove', 'foo'],
],
{},
],
'update & update → update': [
[['update', 'bar', 1], ['update', 'bar', 2]],
[
['update', 'bar', 1],
['update', 'bar', 2],
],
{
update: {
bar: 2,
@@ -280,7 +292,10 @@ describe('Collection', function() {
],
'update & remove → remove': [
[['update', 'bar', 1], ['remove', 'bar']],
[
['update', 'bar', 1],
['remove', 'bar'],
],
{
remove: {
bar: undefined,
@@ -289,7 +304,10 @@ describe('Collection', function() {
],
'remove & add → update': [
[['remove', 'bar'], ['add', 'bar', 0]],
[
['remove', 'bar'],
['add', 'bar', 0],
],
{
update: {
bar: 0,


@@ -1,4 +1,4 @@
import { bind, iteratee } from 'lodash'
import iteratee from 'lodash/iteratee'
import clearObject from './clear-object'
import isEmpty from './is-empty'
@@ -17,9 +17,9 @@ export default class Index {
this._keysToHash = Object.create(null)
// Bound versions of listeners.
this._onAdd = bind(this._onAdd, this)
this._onUpdate = bind(this._onUpdate, this)
this._onRemove = bind(this._onRemove, this)
this._onAdd = this._onAdd.bind(this)
this._onUpdate = this._onUpdate.bind(this)
this._onRemove = this._onRemove.bind(this)
}
// This method is used to compute the hash under which an item must
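Replacing lodash's `bind` with native `Function.prototype.bind`, as done above, is behavior-preserving for this use: the listener keeps its `this` even when passed around detached. A reduced illustration (the class body here is invented for the example):

```javascript
// Illustrative class showing the native-bind pattern used in the
// constructors above: after binding in the constructor, the method can
// be handed to an event emitter and still see the right `this`.
class Index {
  constructor() {
    this._count = 0
    this._onAdd = this._onAdd.bind(this)
  }

  _onAdd() {
    this._count += 1
  }
}
```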


@@ -1,4 +1,4 @@
import { bind, iteratee } from 'lodash'
import iteratee from 'lodash/iteratee'
import clearObject from './clear-object'
import NotImplemented from './not-implemented'
@@ -16,9 +16,9 @@ export default class UniqueIndex {
this._keysToHash = Object.create(null)
// Bound versions of listeners.
this._onAdd = bind(this._onAdd, this)
this._onUpdate = bind(this._onUpdate, this)
this._onRemove = bind(this._onRemove, this)
this._onAdd = this._onAdd.bind(this)
this._onUpdate = this._onUpdate.bind(this)
this._onRemove = this._onRemove.bind(this)
}
// This method is used to compute the hash under which an item must


@@ -1,4 +1,5 @@
import { bind, forEach, iteratee as createCallback } from 'lodash'
import createCallback from 'lodash/iteratee'
import forEach from 'lodash/forEach'
import Collection, {
ACTION_ADD,
@@ -19,9 +20,9 @@ export default class View extends Collection {
this._onAdd(this._collection.all)
// Bound versions of listeners.
this._onAdd = bind(this._onAdd, this)
this._onUpdate = bind(this._onUpdate, this)
this._onRemove = bind(this._onRemove, this)
this._onAdd = this._onAdd.bind(this)
this._onUpdate = this._onUpdate.bind(this)
this._onRemove = this._onRemove.bind(this)
// Register listeners.
this._collection.on(ACTION_ADD, this._onAdd)


@@ -46,7 +46,7 @@
"@types/node": "^12.0.2",
"@types/through2": "^2.0.31",
"tslint": "^5.9.1",
"tslint-config-standard": "^8.0.1",
"tslint-config-standard": "^9.0.0",
"typescript": "^3.1.6"
},
"scripts": {


@@ -1,13 +1,13 @@
# ${pkg.name} [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
# xo-remote-parser [![Build Status](https://travis-ci.org/${pkg.shortGitHubPath}.png?branch=master)](https://travis-ci.org/${pkg.shortGitHubPath})
> ${pkg.description}
## Install
Installation of the [npm package](https://npmjs.org/package/${pkg.name}):
Installation of the [npm package](https://npmjs.org/package/xo-remote-parser):
```
> npm install --save ${pkg.name}
> npm install --save xo-remote-parser
```
## Usage
@@ -40,10 +40,10 @@ the code.
You may:
- report any [issue](${pkg.bugs})
- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
you've encountered;
- fork and create a pull request.
## License
${pkg.license} © [${pkg.author.name}](${pkg.author.url})
AGPL-3.0 © [Vates SAS](https://vates.fr)


@@ -1,6 +1,6 @@
{
"name": "xo-server-auth-ldap",
"version": "0.6.5",
"version": "0.6.6",
"license": "AGPL-3.0",
"description": "LDAP authentication plugin for XO-Server",
"keywords": [


@@ -1,7 +1,7 @@
/* eslint no-throw-literal: 0 */
import eventToPromise from 'event-to-promise'
import { bind, noop } from 'lodash'
import noop from 'lodash/noop'
import { createClient } from 'ldapjs'
import { escape } from 'ldapjs/lib/filters/escape'
import { promisify } from 'promise-toolbox'
@@ -9,6 +9,11 @@ import { readFile } from 'fs'
// ===================================================================
const DEFAULTS = {
checkCertificate: true,
filter: '(uid={{name}})',
}
const VAR_RE = /\{\{([^}]+)\}\}/g
const evalFilter = (filter, vars) =>
filter.replace(VAR_RE, (_, name) => {
@@ -43,7 +48,7 @@ If not specified, it will use a default set of well-known CAs.
description:
"Enforce the validity of the server's certificates. You can disable it when connecting to servers that use a self-signed certificate.",
type: 'boolean',
default: true,
defaults: DEFAULTS.checkCertificate,
},
bind: {
description: 'Credentials to use before looking for the user record.',
@@ -76,6 +81,11 @@ For Microsoft Active Directory, it can also be \`<user>@<domain>\`.
description: `
Filter used to find the user.
For LDAP if you want to filter for a special group you can try
something like:
- \`(&(uid={{name}})(memberOf=<group DN>))\`
For Microsoft Active Directory, you can try one of the following filters:
- \`(cn={{name}})\`
@@ -83,13 +93,12 @@ For Microsoft Active Directory, you can try one of the following filters:
- \`(sAMAccountName={{name}}@<domain>)\` (replace \`<domain>\` by your own domain)
- \`(userPrincipalName={{name}})\`
For LDAP if you want to filter for a special group you can try
something like:
Or something like this if you also want to filter by group:
- \`(&(uid={{name}})(memberOf=<group DN>))\`
- \`(&(sAMAccountName={{name}})(memberOf=<group DN>))\`
`.trim(),
type: 'string',
default: '(uid={{name}})',
default: DEFAULTS.filter,
},
},
required: ['uri', 'base'],
@@ -116,7 +125,7 @@ class AuthLdap {
constructor(xo) {
this._xo = xo
this._authenticate = bind(this._authenticate, this)
this._authenticate = this._authenticate.bind(this)
}
async configure(conf) {
@@ -127,7 +136,11 @@ class AuthLdap {
})
{
const { bind, checkCertificate = true, certificateAuthorities } = conf
const {
bind,
checkCertificate = DEFAULTS.checkCertificate,
certificateAuthorities,
} = conf
if (bind) {
clientOpts.bindDN = bind.dn
@@ -147,7 +160,7 @@ class AuthLdap {
const {
bind: credentials,
base: searchBase,
filter: searchFilter = '(uid={{name}})',
filter: searchFilter = DEFAULTS.filter,
} = conf
this._credentials = credentials


@@ -1,7 +1,6 @@
#!/usr/bin/env node
import execPromise from 'exec-promise'
import { bind } from 'lodash'
import { fromCallback } from 'promise-toolbox'
import { readFile, writeFile } from 'fs'
@@ -17,7 +16,7 @@ const CACHE_FILE = './ldap.cache.conf'
execPromise(async args => {
const config = await promptSchema(
configurationSchema,
await fromCallback(cb => readFile(CACHE_FILE, 'utf-8', cb)).then(
await fromCallback(readFile, CACHE_FILE, 'utf-8').then(
JSON.parse,
() => ({})
)
@@ -44,6 +43,6 @@ execPromise(async args => {
}),
password: await password('Password'),
},
bind(console.log, console)
console.log.bind(console)
)
})


@@ -1,6 +1,6 @@
{
"name": "xo-server-auth-saml",
"version": "0.6.0",
"version": "0.7.0",
"license": "AGPL-3.0",
"description": "SAML authentication plugin for XO-Server",
"keywords": [


@@ -2,6 +2,10 @@ import { Strategy } from 'passport-saml'
// ===================================================================
const DEFAULTS = {
disableRequestedAuthnContext: false,
}
export const configurationSchema = {
description:
'Important: When registering your instance to your identity provider, you must configure its callback URL to `https://<xo.company.net>/signin/saml/callback`!',
@@ -30,6 +34,11 @@ You should try \`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddr
`,
type: 'string',
},
disableRequestedAuthnContext: {
title: "Don't request an authentication context",
description: 'This is known to help when using Active Directory',
default: DEFAULTS.disableRequestedAuthnContext,
},
},
required: ['cert', 'entryPoint', 'issuer', 'usernameField'],
}
@@ -46,6 +55,7 @@ class AuthSamlXoPlugin {
configure({ usernameField, ...conf }) {
this._usernameField = usernameField
this._conf = {
...DEFAULTS,
...conf,
// must match the callback URL
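The `configure()` change above is the common defaults-spread pattern: properties from the user's `conf` override `DEFAULTS`, and anything not supplied falls back to the default. In isolation:

```javascript
// Defaults-spread pattern from the configure() change above: later
// spreads win, so user-supplied conf overrides DEFAULTS.
const DEFAULTS = {
  disableRequestedAuthnContext: false,
}

const makeConf = conf => ({
  ...DEFAULTS,
  ...conf,
})
```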


@@ -1,6 +1,6 @@
{
"name": "xo-server-backup-reports",
"version": "0.16.2",
"version": "0.16.4",
"license": "AGPL-3.0",
"description": "Backup reports plugin for XO-Server",
"keywords": [
@@ -36,6 +36,7 @@
"node": ">=6"
},
"dependencies": {
"@xen-orchestra/defined": "^0.0.0",
"@xen-orchestra/log": "^0.2.0",
"human-format": "^0.10.0",
"lodash": "^4.13.1",


@@ -2,6 +2,7 @@ import createLogger from '@xen-orchestra/log'
import humanFormat from 'human-format'
import moment from 'moment-timezone'
import { forEach, groupBy, startCase } from 'lodash'
import { get } from '@xen-orchestra/defined'
import pkg from '../package'
const logger = createLogger('xo:xo-server-backup-reports')
@@ -186,7 +187,7 @@ const MARKDOWN_BY_TYPE = {
}
const getMarkdown = (task, props) =>
MARKDOWN_BY_TYPE[(task.data?.type)]?.(task, props)
MARKDOWN_BY_TYPE[task.data?.type]?.(task, props)
const toMarkdown = parts => {
const lines = []
@@ -317,6 +318,7 @@ class BackupReportsXoPlugin {
const taskMarkdown = await getMarkdown(task, {
formatDate,
jobName: log.jobName,
xo,
})
if (taskMarkdown === undefined) {
continue
@@ -364,9 +366,10 @@ class BackupReportsXoPlugin {
})
}
async _ngVmHandler(log, { name: jobName }, schedule, force) {
async _ngVmHandler(log, { name: jobName, settings }, schedule, force) {
const xo = this._xo
const mailReceivers = get(() => settings[''].reportRecipients)
const { reportWhen, mode } = log.data || {}
const formatDate = createDateFormatter(schedule?.timezone)
@@ -389,6 +392,7 @@ class BackupReportsXoPlugin {
subject: `[Xen Orchestra] ${
log.status
} Backup report for ${jobName} ${STATUS_ICON[log.status]}`,
mailReceivers,
markdown: toMarkdown(markdown),
success: false,
nagiosMarkdown: `[Xen Orchestra] [${log.status}] Backup report for ${jobName} - Error : ${log.result.message}`,
@@ -642,6 +646,7 @@ class BackupReportsXoPlugin {
markdown.push('---', '', `*${pkg.name} v${pkg.version}*`)
return this._sendReport({
mailReceivers,
markdown: toMarkdown(markdown),
subject: `[Xen Orchestra] ${log.status} Backup report for ${jobName} ${
STATUS_ICON[log.status]
@@ -656,12 +661,18 @@ class BackupReportsXoPlugin {
})
}
_sendReport({ markdown, subject, success, nagiosMarkdown }) {
_sendReport({
mailReceivers = this._mailsReceivers,
markdown,
nagiosMarkdown,
subject,
success,
}) {
const xo = this._xo
return Promise.all([
xo.sendEmail !== undefined &&
xo.sendEmail({
to: this._mailsReceivers,
to: mailReceivers,
subject,
markdown,
}),


@@ -31,7 +31,7 @@
"node": ">=6"
},
"dependencies": {
"@xen-orchestra/cron": "^1.0.5",
"@xen-orchestra/cron": "^1.0.6",
"lodash": "^4.16.2"
},
"devDependencies": {


@@ -21,7 +21,7 @@
"node": ">=6"
},
"dependencies": {
"@xen-orchestra/cron": "^1.0.5",
"@xen-orchestra/cron": "^1.0.6",
"d3-time-format": "^2.1.1",
"json5": "^2.0.1",
"lodash": "^4.17.4"


@@ -17,7 +17,7 @@
},
"version": "0.3.1",
"engines": {
"node": ">=6"
"node": ">=8.10"
},
"devDependencies": {
"@babel/cli": "^7.4.4",
@@ -30,7 +30,7 @@
"dependencies": {
"@xen-orchestra/log": "^0.2.0",
"lodash": "^4.17.11",
"node-openssl-cert": "^0.0.98",
"node-openssl-cert": "^0.0.101",
"promise-toolbox": "^0.14.0",
"uuid": "^3.3.2"
},


@@ -8,5 +8,8 @@
"directory": "packages/xo-server-test-plugin",
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"engines": {
"node": "*"
}
}


@@ -60,14 +60,16 @@ describe('server', () => {
autoConnect: false,
})
expect(
(await rejectionOf(
addServer({
host: 'xen1.example.org',
username: 'root',
password: 'password',
autoConnect: false,
})
)).message
(
await rejectionOf(
addServer({
host: 'xen1.example.org',
username: 'root',
password: 'password',
autoConnect: false,
})
)
).message
).toBe('unknown error from the peer')
})

View File

@@ -60,13 +60,15 @@ describe('cd', () => {
await getOrWaitCdVbdPosition(vmId)
expect(
(await rejectionOf(
xo.call('vm.insertCd', {
id: vmId,
cd_id: config.ubuntuIsoId,
force: false,
})
)).message
(
await rejectionOf(
xo.call('vm.insertCd', {
id: vmId,
cd_id: config.ubuntuIsoId,
force: false,
})
)
).message
).toBe('unknown error from the peer')
})

View File

@@ -126,12 +126,14 @@ describe('the VM life cyle', () => {
})
expect(
(await rejectionOf(
xo.call('vm.restart', {
id: hvmWithoutToolsId,
force: false,
})
)).message
(
await rejectionOf(
xo.call('vm.restart', {
id: hvmWithoutToolsId,
force: false,
})
)
).message
).toBe('VM lacks feature shutdown')
})
@@ -196,12 +198,14 @@ describe('the VM life cyle', () => {
})
expect(
(await rejectionOf(
xo.call('vm.stop', {
id: hvmWithoutToolsId,
force: false,
})
)).message
(
await rejectionOf(
xo.call('vm.stop', {
id: hvmWithoutToolsId,
force: false,
})
)
).message
).toBe('clean shutdown requires PV drivers')
})

View File

@@ -35,11 +35,6 @@ export const configurationSchema = {
// ===================================================================
const bind = (fn, thisArg) =>
function __bound__() {
return fn.apply(thisArg, arguments)
}
function nscaPacketBuilder({ host, iv, message, service, status, timestamp }) {
// Building NSCA packet
const SIZE = 720
@@ -82,8 +77,8 @@ const ENCODING = 'binary'
class XoServerNagios {
constructor({ xo }) {
this._sendPassiveCheck = bind(this._sendPassiveCheck, this)
this._set = bind(xo.defineProperty, xo)
this._sendPassiveCheck = this._sendPassiveCheck.bind(this)
this._set = xo.defineProperty.bind(xo)
this._unset = null
// Defined in configure().
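The diff above replaces a hand-rolled `bind` helper with the built-in `Function.prototype.bind`. A standalone sketch (hypothetical `Greeter` class) showing the two forms are equivalent:

```javascript
// the custom helper removed in the diff above
const bind = (fn, thisArg) =>
  function __bound__() {
    // `arguments` belongs to __bound__, not to the arrow function
    return fn.apply(thisArg, arguments)
  }

class Greeter {
  constructor(name) {
    this.name = name
    // both forms permanently fix `this`, regardless of how the
    // method is later called
    this.helloCustom = bind(this.hello, this)
    this.helloNative = this.hello.bind(this)
  }
  hello() {
    return `hello ${this.name}`
  }
}

const g = new Greeter('xo')
// even detached from the instance, `this` stays bound to `g`
const { helloCustom, helloNative } = g
console.log(helloCustom(), helloNative()) // hello xo hello xo
```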

View File

@@ -36,7 +36,7 @@
},
"dependencies": {
"@xen-orchestra/async-map": "^0.0.0",
"@xen-orchestra/cron": "^1.0.5",
"@xen-orchestra/cron": "^1.0.6",
"@xen-orchestra/log": "^0.2.0",
"handlebars": "^4.0.6",
"html-minifier": "^4.0.0",

View File

@@ -0,0 +1,3 @@
module.exports = require('../../@xen-orchestra/babel-config')(
require('./package.json')
)

View File

@@ -0,0 +1,10 @@
/examples/
example.js
example.js.map
*.example.js
*.example.js.map
/test/
/tests/
*.spec.js
*.spec.js.map

View File

@@ -0,0 +1,40 @@
# xo-server-web-hooks [![Build Status](https://travis-ci.org/vatesfr/xen-orchestra.png?branch=master)](https://travis-ci.org/vatesfr/xen-orchestra)

## Usage

Like all other xo-server plugins, it can be configured directly via
the web interface, see [the plugin documentation](https://xen-orchestra.com/docs/plugins.html).

## Development

```
# Install dependencies
> npm install

# Run the tests
> npm test

# Continuously compile
> npm run dev

# Continuously run the tests
> npm run dev-test

# Build for production (automatically called by npm install)
> npm run build
```

## Contributions

Contributions are _very_ welcomed, either on the documentation or on
the code.

You may:

- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
  you've encountered;
- fork and create a pull request.

## License

AGPL3 © [Vates SAS](https://vates.fr)

View File

@@ -0,0 +1,56 @@
{
"name": "xo-server-web-hooks",
"version": "0.1.0",
"license": "AGPL-3.0",
"description": "",
"keywords": [
"hooks",
"orchestra",
"plugin",
"web",
"xen",
"xen-orchestra",
"xo-server"
],
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-server-web-hooks",
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
"directory": "packages/xo-server-web-hooks",
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"author": {
"name": "Pierre Donias",
"email": "pierre.donias@gmail.com"
},
"preferGlobal": false,
"main": "dist/",
"bin": {},
"files": [
"dist/"
],
"engines": {
"node": ">=8.10"
},
"dependencies": {
"@xen-orchestra/log": "^0.2.0",
"http-request-plus": "^0.8.0",
"lodash": "^4.17.15"
},
"devDependencies": {
"@babel/cli": "^7.7.0",
"@babel/core": "^7.7.2",
"@babel/plugin-proposal-optional-chaining": "^7.6.0",
"@babel/preset-env": "^7.7.1",
"cross-env": "^6.0.3",
"rimraf": "^3.0.0"
},
"scripts": {
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"prebuild": "rimraf dist/",
"predev": "yarn run prebuild",
"prepublishOnly": "yarn run build"
},
"private": true
}

View File

@@ -0,0 +1,160 @@
import createLogger from '@xen-orchestra/log'
const log = createLogger('xo:web-hooks')
function handleHook(type, data) {
const hooks = this._hooks[data.method]?.[type]
if (hooks !== undefined) {
return Promise.all(
hooks.map(({ url }) =>
this._makeRequest(url, type, data).catch(error => {
log.error('web hook failed', {
error,
webHook: { ...data, url, type },
})
})
)
)
}
}
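`handleHook` above fans one API call out to every matching hook URL, catching each failure individually so one unreachable endpoint never fails the batch. A standalone sketch of that dispatch pattern, with hypothetical hook data and a fake request function:

```javascript
// run every hook registered for a method/type pair; log failures,
// never reject the whole batch
function dispatch(hooks, makeRequest, type, data) {
  const matching = (hooks[data.method] || {})[type]
  if (matching !== undefined) {
    return Promise.all(
      matching.map(({ url }) =>
        makeRequest(url, type, data).catch(error => {
          console.error('web hook failed', { error, url, type })
        })
      )
    )
  }
}

// hypothetical sample data
const hooks = {
  'vm.start': {
    pre: [{ url: 'https://example.net/ok' }, { url: 'https://example.net/ko' }],
  },
}
const fakeRequest = url =>
  url.endsWith('ko') ? Promise.reject(new Error('down')) : Promise.resolve()

dispatch(hooks, fakeRequest, 'pre', { method: 'vm.start' }).then(() =>
  console.log('all hooks settled')
)
```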
class XoServerHooks {
constructor({ xo }) {
this._xo = xo
// Defined in configure().
this._hooks = null
this._handlePreHook = handleHook.bind(this, 'pre')
this._handlePostHook = handleHook.bind(this, 'post')
}
_makeRequest(url, type, data) {
return this._xo.httpRequest(url, {
body: JSON.stringify({ ...data, type }),
headers: { 'Content-Type': 'application/json' },
method: 'POST',
onRequest: req => {
req.setTimeout(1e4)
req.on('timeout', req.abort)
},
})
}
configure(configuration) {
// this._hooks = {
// 'vm.start': {
// pre: [
// {
// method: 'vm.start',
// type: 'pre',
// url: 'https://my-domain.net/xo-hooks?action=vm.start'
// },
// ...
// ],
// post: [
// ...
// ]
// },
// ...
// }
const hooks = {}
for (const hook of configuration.hooks) {
if (hooks[hook.method] === undefined) {
hooks[hook.method] = {}
}
hook.type.split('/').forEach(type => {
if (hooks[hook.method][type] === undefined) {
hooks[hook.method][type] = []
}
hooks[hook.method][type].push(hook)
})
}
this._hooks = hooks
}
load() {
this._xo.on('xo:preCall', this._handlePreHook)
this._xo.on('xo:postCall', this._handlePostHook)
}
unload() {
this._xo.removeListener('xo:preCall', this._handlePreHook)
this._xo.removeListener('xo:postCall', this._handlePostHook)
}
async test({ url }) {
await this._makeRequest(url, 'pre', {
callId: '0',
userId: 'b4tm4n',
userName: 'bruce.wayne@waynecorp.com',
method: 'vm.start',
params: { id: '67aac198-0174-11ea-8d71-362b9e155667' },
timestamp: 0,
})
await this._makeRequest(url, 'post', {
callId: '0',
userId: 'b4tm4n',
userName: 'bruce.wayne@waynecorp.com',
method: 'vm.start',
result: '',
timestamp: 500,
duration: 500,
})
}
}
export const configurationSchema = ({ xo: { apiMethods } }) => ({
description: 'Bind XO API calls to HTTP requests.',
type: 'object',
properties: {
hooks: {
type: 'array',
title: 'Hooks',
items: {
type: 'object',
title: 'Hook',
properties: {
method: {
description: 'The method to be bound to',
enum: Object.keys(apiMethods).sort(),
title: 'Method',
type: 'string',
},
type: {
description:
'Right before the API call *or* right after the action has been completed',
enum: ['pre', 'post', 'pre/post'],
title: 'Type',
type: 'string',
},
url: {
description: 'The full URL you wish the request to be sent to',
// It would be more convenient to configure 1 URL for multiple
// triggers but the UI implementation is not ideal for such a deep
// configuration schema: https://i.imgur.com/CpvAwPM.png
title: 'URL',
type: 'string',
},
},
required: ['method', 'type', 'url'],
},
},
},
required: ['hooks'],
})
export const testSchema = {
type: 'object',
description:
'The test will simulate a hook on `vm.start` (both "pre" and "post" hooks)',
properties: {
url: {
title: 'URL',
type: 'string',
description: 'The URL the test request will be sent to',
},
},
}
export default opts => new XoServerHooks(opts)
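The `configure()` method above turns the flat hook list from the configuration into the `method → type → [hook]` map sketched in its comment, with `'pre/post'` registering a hook for both phases. The grouping step in isolation, exercised with hypothetical sample data:

```javascript
function groupHooks(configuration) {
  const hooks = {}
  for (const hook of configuration.hooks) {
    if (hooks[hook.method] === undefined) {
      hooks[hook.method] = {}
    }
    // 'pre/post' registers the same hook entry under both types
    hook.type.split('/').forEach(type => {
      if (hooks[hook.method][type] === undefined) {
        hooks[hook.method][type] = []
      }
      hooks[hook.method][type].push(hook)
    })
  }
  return hooks
}

const grouped = groupHooks({
  hooks: [
    { method: 'vm.start', type: 'pre/post', url: 'https://example.net/a' },
    { method: 'vm.start', type: 'post', url: 'https://example.net/b' },
  ],
})
console.log(grouped['vm.start'].pre.length, grouped['vm.start'].post.length) // 1 2
```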

View File

@@ -50,15 +50,28 @@ maxTokenValidity = '0.5 year'
# https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/Set-Cookie#Session_cookie
#sessionCookieValidity = '10 hours'
# This is the page where unauthenticated users will be redirected to.
#
# For instance, it can be changed to `/signin/saml` if that's the provider that
# should be used by default.
defaultSignInPage = '/signin'
[backup]
# Delay for which backups listing on a remote is cached
listingDebounce = '1 min'
# This is a work-around.
#
# See https://github.com/vatesfr/xen-orchestra/pull/4674
maxMergedDeltasPerRun = 2
# Duration for which we can wait for the backup size before returning
#
# It should be short to avoid blocking the display of the available backups.
vmBackupSizeTimeout = '2 seconds'
poolMetadataTimeout = '10 minutes'
# Helmet handles HTTP security via headers
#
# https://helmetjs.github.io/docs/
@@ -94,3 +107,6 @@ timeout = 600e3
# see https:#github.com/vatesfr/xen-orchestra/issues/3419
# useSudo = false
[xapiOptions]
maxUncoalescedVdis = 1

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "xo-server",
"version": "5.51.0",
"version": "5.53.0",
"license": "AGPL-3.0",
"description": "Server part of Xen-Orchestra",
"keywords": [
@@ -30,15 +30,15 @@
"bin": "bin"
},
"engines": {
"node": ">=8"
"node": ">=8.10"
},
"dependencies": {
"@iarna/toml": "^2.2.1",
"@xen-orchestra/async-map": "^0.0.0",
"@xen-orchestra/cron": "^1.0.5",
"@xen-orchestra/cron": "^1.0.6",
"@xen-orchestra/defined": "^0.0.0",
"@xen-orchestra/emit-async": "^0.0.0",
"@xen-orchestra/fs": "^0.10.1",
"@xen-orchestra/fs": "^0.10.2",
"@xen-orchestra/log": "^0.2.0",
"@xen-orchestra/mixin": "^0.0.0",
"ajv": "^6.1.1",
@@ -60,7 +60,7 @@
"deptree": "^1.0.0",
"event-to-promise": "^0.8.0",
"exec-promise": "^0.7.0",
"execa": "^2.0.5",
"execa": "^3.2.0",
"express": "^4.16.2",
"express-session": "^1.15.6",
"fatfs": "^0.10.4",
@@ -115,21 +115,22 @@
"split-lines": "^2.0.0",
"stack-chain": "^2.0.0",
"stoppable": "^1.0.5",
"strict-timeout": "^1.0.0",
"struct-fu": "^1.2.0",
"tar-stream": "^2.0.1",
"through2": "^3.0.0",
"tmp": "^0.1.0",
"uuid": "^3.0.1",
"value-matcher": "^0.2.0",
"vhd-lib": "^0.7.0",
"vhd-lib": "^0.7.2",
"ws": "^7.1.2",
"xen-api": "^0.27.2",
"xen-api": "^0.27.3",
"xml2js": "^0.4.19",
"xo-acl-resolver": "^0.4.1",
"xo-collection": "^0.4.1",
"xo-common": "^0.2.0",
"xo-remote-parser": "^0.5.0",
"xo-vmdk-to-vhd": "^0.1.7",
"xo-vmdk-to-vhd": "^0.1.8",
"yazl": "^2.4.3"
},
"devDependencies": {

View File

@@ -1,5 +1,4 @@
import fromCallback from 'promise-toolbox/fromCallback'
import { execFile } from 'child_process'
export const read = key =>
fromCallback(cb => execFile('xenstore-read', [key], cb))
export const read = key => fromCallback(execFile, 'xenstore-read', [key])
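Several diffs in this release swap `fromCallback(cb => fn(args, cb))` for the shorter `fromCallback(fn, ...args)`, which calls `fn(...args, cb)` directly. A minimal standalone equivalent of that helper (a sketch, not the promise-toolbox implementation) showing both calling styles:

```javascript
// minimal fromCallback: appends a node-style callback to the arguments
const fromCallback = (fn, ...args) =>
  new Promise((resolve, reject) =>
    fn(...args, (error, result) =>
      error != null ? reject(error) : resolve(result)
    )
  )

// a node-style callback function for demonstration
const double = (n, cb) => process.nextTick(() => cb(null, n * 2))

// old style: wrap the call manually
fromCallback(cb => double(21, cb)).then(console.log) // 42
// new style: pass the function and its arguments directly
fromCallback(double, 21).then(console.log) // 42
```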

View File

@@ -1,36 +0,0 @@
import iteratee from 'lodash/iteratee'
import pDelay from 'promise-toolbox/delay'
function stopRetry(error) {
this.error = error
// eslint-disable-next-line no-throw-literal
throw this
}
// do not retry on ReferenceError and TypeError which are programmer errors
const defaultMatcher = error =>
!(error instanceof ReferenceError || error instanceof TypeError)
export default async function pRetry(
fn,
{ delay = 1e3, tries = 10, when } = {}
) {
const container = { error: undefined }
const stop = stopRetry.bind(container)
when = when === undefined ? defaultMatcher : iteratee(when)
while (true) {
try {
return await fn(stop)
} catch (error) {
if (error === container) {
throw container.error
}
if (--tries === 0 || !when(error)) {
throw error
}
}
await pDelay(delay)
}
}
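The file removed above is now imported from promise-toolbox (see the `xapi` diff further down). Its semantics — retry on failure, with a `stop` callback to bail out early — can be sketched standalone (simplified: no `when` predicate support):

```javascript
// simplified copy of the removed helper
async function pRetry(fn, { delay = 1e3, tries = 10 } = {}) {
  const container = { error: undefined }
  const stop = error => {
    container.error = error
    throw container // sentinel: distinguishes stop() from a normal failure
  }
  while (true) {
    try {
      return await fn(stop)
    } catch (error) {
      if (error === container) {
        throw container.error
      }
      if (--tries === 0) {
        throw error
      }
    }
    await new Promise(resolve => setTimeout(resolve, delay))
  }
}

let attempts = 0
pRetry(
  () => {
    if (++attempts < 3) throw new Error('transient')
    return 'ok'
  },
  { delay: 0 }
).then(result => console.log(result, attempts)) // ok 3
```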

View File

@@ -1,95 +0,0 @@
/* eslint-env jest */
import { forOwn } from 'lodash'
import pRetry from './_pRetry'
describe('pRetry()', () => {
it('retries until the function succeeds', async () => {
let i = 0
expect(
await pRetry(
() => {
if (++i < 3) {
throw new Error()
}
return 'foo'
},
{ delay: 0 }
)
).toBe('foo')
expect(i).toBe(3)
})
it('returns the last error', async () => {
let tries = 5
const e = new Error()
await expect(
pRetry(
() => {
throw --tries > 0 ? new Error() : e
},
{ delay: 0, tries }
)
).rejects.toBe(e)
})
;[ReferenceError, TypeError].forEach(ErrorType => {
it(`does not retry if a ${ErrorType.name} is thrown`, async () => {
let i = 0
await expect(
pRetry(() => {
++i
throw new ErrorType()
})
).rejects.toBeInstanceOf(ErrorType)
expect(i).toBe(1)
})
})
it('does not retry if `stop` callback is called', async () => {
const e = new Error()
let i = 0
await expect(
pRetry(stop => {
++i
stop(e)
})
).rejects.toBe(e)
expect(i).toBe(1)
})
describe('`when` option', () => {
forOwn(
{
'with function predicate': _ => _.message === 'foo',
'with object predicate': { message: 'foo' },
},
(when, title) =>
describe(title, () => {
it('retries when error matches', async () => {
let i = 0
await pRetry(
() => {
++i
throw new Error('foo')
},
{ when, tries: 2 }
).catch(Function.prototype)
expect(i).toBe(2)
})
it('does not retry when error does not match', async () => {
let i = 0
await pRetry(
() => {
++i
throw new Error('bar')
},
{ when, tries: 2 }
).catch(Function.prototype)
expect(i).toBe(1)
})
})
)
})
})

View File

@@ -0,0 +1,16 @@
// waits for all promises to be settled
//
// rejects with the first rejection if any
export const waitAll = async iterable => {
let reason
const onReject = r => {
if (reason === undefined) {
reason = r
}
}
await Promise.all(Array.from(iterable, promise => promise.catch(onReject)))
if (reason !== undefined) {
throw reason
}
}
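A usage sketch of the `waitAll` helper above: unlike `Promise.all`, it waits for every promise to settle, then rejects with the first rejection if there was one.

```javascript
// copy of the helper above, so the sketch is self-contained
const waitAll = async iterable => {
  let reason
  const onReject = r => {
    if (reason === undefined) {
      reason = r
    }
  }
  await Promise.all(Array.from(iterable, promise => promise.catch(onReject)))
  if (reason !== undefined) {
    throw reason
  }
}

const delayed = (ms, value, fail = false) =>
  new Promise((resolve, reject) =>
    setTimeout(() => (fail ? reject(value) : resolve(value)), ms)
  )

waitAll([
  delayed(20, 'slow failure', true), // settles last
  delayed(10, 'fast failure', true), // first rejection → reported reason
  delayed(5, 'success'),
]).catch(reason => console.log(reason)) // fast failure
```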

View File

@@ -168,7 +168,7 @@ runJob.params = {
async function handleGetAllLogs(req, res) {
const logs = await this.getBackupNgLogs()
res.set('Content-Type', 'application/json')
return fromCallback(cb => pipeline(createNdJsonStream(logs), res, cb))
return fromCallback(pipeline, createNdJsonStream(logs), res)
}
export function getAllLogs({ ndjson = false }) {
@@ -225,13 +225,14 @@ deleteVmBackup.params = {
},
}
export function listVmBackups({ remotes }) {
return this.listVmBackupsNg(remotes)
export function listVmBackups({ remotes, _forceRefresh }) {
return this.listVmBackupsNg(remotes, _forceRefresh)
}
listVmBackups.permission = 'admin'
listVmBackups.params = {
_forceRefresh: { type: 'boolean', optional: true },
remotes: {
type: 'array',
items: {

View File

@@ -1,9 +1,12 @@
import createLogger from '@xen-orchestra/log'
import pump from 'pump'
import convertVmdkToVhdStream from 'xo-vmdk-to-vhd'
import { format, JsonRpcError } from 'json-rpc-peer'
import { noSuchObject } from 'xo-common/api-errors'
import { peekFooterFromVhdStream } from 'vhd-lib'
import { parseSize } from '../utils'
import { VDI_FORMAT_VHD } from '../xapi'
const log = createLogger('xo:disk')
@@ -165,3 +168,97 @@ resize.params = {
resize.resolve = {
vdi: ['id', ['VDI', 'VDI-snapshot'], 'administrate'],
}
async function handleImport(
req,
res,
{ type, name, description, vmdkData, srId, xapi }
) {
req.setTimeout(43200000) // 12 hours
try {
req.length = req.headers['content-length']
let vhdStream, size
if (type === 'vmdk') {
vhdStream = await convertVmdkToVhdStream(
req,
vmdkData.grainLogicalAddressList,
vmdkData.grainFileOffsetList
)
size = vmdkData.capacity
} else if (type === 'vhd') {
vhdStream = req
const footer = await peekFooterFromVhdStream(req)
size = footer.currentSize
} else {
throw new Error(
`Unknown disk type, expected "vhd" or "vmdk", got ${type}`
)
}
const vdi = await xapi.createVdi({
name_description: description,
name_label: name,
size,
sr: srId,
})
try {
await xapi.importVdiContent(vdi, vhdStream, VDI_FORMAT_VHD)
res.end(format.response(0, vdi.$id))
} catch (e) {
await xapi.deleteVdi(vdi)
throw e
}
} catch (e) {
res.writeHead(500)
res.end(format.error(0, new JsonRpcError(e.message)))
}
}
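`handleImport` above uses a create/import/cleanup pattern: if importing the content fails, the freshly created VDI is deleted before the error propagates. The pattern in isolation, exercised with a fake xapi object (hypothetical names):

```javascript
async function importWithCleanup(xapi, stream, props) {
  const vdi = await xapi.createVdi(props)
  try {
    await xapi.importVdiContent(vdi, stream)
    return vdi
  } catch (error) {
    // do not leave an empty VDI behind on failure
    await xapi.deleteVdi(vdi)
    throw error
  }
}

// fake xapi recording the call order, to exercise the failure path
const calls = []
const fakeXapi = {
  createVdi: async props => (calls.push('create'), { id: 'vdi-1', ...props }),
  importVdiContent: async () => {
    calls.push('import')
    throw new Error('boom')
  },
  deleteVdi: async () => calls.push('delete'),
}

importWithCleanup(fakeXapi, null, { name_label: 'test' }).catch(() =>
  console.log(calls.join(',')) // create,import,delete
)
```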
// type is 'vhd' or 'vmdk'
async function importDisk({ sr, type, name, description, vmdkData }) {
return {
$sendTo: await this.registerHttpRequest(handleImport, {
description,
name,
srId: sr._xapiId,
type,
vmdkData,
xapi: this.getXapi(sr),
}),
}
}
export { importDisk as import }
importDisk.params = {
description: { type: 'string', optional: true },
name: { type: 'string' },
sr: { type: 'string' },
type: { type: 'string' },
vmdkData: {
type: 'object',
optional: true,
properties: {
capacity: { type: 'integer' },
grainLogicalAddressList: {
description:
'virtual address of the blocks on the disk (LBA), in order encountered in the VMDK',
type: 'array',
items: {
type: 'integer',
},
},
grainFileOffsetList: {
description:
'offset of the grains in the VMDK file, in order encountered in the VMDK',
optional: true,
type: 'array',
items: {
type: 'integer',
},
},
},
},
}
importDisk.resolve = {
sr: ['sr', 'SR', 'administrate'],
}

View File

@@ -1,10 +1,12 @@
// TODO: Prevent token connections from creating tokens.
// TODO: Token permission.
export async function create({ expiresIn }) {
return (await this.createAuthenticationToken({
expiresIn,
userId: this.session.get('user_id'),
})).id
return (
await this.createAuthenticationToken({
expiresIn,
userId: this.session.get('user_id'),
})
).id
}
create.description = 'create a new authentication token'

View File

@@ -38,14 +38,15 @@ exportConfig.permission = 'admin'
function handleGetAllObjects(req, res, { filter, limit }) {
const objects = this.getObjects({ filter, limit })
res.set('Content-Type', 'application/json')
return fromCallback(cb => pipeline(createNdJsonStream(objects), res, cb))
return fromCallback(pipeline, createNdJsonStream(objects), res)
}
export function getAllObjects({ filter, limit, ndjson = false }) {
return ndjson
? this.registerHttpRequest(handleGetAllObjects, { filter, limit }).then(
$getFrom => ({ $getFrom })
)
? this.registerHttpRequest(handleGetAllObjects, {
filter,
limit,
}).then($getFrom => ({ $getFrom }))
: this.getObjects({ filter, limit })
}

View File

@@ -906,10 +906,9 @@ async function createNewDisk(xapi, sr, vm, diskSize) {
async function mountNewDisk(localEndpoint, hostname, newDeviceFiledeviceFile) {
const brickRootCmd =
'bash -c \'mkdir -p /bricks; for TESTVAR in {1..9}; do TESTDIR="/bricks/xosan$TESTVAR" ;if mkdir $TESTDIR; then echo $TESTDIR; exit 0; fi ; done ; exit 1\''
const newBrickRoot = (await remoteSsh(
localEndpoint,
brickRootCmd
)).stdout.trim()
const newBrickRoot = (
await remoteSsh(localEndpoint, brickRootCmd)
).stdout.trim()
const brickName = `${hostname}:${newBrickRoot}/xosandir`
const mountBrickCmd = `mkfs.xfs -i size=512 ${newDeviceFiledeviceFile}; mkdir -p ${newBrickRoot}; echo "${newDeviceFiledeviceFile} ${newBrickRoot} xfs defaults 0 0" >> /etc/fstab; mount -a`
await remoteSsh(localEndpoint, mountBrickCmd)
@@ -961,10 +960,12 @@ async function replaceBrickOnSameVM(
.split('/')
.slice(0, 3)
.join('/')
const previousBrickDevice = (await remoteSsh(
localEndpoint,
`grep " ${previousBrickRoot} " /proc/mounts | cut -d ' ' -f 1 | sed 's_/dev/__'`
)).stdout.trim()
const previousBrickDevice = (
await remoteSsh(
localEndpoint,
`grep " ${previousBrickRoot} " /proc/mounts | cut -d ' ' -f 1 | sed 's_/dev/__'`
)
).stdout.trim()
CURRENT_POOL_OPERATIONS[poolId] = { ...OPERATION_OBJECT, state: 1 }
const brickName = await mountNewDisk(
localEndpoint,
@@ -1180,7 +1181,10 @@ async function _importGlusterVM(xapi, template, lvmsrId) {
}
function _findAFreeIPAddress(nodes, networkPrefix) {
return _findIPAddressOutsideList(map(nodes, n => n.vm.ip), networkPrefix)
return _findIPAddressOutsideList(
map(nodes, n => n.vm.ip),
networkPrefix
)
}
function _findIPAddressOutsideList(

View File

@@ -1,7 +1,6 @@
import appConf from 'app-conf'
import assert from 'assert'
import authenticator from 'otplib/authenticator'
import bind from 'lodash/bind'
import blocked from 'blocked'
import compression from 'compression'
import createExpress from 'express'
@@ -16,10 +15,12 @@ import serveStatic from 'serve-static'
import stoppable from 'stoppable'
import WebServer from 'http-server-plus'
import WebSocket from 'ws'
import { forOwn, map } from 'lodash'
import { URL } from 'url'
import { compile as compilePug } from 'pug'
import { createServer as createProxyServer } from 'http-proxy'
import { fromEvent } from 'promise-toolbox'
import { fromCallback, fromEvent } from 'promise-toolbox'
import { ifDef } from '@xen-orchestra/defined'
import { join as joinPath } from 'path'
@@ -27,9 +28,9 @@ import JsonRpcPeer from 'json-rpc-peer'
import { invalidCredentials } from 'xo-common/api-errors'
import { ensureDir, readdir, readFile } from 'fs-extra'
import ensureArray from './_ensureArray'
import parseDuration from './_parseDuration'
import Xo from './xo'
import { forEach, mapToArray, pFromCallback } from './utils'
import bodyParser from 'body-parser'
import connectFlash from 'connect-flash'
@@ -72,7 +73,7 @@ async function loadConfiguration() {
log.info('Configuration loaded.')
// Print a message if deprecated entries are specified.
forEach(DEPRECATED_ENTRIES, entry => {
DEPRECATED_ENTRIES.forEach(entry => {
if (has(config, entry)) {
log.warn(`${entry} configuration is deprecated.`)
}
@@ -236,7 +237,7 @@ async function setUpPassport(express, xo, { authentication: authCfg }) {
next()
} else {
req.flash('return-url', url)
return res.redirect('/signin')
return res.redirect(authCfg.defaultSignInPage)
}
})
@@ -266,16 +267,15 @@ async function registerPlugin(pluginPath, pluginName) {
})()
// Supports both “normal” CommonJS and Babel's ES2015 modules.
const {
let {
default: factory = plugin,
configurationSchema,
configurationPresets,
testSchema,
} = plugin
let instance
// The default export can be either a factory or directly a plugin
// instance.
const instance =
const handleFactory = factory =>
typeof factory === 'function'
? factory({
xo: this,
@@ -285,6 +285,17 @@ async function registerPlugin(pluginPath, pluginName) {
},
})
: factory
;[
instance,
configurationSchema,
configurationPresets,
testSchema,
] = await Promise.all([
handleFactory(factory),
handleFactory(configurationSchema),
handleFactory(configurationPresets),
handleFactory(testSchema),
])
await this.registerPlugin(
pluginName,
@@ -325,7 +336,7 @@ async function registerPluginsInPath(path) {
})
await Promise.all(
mapToArray(files, name => {
files.map(name => {
if (name.startsWith(PLUGIN_PREFIX)) {
return registerPluginWrapper.call(
this,
@@ -339,9 +350,9 @@ async function registerPluginsInPath(path) {
async function registerPlugins(xo) {
await Promise.all(
mapToArray(
[`${__dirname}/../node_modules/`, '/usr/local/lib/node_modules/'],
xo::registerPluginsInPath
[`${__dirname}/../node_modules/`, '/usr/local/lib/node_modules/'].map(
registerPluginsInPath,
xo
)
)
}
@@ -395,7 +406,7 @@ async function createWebServer({ listen, listenOptions }) {
const webServer = stoppable(new WebServer())
await Promise.all(
mapToArray(listen, opts =>
map(listen, opts =>
makeWebServerListen(webServer, { ...listenOptions, ...opts })
)
)
@@ -413,7 +424,21 @@ const setUpProxies = (express, opts, xo) => {
const proxy = createProxyServer({
changeOrigin: true,
ignorePath: true,
}).on('error', error => console.error(error))
}).on('error', (error, req, res) => {
// `res` can be either a `ServerResponse` or a `Socket` (which does not have
// `writeHead`)
if (!res.headersSent && typeof res.writeHead === 'function') {
res.writeHead(500, { 'content-type': 'text/plain' })
res.write('There was a problem proxying this request.')
}
res.end()
const { method, url } = req
log.error('failed to proxy request', {
error,
req: { method, url },
})
})
// TODO: sort proxies by descending prefix length.
@@ -426,6 +451,8 @@ const setUpProxies = (express, opts, xo) => {
const target = opts[prefix]
proxy.web(req, res, {
agent:
new URL(target).hostname === 'localhost' ? undefined : xo.httpAgent,
target: target + url.slice(prefix.length),
})
@@ -440,7 +467,7 @@ const setUpProxies = (express, opts, xo) => {
const webSocketServer = new WebSocket.Server({
noServer: true,
})
xo.on('stop', () => pFromCallback(cb => webSocketServer.close(cb)))
xo.on('stop', () => fromCallback.call(webSocketServer, 'close'))
express.on('upgrade', (req, socket, head) => {
const { url } = req
@@ -450,6 +477,8 @@ const setUpProxies = (express, opts, xo) => {
const target = opts[prefix]
proxy.ws(req, socket, head, {
agent:
new URL(target).hostname === 'localhost' ? undefined : xo.httpAgent,
target: target + url.slice(prefix.length),
})
@@ -462,12 +491,8 @@ const setUpProxies = (express, opts, xo) => {
// ===================================================================
const setUpStaticFiles = (express, opts) => {
forEach(opts, (paths, url) => {
if (!Array.isArray(paths)) {
paths = [paths]
}
forEach(paths, path => {
forOwn(opts, (paths, url) => {
ensureArray(paths).forEach(path => {
log.info(`Setting up ${url}${path}`)
express.use(url, serveStatic(path))
@@ -483,7 +508,7 @@ const setUpApi = (webServer, xo, config) => {
noServer: true,
})
xo.on('stop', () => pFromCallback(cb => webSocketServer.close(cb)))
xo.on('stop', () => fromCallback.call(webSocketServer, 'close'))
const onConnection = (socket, upgradeReq) => {
const { remoteAddress } = upgradeReq.socket
@@ -502,7 +527,7 @@ const setUpApi = (webServer, xo, config) => {
return xo.callApiMethod(connection, message.method, message.params)
}
})
connection.notify = bind(jsonRpc.notify, jsonRpc)
connection.notify = jsonRpc.notify.bind(jsonRpc)
// Close the XO connection with this WebSocket.
socket.once('close', () => {
@@ -515,7 +540,8 @@ const setUpApi = (webServer, xo, config) => {
socket.on('message', message => {
const expiration = connection.get('expiration', undefined)
if (expiration !== undefined && expiration < Date.now()) {
return void connection.close()
connection.close()
return
}
jsonRpc.write(message)
@@ -551,7 +577,7 @@ const setUpConsoleProxy = (webServer, xo) => {
const webSocketServer = new WebSocket.Server({
noServer: true,
})
xo.on('stop', () => pFromCallback(cb => webSocketServer.close(cb)))
xo.on('stop', () => fromCallback.call(webSocketServer, 'close'))
webServer.on('upgrade', async (req, socket, head) => {
const matches = CONSOLE_PROXY_PATH_RE.exec(req.url)
@@ -644,7 +670,7 @@ export default async function main(args) {
const xo = new Xo(config)
// Register web server close on XO stop.
xo.on('stop', () => pFromCallback(cb => webServer.stop(cb)))
xo.on('stop', () => fromCallback.call(webServer, 'stop'))
// Connects to all registered servers.
await xo.start()
@@ -657,7 +683,7 @@ export default async function main(args) {
if (config.http.redirectToHttps) {
let port
forEach(config.http.listen, listen => {
forOwn(config.http.listen, listen => {
if (listen.port && (listen.cert || listen.certificate)) {
port = listen.port
return false
@@ -681,7 +707,7 @@ export default async function main(args) {
setUpConsoleProxy(webServer, xo)
// Must be set up before the API.
express.use(bind(xo._handleHttpRequest, xo))
express.use(xo._handleHttpRequest.bind(xo))
// Everything above is not protected by the sign in, allowing xo-cli
// to work properly.
@@ -709,7 +735,7 @@ export default async function main(args) {
//
// TODO: implements a timeout? (or maybe it is the services launcher
// responsibility?)
forEach(['SIGINT', 'SIGTERM'], signal => {
;['SIGINT', 'SIGTERM'].forEach(signal => {
let alreadyCalled = false
process.on(signal, () => {

View File

@@ -19,7 +19,10 @@ describe('mergeObjects', function() {
{ b: 2, c: 3 },
{ d: 4, e: 5, f: 6 },
],
'One set': [{ a: 1, b: 2 }, { a: 1, b: 2 }],
'One set': [
{ a: 1, b: 2 },
{ a: 1, b: 2 },
],
'Empty set': [{ a: 1 }, { a: 1 }, {}],
'All empty': [{}, {}, {}],
'No set': [{}],
@@ -44,28 +47,52 @@ describe('crossProduct', function() {
{
'2 sets of 2 items to multiply': [
[10, 14, 15, 21],
[[2, 3], [5, 7]],
[
[2, 3],
[5, 7],
],
multiplyTest,
],
'3 sets of 2 items to multiply': [
[110, 130, 154, 182, 165, 195, 231, 273],
[[2, 3], [5, 7], [11, 13]],
[
[2, 3],
[5, 7],
[11, 13],
],
multiplyTest,
],
'2 sets of 3 items to multiply': [
[14, 22, 26, 21, 33, 39, 35, 55, 65],
[[2, 3, 5], [7, 11, 13]],
[
[2, 3, 5],
[7, 11, 13],
],
multiplyTest,
],
'2 sets of 2 items to add': [[7, 9, 8, 10], [[2, 3], [5, 7]], addTest],
'2 sets of 2 items to add': [
[7, 9, 8, 10],
[
[2, 3],
[5, 7],
],
addTest,
],
'3 sets of 2 items to add': [
[18, 20, 20, 22, 19, 21, 21, 23],
[[2, 3], [5, 7], [11, 13]],
[
[2, 3],
[5, 7],
[11, 13],
],
addTest,
],
'2 sets of 3 items to add': [
[9, 13, 15, 10, 14, 16, 12, 16, 18],
[[2, 3, 5], [7, 11, 13]],
[
[2, 3, 5],
[7, 11, 13],
],
addTest,
],
},

View File

@@ -9,7 +9,7 @@ describe('streamToExistingBuffer()', () => {
it('read the content of a stream in a buffer', async () => {
const stream = createReadStream(__filename)
const expected = await fromCallback(cb => readFile(__filename, 'utf-8', cb))
const expected = await fromCallback(readFile, __filename, 'utf-8')
const buf = Buffer.allocUnsafe(expected.length + 1)
buf[0] = 'A'.charCodeAt()

View File

@@ -364,7 +364,7 @@ export const throwFn = error => () => {
// -------------------------------------------------------------------
export const tmpDir = () => fromCallback(cb => tmp.dir(cb))
export const tmpDir = () => fromCallback(tmp.dir)
// -------------------------------------------------------------------

View File

@@ -16,6 +16,7 @@ import {
fromEvent,
ignoreErrors,
pCatch,
pRetry,
} from 'promise-toolbox'
import { PassThrough } from 'stream'
import { forbiddenOperation } from 'xo-common/api-errors'
@@ -38,7 +39,6 @@ import { satisfies as versionSatisfies } from 'semver'
import createSizeStream from '../size-stream'
import ensureArray from '../_ensureArray'
import fatfsBuffer, { init as fatfsBufferInit } from '../fatfs-buffer'
import pRetry from '../_pRetry'
import {
camelToSnakeCase,
forEach,
@@ -94,10 +94,11 @@ export const IPV6_CONFIG_MODES = ['None', 'DHCP', 'Static', 'Autoconf']
@mixin(mapToArray(mixins))
export default class Xapi extends XapiBase {
constructor({ guessVhdSizeOnImport, ...opts }) {
constructor({ guessVhdSizeOnImport, maxUncoalescedVdis, ...opts }) {
super(opts)
this._guessVhdSizeOnImport = guessVhdSizeOnImport
this._maxUncoalescedVdis = maxUncoalescedVdis
// Patch getObject to resolve _xapiId property.
this.getObject = (getObject => (...args) => {
@@ -724,7 +725,7 @@ export default class Xapi extends XapiBase {
return promise
}
_assertHealthyVdiChain(vdi, cache) {
_assertHealthyVdiChain(vdi, cache, tolerance) {
if (vdi == null) {
return
}
@@ -754,7 +755,8 @@ export default class Xapi extends XapiBase {
const children = childrenMap[vdi.uuid]
if (
children.length === 1 &&
!children[0].managed // some SRs do not coalesce the leaf
!children[0].managed && // some SRs do not coalesce the leaf
tolerance-- <= 0
) {
throw new Error('unhealthy VDI chain')
}
@@ -762,15 +764,16 @@ export default class Xapi extends XapiBase {
this._assertHealthyVdiChain(
this.getObjectByUuid(vdi.sm_config['vhd-parent'], null),
cache
cache,
tolerance
)
}
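The change above threads a `tolerance` counter (seeded from the new `maxUncoalescedVdis` setting) through the chain walk, so up to that many uncoalesced leaves are allowed before the chain is flagged. The tolerance check in isolation, over a hypothetical flat chain model:

```javascript
// hypothetical flat model of a VHD chain: each entry records how many
// children the VDI has and whether its single child is managed
function assertHealthyChain(chain, tolerance) {
  for (const vdi of chain) {
    // an unmanaged single child means the leaf has not been coalesced;
    // tolerate `tolerance` of them before flagging the chain
    if (vdi.children === 1 && !vdi.childManaged && tolerance-- <= 0) {
      throw new Error('unhealthy VDI chain')
    }
  }
}

const chain = [
  { children: 1, childManaged: false }, // uncoalesced leaf #1
  { children: 1, childManaged: false }, // uncoalesced leaf #2
]

let error
try {
  assertHealthyChain(chain, 1) // maxUncoalescedVdis = 1: second leaf trips
} catch (e) {
  error = e
}
console.log(error !== undefined) // true
assertHealthyChain(chain, 2) // tolerance 2: passes
```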
_assertHealthyVdiChains(vm) {
_assertHealthyVdiChains(vm, tolerance = this._maxUncoalescedVdis) {
const cache = { __proto__: null }
forEach(vm.$VBDs, ({ $VDI }) => {
try {
this._assertHealthyVdiChain($VDI, cache)
this._assertHealthyVdiChain($VDI, cache, tolerance)
} catch (error) {
error.VDI = $VDI
error.VM = vm
@@ -1378,7 +1381,11 @@ export default class Xapi extends XapiBase {
}
const table = tables[entry.name]
const vhdStream = await vmdkToVhd(stream, table)
const vhdStream = await vmdkToVhd(
stream,
table.grainLogicalAddressList,
table.grainFileOffsetList
)
await this._importVdiContent(vdi, vhdStream, VDI_FORMAT_VHD)
// See: https://github.com/mafintosh/tar-stream#extracting

View File

@@ -250,19 +250,17 @@ export default class Api {
const userName = context.user ? context.user.email : '(unknown user)'
const data = {
callId: Math.random()
.toString(36)
.slice(2),
userId,
userName,
method: name,
params: sensitiveValues.replace(params, '* obfuscated *'),
timestamp: Date.now(),
}
const callId = Math.random()
.toString(36)
.slice(2)
xo.emit('xo:preCall', {
...data,
callId,
})
xo.emit('xo:preCall', data)
try {
await checkPermission.call(context, method)
@@ -305,20 +303,24 @@ export default class Api {
)}] ==> ${kindOf(result)}`
)
const now = Date.now()
xo.emit('xo:postCall', {
callId,
method: name,
...data,
duration: now - data.timestamp,
result,
timestamp: now,
})
return result
} catch (error) {
const serializedError = serializeError(error)
const now = Date.now()
xo.emit('xo:postCall', {
callId,
...data,
duration: now - data.timestamp,
error: serializedError,
method: name,
timestamp: now,
})
const message = `${userName} | ${name}(${JSON.stringify(


@@ -6,6 +6,7 @@ import asyncMap from '@xen-orchestra/async-map'
import createLogger from '@xen-orchestra/log'
import defer from 'golike-defer'
import limitConcurrency from 'limit-concurrency-decorator'
import safeTimeout from 'strict-timeout/safe'
import { type Pattern, createPredicate } from 'value-matcher'
import { type Readable, PassThrough } from 'stream'
import { AssertionError } from 'assert'
@@ -44,7 +45,8 @@ import { type Schedule } from '../scheduling'
import createSizeStream from '../../size-stream'
import parseDuration from '../../_parseDuration'
import { debounceWithKey } from '../../_pDebounceWithKey'
import { debounceWithKey, REMOVE_CACHE_ENTRY } from '../../_pDebounceWithKey'
import { waitAll } from '../../_waitAll'
import {
type DeltaVmExport,
type DeltaVmImport,
@@ -77,6 +79,7 @@ type Settings = {|
exportRetention?: number,
offlineBackup?: boolean,
offlineSnapshot?: boolean,
reportRecipients?: Array<string>,
reportWhen?: ReportWhen,
snapshotRetention?: number,
timeout?: number,
@@ -150,6 +153,7 @@ const defaultSettings: Settings = {
fullInterval: 0,
offlineBackup: false,
offlineSnapshot: false,
reportRecipients: undefined,
reportWhen: 'failure',
snapshotRetention: 0,
timeout: 0,
@@ -317,21 +321,6 @@ const parseVmBackupId = (id: string) => {
}
}
// similar to Promise.all() but do not gather results
async function waitAll<T>(
promises: Promise<T>[],
onRejection: Function
): Promise<void> {
promises = promises.map(promise => {
promise = promise.catch(onRejection)
promise.catch(noop) // prevent unhandled rejection warning
return promise
})
for (const promise of promises) {
await promise
}
}
// write a stream to a file using a temporary file
//
// TODO: merge into RemoteHandlerAbstract
@@ -639,7 +628,7 @@ export default class BackupNg {
if (timeout !== 0) {
const source = CancelToken.source([cancelToken])
cancelToken = source.token
setTimeout(source.cancel, timeout)
safeTimeout(source.cancel, timeout)
}
let handleVm = async vm => {
@@ -717,6 +706,10 @@ export default class BackupNg {
})
}
await asyncMap(vms, handleVm)
remotes.forEach(({ id }) =>
this._listVmBackupsOnRemote(REMOVE_CACHE_ENTRY, id)
)
}
app.registerJobExecutor('backup', executor)
})
@@ -778,6 +771,8 @@ export default class BackupNg {
} else {
throw new Error(`no deleter for backup mode ${metadata.mode}`)
}
this._listVmBackupsOnRemote(REMOVE_CACHE_ENTRY, remoteId)
}
// Task logs emitted in a restore execution:
@@ -838,12 +833,14 @@ export default class BackupNg {
try {
const handler = await app.getRemoteHandler(remoteId)
const entries = (await handler.list(BACKUP_DIR).catch(error => {
if (error == null || error.code !== 'ENOENT') {
throw error
}
return []
})).filter(name => name !== 'index.json')
const entries = (
await handler.list(BACKUP_DIR).catch(error => {
if (error == null || error.code !== 'ENOENT') {
throw error
}
return []
})
).filter(name => name !== 'index.json')
await Promise.all(
entries.map(async vmUuid => {
@@ -881,11 +878,15 @@ export default class BackupNg {
return backupsByVm
}
async listVmBackupsNg(remotes: string[]) {
async listVmBackupsNg(remotes: string[], _forceRefresh = false) {
const backupsByVmByRemote: $Dict<$Dict<Metadata[]>> = {}
await Promise.all(
remotes.map(async remoteId => {
if (_forceRefresh) {
this._listVmBackupsOnRemote(REMOVE_CACHE_ENTRY, remoteId)
}
backupsByVmByRemote[remoteId] = await this._listVmBackupsOnRemote(
remoteId
)
@@ -1246,54 +1247,111 @@ export default class BackupNg {
const jsonMetadata = JSON.stringify(metadata)
await waitAll(
[
...remotes.map(
wrapTaskFn(
({ id }) => ({
data: { id, type: 'remote' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, { handler, id: remoteId }) => {
const fork = forkExport()
await waitAll([
...remotes.map(
wrapTaskFn(
({ id }) => ({
data: { id, type: 'remote' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, { handler, id: remoteId }) => {
const fork = forkExport()
// remove incomplete XVAs
await asyncMap(
handler.list(vmDir, {
filter: filename =>
isHiddenFile(filename) && isXva(filename),
prependDir: true,
}),
file => handler.unlink(file)
)::ignoreErrors()
// remove incomplete XVAs
await asyncMap(
handler.list(vmDir, {
filter: filename => isHiddenFile(filename) && isXva(filename),
prependDir: true,
}),
file => handler.unlink(file)
)::ignoreErrors()
const oldBackups: MetadataFull[] = (getOldEntries(
exportRetention - 1,
await this._listVmBackups(
handler,
vm,
_ => _.mode === 'full' && _.scheduleId === scheduleId
)
): any)
const oldBackups: MetadataFull[] = (getOldEntries(
exportRetention - 1,
await this._listVmBackups(
handler,
vm,
_ => _.mode === 'full' && _.scheduleId === scheduleId
)
): any)
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'clean',
parentId: taskId,
},
this._deleteFullVmBackups(handler, oldBackups)
)
const deleteFirst = getSetting(settings, 'deleteFirst', [
remoteId,
])
if (deleteFirst) {
await deleteOldBackups()
}
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'clean',
parentId: taskId,
},
this._deleteFullVmBackups(handler, oldBackups)
)
const deleteFirst = getSetting(settings, 'deleteFirst', [
remoteId,
])
if (deleteFirst) {
await deleteOldBackups()
}
await wrapTask(
{
logger,
message: 'transfer',
parentId: taskId,
result: () => ({ size: xva.size }),
},
writeStream(fork, handler, dataFilename)
)
await handler.outputFile(metadataFilename, jsonMetadata)
if (!deleteFirst) {
await deleteOldBackups()
}
}
)
),
...srs.map(
wrapTaskFn(
({ $id: id }) => ({
data: { id, type: 'SR' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, sr) => {
const fork = forkExport()
const { uuid: srUuid, xapi } = sr
// delete previous interrupted copies
ignoreErrors.call(
this._deleteVms(
xapi,
listReplicatedVms(xapi, scheduleId, undefined, vmUuid)
)
)
const oldVms = getOldEntries(
copyRetention - 1,
listReplicatedVms(xapi, scheduleId, srUuid, vmUuid)
)
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'clean',
parentId: taskId,
},
this._deleteVms(xapi, oldVms)
)
const deleteFirst = getSetting(settings, 'deleteFirst', [srUuid])
if (deleteFirst) {
await deleteOldBackups()
}
const vm = await xapi.barrier(
await wrapTask(
{
logger,
@@ -1301,104 +1359,41 @@ export default class BackupNg {
parentId: taskId,
result: () => ({ size: xva.size }),
},
writeStream(fork, handler, dataFilename)
)
await handler.outputFile(metadataFilename, jsonMetadata)
if (!deleteFirst) {
await deleteOldBackups()
}
}
)
),
...srs.map(
wrapTaskFn(
({ $id: id }) => ({
data: { id, type: 'SR' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, sr) => {
const fork = forkExport()
const { uuid: srUuid, xapi } = sr
// delete previous interrupted copies
ignoreErrors.call(
this._deleteVms(
xapi,
listReplicatedVms(xapi, scheduleId, undefined, vmUuid)
)
)
const oldVms = getOldEntries(
copyRetention - 1,
listReplicatedVms(xapi, scheduleId, srUuid, vmUuid)
)
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'clean',
parentId: taskId,
},
this._deleteVms(xapi, oldVms)
)
const deleteFirst = getSetting(settings, 'deleteFirst', [
srUuid,
])
if (deleteFirst) {
await deleteOldBackups()
}
const vm = await xapi.barrier(
await wrapTask(
{
logger,
message: 'transfer',
parentId: taskId,
result: () => ({ size: xva.size }),
},
xapi._importVm($cancelToken, fork, sr, vm =>
vm.set_name_label(
`${metadata.vm.name_label} - ${
job.name
} - (${safeDateFormat(metadata.timestamp)})`
)
xapi._importVm($cancelToken, fork, sr, vm =>
vm.set_name_label(
`${metadata.vm.name_label} - ${
job.name
} - (${safeDateFormat(metadata.timestamp)})`
)
)
)
)
await Promise.all([
vm.add_tags('Disaster Recovery'),
disableVmHighAvailability(xapi, vm),
vm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
),
!isOfflineBackup
? vm.update_other_config('xo:backup:sr', srUuid)
: vm.update_other_config({
'xo:backup:datetime': exportDateTime,
'xo:backup:job': jobId,
'xo:backup:schedule': scheduleId,
'xo:backup:sr': srUuid,
'xo:backup:vm': exported.uuid,
}),
])
await Promise.all([
vm.add_tags('Disaster Recovery'),
disableVmHighAvailability(xapi, vm),
vm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
),
!isOfflineBackup
? vm.update_other_config('xo:backup:sr', srUuid)
: vm.update_other_config({
'xo:backup:datetime': exportDateTime,
'xo:backup:job': jobId,
'xo:backup:schedule': scheduleId,
'xo:backup:sr': srUuid,
'xo:backup:vm': exported.uuid,
}),
])
if (!deleteFirst) {
await deleteOldBackups()
}
if (!deleteFirst) {
await deleteOldBackups()
}
)
),
],
noop // errors are handled in logs
)
}
)
),
]).catch(noop) // errors are handled in logs
} else if (mode === 'delta') {
let deltaChainLength = 0
let fullVdisRequired
@@ -1579,189 +1574,201 @@ export default class BackupNg {
deltaExport.vdis,
vdi => vdi.other_config['xo:base_delta'] === undefined
)
await waitAll(
[
...remotes.map(
wrapTaskFn(
({ id }) => ({
data: { id, isFull, type: 'remote' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, { handler, id: remoteId }) => {
const fork = forkExport()
await waitAll([
...remotes.map(
wrapTaskFn(
({ id }) => ({
data: { id, isFull, type: 'remote' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, { handler, id: remoteId }) => {
const fork = forkExport()
const oldBackups: MetadataDelta[] = (getOldEntries(
exportRetention - 1,
await this._listVmBackups(
handler,
vm,
_ => _.mode === 'delta' && _.scheduleId === scheduleId
)
): any)
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'merge',
parentId: taskId,
result: size => ({ size }),
},
this._deleteDeltaVmBackups(handler, oldBackups)
)
const oldBackups: MetadataDelta[] = (getOldEntries(
exportRetention - 1,
await this._listVmBackups(
handler,
vm,
_ => _.mode === 'delta' && _.scheduleId === scheduleId
)
): any)
const deleteFirst =
exportRetention > 1 &&
getSetting(settings, 'deleteFirst', [remoteId])
if (deleteFirst) {
await deleteOldBackups()
}
// FIXME: implement optimized multiple VHDs merging with synthetic
// delta
//
// For the time being, limit the number of deleted backups by run
// because it can take a very long time and can lead to
// interrupted backup with broken VHD chain.
//
// The old backups will be eventually merged in future runs of the
// job.
const { maxMergedDeltasPerRun } = this._backupOptions
if (oldBackups.length > maxMergedDeltasPerRun) {
oldBackups.length = maxMergedDeltasPerRun
}
await wrapTask(
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'transfer',
message: 'merge',
parentId: taskId,
result: size => ({ size }),
},
asyncMap(
fork.vdis,
defer(async ($defer, vdi, id) => {
const path = `${vmDir}/${metadata.vhds[id]}`
this._deleteDeltaVmBackups(handler, oldBackups)
)
const isDelta =
vdi.other_config['xo:base_delta'] !== undefined
let parentPath
if (isDelta) {
const vdiDir = dirname(path)
parentPath = (await handler.list(vdiDir, {
const deleteFirst =
exportRetention > 1 &&
getSetting(settings, 'deleteFirst', [remoteId])
if (deleteFirst) {
await deleteOldBackups()
}
await wrapTask(
{
logger,
message: 'transfer',
parentId: taskId,
result: size => ({ size }),
},
asyncMap(
fork.vdis,
defer(async ($defer, vdi, id) => {
const path = `${vmDir}/${metadata.vhds[id]}`
const isDelta =
vdi.other_config['xo:base_delta'] !== undefined
let parentPath
if (isDelta) {
const vdiDir = dirname(path)
parentPath = (
await handler.list(vdiDir, {
filter: filename =>
!isHiddenFile(filename) && isVhd(filename),
prependDir: true,
}))
.sort()
.pop()
.slice(1) // remove leading slash
// ensure parent exists and is a valid VHD
await new Vhd(handler, parentPath).readHeaderAndFooter()
}
// FIXME: should only be renamed after the metadata file has been written
await writeStream(
fork.streams[`${id}.vhd`](),
handler,
path,
{
// no checksum for VHDs, because they will be invalidated by
// merges and chainings
checksum: false,
}
})
)
$defer.onFailure.call(handler, 'unlink', path)
.sort()
.pop()
.slice(1) // remove leading slash
if (isDelta) {
await chainVhd(handler, parentPath, handler, path)
// ensure parent exists and is a valid VHD
await new Vhd(handler, parentPath).readHeaderAndFooter()
}
// FIXME: should only be renamed after the metadata file has been written
await writeStream(
fork.streams[`${id}.vhd`](),
handler,
path,
{
// no checksum for VHDs, because they will be invalidated by
// merges and chainings
checksum: false,
}
)
$defer.onFailure.call(handler, 'unlink', path)
// set the correct UUID in the VHD
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
vhd.footer.uuid = parseUuid(vdi.uuid)
await vhd.readBlockAllocationTable() // required by writeFooter()
await vhd.writeFooter()
if (isDelta) {
await chainVhd(handler, parentPath, handler, path)
}
return handler.getSize(path)
})
).then(sum)
)
await handler.outputFile(metadataFilename, jsonMetadata)
// set the correct UUID in the VHD
const vhd = new Vhd(handler, path)
await vhd.readHeaderAndFooter()
vhd.footer.uuid = parseUuid(vdi.uuid)
await vhd.readBlockAllocationTable() // required by writeFooter()
await vhd.writeFooter()
if (!deleteFirst) {
await deleteOldBackups()
}
return handler.getSize(path)
})
).then(sum)
)
await handler.outputFile(metadataFilename, jsonMetadata)
if (!deleteFirst) {
await deleteOldBackups()
}
)
),
...srs.map(
wrapTaskFn(
({ $id: id }) => ({
data: { id, isFull, type: 'SR' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, sr) => {
const fork = forkExport()
}
)
),
...srs.map(
wrapTaskFn(
({ $id: id }) => ({
data: { id, isFull, type: 'SR' },
logger,
message: 'export',
parentId: taskId,
}),
async (taskId, sr) => {
const fork = forkExport()
const { uuid: srUuid, xapi } = sr
const { uuid: srUuid, xapi } = sr
// delete previous interrupted copies
ignoreErrors.call(
this._deleteVms(
xapi,
listReplicatedVms(xapi, scheduleId, undefined, vmUuid)
)
// delete previous interrupted copies
ignoreErrors.call(
this._deleteVms(
xapi,
listReplicatedVms(xapi, scheduleId, undefined, vmUuid)
)
)
const oldVms = getOldEntries(
copyRetention - 1,
listReplicatedVms(xapi, scheduleId, srUuid, vmUuid)
)
const oldVms = getOldEntries(
copyRetention - 1,
listReplicatedVms(xapi, scheduleId, srUuid, vmUuid)
)
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'clean',
parentId: taskId,
},
this._deleteVms(xapi, oldVms)
)
const deleteFirst = getSetting(settings, 'deleteFirst', [
srUuid,
])
if (deleteFirst) {
await deleteOldBackups()
}
const { vm } = await wrapTask(
const deleteOldBackups = () =>
wrapTask(
{
logger,
message: 'transfer',
message: 'clean',
parentId: taskId,
result: ({ transferSize }) => ({ size: transferSize }),
},
xapi.importDeltaVm(fork, {
disableStartAfterImport: false, // we'll take care of that
name_label: `${metadata.vm.name_label} - ${
job.name
} - (${safeDateFormat(metadata.timestamp)})`,
srId: sr.$id,
})
this._deleteVms(xapi, oldVms)
)
await Promise.all([
vm.add_tags('Continuous Replication'),
disableVmHighAvailability(xapi, vm),
vm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
),
vm.update_other_config('xo:backup:sr', srUuid),
])
if (!deleteFirst) {
await deleteOldBackups()
}
const deleteFirst = getSetting(settings, 'deleteFirst', [srUuid])
if (deleteFirst) {
await deleteOldBackups()
}
)
),
],
noop // errors are handled in logs
)
const { vm } = await wrapTask(
{
logger,
message: 'transfer',
parentId: taskId,
result: ({ transferSize }) => ({ size: transferSize }),
},
xapi.importDeltaVm(fork, {
disableStartAfterImport: false, // we'll take care of that
name_label: `${metadata.vm.name_label} - ${
job.name
} - (${safeDateFormat(metadata.timestamp)})`,
srId: sr.$id,
})
)
await Promise.all([
vm.add_tags('Continuous Replication'),
disableVmHighAvailability(xapi, vm),
vm.update_blocked_operations(
'start',
'Start operation for this vm is blocked, clone it if you want to use it.'
),
vm.update_other_config('xo:backup:sr', srUuid),
])
if (!deleteFirst) {
await deleteOldBackups()
}
}
)
),
]).catch(noop) // errors are handled in logs
if (!isFull) {
ignoreErrors.call(


@@ -828,11 +828,13 @@ export default class {
delta.vm.name_label += ` (${shortDate(datetime * 1e3)})`
delta.vm.tags.push('restored from backup')
vm = (await xapi.importDeltaVm(delta, {
disableStartAfterImport: false,
srId: sr !== undefined && sr._xapiId,
mapVdisSrs,
})).vm
vm = (
await xapi.importDeltaVm(delta, {
disableStartAfterImport: false,
srId: sr !== undefined && sr._xapiId,
mapVdisSrs,
})
).vm
} else {
throw new Error(`Unsupported delta backup version: ${version}`)
}


@@ -4,19 +4,38 @@ import ProxyAgent from 'proxy-agent'
import { firstDefined } from '../utils'
export default class Http {
// whether XO has a proxy set from its own config/environment
get hasOwnHttpProxy() {
return this._hasOwnHttpProxy
}
get httpAgent() {
return this._agent
}
constructor(
_,
{ httpProxy = firstDefined(process.env.http_proxy, process.env.HTTP_PROXY) }
) {
this._proxy = httpProxy && new ProxyAgent(httpProxy)
this._hasOwnHttpProxy = httpProxy != null
this.setHttpProxy(httpProxy)
}
httpRequest(...args) {
return hrp(
{
agent: this._proxy,
agent: this._agent,
},
...args
)
}
setHttpProxy(proxy) {
if (proxy == null) {
this._agent = undefined
} else {
this._agent = new ProxyAgent(proxy)
}
}
}


@@ -37,7 +37,10 @@ describe('resolveParamsVector', function() {
],
'cross product with `set` and `map`': [
// Expected result.
[{ remote: 'local', id: 'vm:2' }, { remote: 'smb', id: 'vm:2' }],
[
{ remote: 'local', id: 'vm:2' },
{ remote: 'smb', id: 'vm:2' },
],
// Entry.
{


@@ -1,9 +1,10 @@
// @flow
import asyncMap from '@xen-orchestra/async-map'
import createLogger from '@xen-orchestra/log'
import { fromEvent, ignoreErrors } from 'promise-toolbox'
import { fromEvent, ignoreErrors, timeout } from 'promise-toolbox'
import { debounceWithKey } from '../_pDebounceWithKey'
import { debounceWithKey, REMOVE_CACHE_ENTRY } from '../_pDebounceWithKey'
import { waitAll } from '../_waitAll'
import parseDuration from '../_parseDuration'
import { type Xapi } from '../xapi'
import {
@@ -147,6 +148,7 @@ export default class metadataBackup {
this._app = app
this._logger = undefined
this._runningMetadataRestores = new Set()
this._poolMetadataTimeout = parseDuration(backup.poolMetadataTimeout)
const debounceDelay = parseDuration(backup.listingDebounce)
this._listXoMetadataBackups = debounceWithKey(
@@ -154,7 +156,7 @@ export default class metadataBackup {
debounceDelay,
remoteId => remoteId
)
this.__listPoolMetadataBackups = debounceWithKey(
this._listPoolMetadataBackups = debounceWithKey(
this._listPoolMetadataBackups,
debounceDelay,
remoteId => remoteId
@@ -249,6 +251,8 @@ export default class metadataBackup {
taskId: subTaskId,
}
)
this._listXoMetadataBackups(REMOVE_CACHE_ENTRY, remoteId)
} catch (error) {
await handler.rmtree(dir).catch(error => {
logger.warning(`unable to delete the folder ${dir}`, {
@@ -347,18 +351,21 @@ export default class metadataBackup {
let outputStream
try {
await Promise.all([
await waitAll([
(async () => {
outputStream = await handler.createOutputStream(fileName)
// 'readable-stream/pipeline' does not call the callback when an error throws
// from the readable stream
stream.pipe(outputStream)
return fromEvent(stream, 'end').catch(error => {
if (error.message !== 'aborted') {
throw error
}
})
return timeout.call(
fromEvent(stream, 'end').catch(error => {
if (error.message !== 'aborted') {
throw error
}
}),
this._poolMetadataTimeout
)
})(),
handler.outputFile(metaDataFileName, metadata),
])
@@ -391,6 +398,8 @@ export default class metadataBackup {
taskId: subTaskId,
}
)
this._listPoolMetadataBackups(REMOVE_CACHE_ENTRY, remoteId)
} catch (error) {
if (outputStream !== undefined) {
outputStream.destroy()
@@ -493,12 +502,14 @@ export default class metadataBackup {
handlers[id] = handler
},
error => {
logger.warning(`unable to get the handler for the remote (${id})`, {
event: 'task.warning',
taskId: runJobId,
logInstantFailureTask(logger, {
data: {
error,
type: 'remote',
id,
},
error,
message: `unable to get the handler for the remote (${id})`,
parentId: runJobId,
})
}
)
@@ -805,6 +816,12 @@ export default class metadataBackup {
const [remoteId, ...path] = id.split('/')
const handler = await app.getRemoteHandler(remoteId)
return handler.rmtree(path.join('/'))
await handler.rmtree(path.join('/'))
if (path[0] === 'xo-config-backups') {
this._listXoMetadataBackups(REMOVE_CACHE_ENTRY, remoteId)
} else {
this._listPoolMetadataBackups(REMOVE_CACHE_ENTRY, remoteId)
}
}
}


@@ -60,7 +60,7 @@ export default class {
const plugin = (this._plugins[id] = {
configurationPresets,
configurationSchema,
configured: !configurationSchema,
configured: configurationSchema === undefined,
description,
id,
instance,
@@ -84,12 +84,17 @@ export default class {
})
}
if (configurationSchema !== undefined) {
if (configuration === undefined) {
return
if (!plugin.configured) {
const tryEmptyConfig = configuration === undefined
try {
await this._configurePlugin(plugin, tryEmptyConfig ? {} : configuration)
} catch (error) {
// don't throw any error in case the empty config did not work
if (tryEmptyConfig) {
return
}
throw error
}
await this._configurePlugin(plugin, configuration)
}
if (autoload) {
@@ -203,8 +208,17 @@ export default class {
throw invalidParameters('plugin not configured')
}
await plugin.instance.load()
plugin.loaded = true
if (plugin.loading) {
throw invalidParameters('plugin is loading')
}
plugin.loading = true
try {
await plugin.instance.load()
plugin.loaded = true
} finally {
plugin.loading = false
}
}
async unloadPlugin(id) {


@@ -104,14 +104,16 @@ export default class Scheduling {
timezone,
userId,
}: $Diff<Schedule, {| id: string |}>) {
const schedule = (await this._db.add({
cron,
enabled,
jobId,
name,
timezone,
userId,
})).properties
const schedule = (
await this._db.add({
cron,
enabled,
jobId,
name,
timezone,
userId,
})
).properties
this._start(schedule)
return schedule
}


@@ -481,7 +481,7 @@ export default class {
const xapi = this._xapis[id]
return xapi === undefined
? 'disconnected'
: this._serverIdsByPool[(xapi.pool?.$id)] === id
: this._serverIdsByPool[xapi.pool?.$id] === id
? 'connected'
: 'connecting'
}


@@ -1,5 +1,4 @@
# Some notes about the conversion
---
## File formats
VMDK and VHD file format share the same high level principles:
@@ -16,15 +15,22 @@ chunks.
[The VHD specification](http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc)
## A primer on VMDK
A VMDK file might contain more than one logical disk (each one a sparse extent); an ASCII header describes those disks.
## StreamOptimized VMDK
Each sparse extent contains "grains", whose addresses are recorded in a "grain table". That table is itself indexed by a "directory".
The grain table is not sparse, so the directory is redundant (a historical artifact).
### StreamOptimized VMDK
The streamOptimized VMDK file format was designed so that from a file on
disk an application can generate a VMDK file going forwards without ever
needing to seek() backwards. The idea is to:
needing to seek() backwards. The difference is that header, tables, directory, grains etc. are delimited by "markers"
and the table and directory are pushed at the end of the file and the grains are compressed.
The generation algorithm is:
- generate a header without a
directory address in it (-1),
- dump all the compressed chunks in the stream while generating the
- dump all the compressed grains in the stream while generating the
directory in memory
- dump the directory marker
- dump the directory and record its position
@@ -66,7 +72,7 @@ When scouring the internet for test files, we stumbled on [a strange OVA file](h
The VMDK contained in the OVA (which is a tar of various files), had a
few oddities:
- it declared having markers in it's header, but there were no marker
- it declared having markers in its header, but there were no marker
for its primary and secondary directory, nor for its footer
- its directories are at the top, and declared in the header.
- it declared being streamOptimized
@@ -91,3 +97,42 @@ one application an other.
The VHD stream doesn't declare its length, because that breaks the
downstream computation in xo-server, but with a fixed VHD file format,
we can pre-compute the exact file length and advertise it.
# The conversion from VMDK to VHD
In the browser we extract the grain table, that is, a list of the file offsets of all the grains and a list of the
logical addresses of all the grains (both lists are in increasing offset order with matching indexes; we use two lists
for bandwidth reasons). Those lists are sent to the server, where the VHD Block Allocation Table will be generated.
With the default parameters, there are 32 VMDK grains in a VHD block, so a late scheduling pass is used to create the BAT.
Once the BAT is generated, the VHD file is created on the fly, block by block, and sent over the socket to the XAPI URL.
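As a rough illustration of how the grain list maps onto VHD blocks, here is a hypothetical sketch (the 64 KiB grain size and the grouping helper are assumptions for illustration, not the library's actual code; `grainLogicalAddressList` is the list mentioned above):

```javascript
// Sketch only: group grain logical addresses into the VHD blocks they
// belong to, assuming the default 64 KiB grain and 2 MiB VHD block
// (i.e. 32 grains per block, as stated above).
const GRAIN_SIZE = 64 * 1024
const GRAINS_PER_BLOCK = 32
const BLOCK_SIZE = GRAIN_SIZE * GRAINS_PER_BLOCK // 2 MiB

function vhdBlocksFromGrains(grainLogicalAddressList) {
  const blocks = new Set()
  for (const lba of grainLogicalAddressList) {
    // every grain whose logical address falls in the same 2 MiB window
    // contributes to the same VHD block entry in the BAT
    blocks.add(Math.floor(lba / BLOCK_SIZE))
  }
  return blocks
}
```

Each entry of the resulting set becomes one BAT entry; the position of each block within the stream is what the ordering algorithm in the next section decides.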
## How VHD Block order and position is decided from the VMDK table
Let's use letters to represent VHD blocks, numbers to represent their smaller VMDK constituents, and a ratio of 3 VMDK
fragments per VHD block.
`A` is the first VHD block, `A2` is the second VMDK fragment of the first VHD block.
In the VMDK file, fragments could be in any order and VHD blocks might not even be complete: `A3 E3 C1 C2 C3 A1 A2`.
We are trying to generate a VHD file while using the minimum intermediate memory possible.
When generating the VHD file Block Allocation Table we are setting in stone the order in which the blocks will be sent in
the VHD stream. Since we can't seek backwards in the VHD stream, we can't write a VHD block until all its VMDK fragments
have been read, so the last fragment encountered will dictate the order of the VHD block in the file.
Let's review our previous example: `A3 E3 C1 C2 C3 A1 A2`. The block `B` doesn't appear, and the block `A` has its
fragments interleaved with other blocks. So to decide the order of the blocks in the VHD file, we just go backwards, and
the last time we see a block we can write it; the result of this backward collection is `A C E`:
- `A2` seen, collect `A`
- `A1` seen, skip because we already have A
- `C3` seen, collect `C`
- `C2` seen, skip
- `C1` seen, skip
- `E3` seen, collect `E`
- `A3` seen, skip (but we can infer how long we'll need to keep this fragment in memory).
We can now reverse our collection to `E C A` and assign addresses to the blocks; we could not do this before, because
we didn't know that `B` didn't exist or that `E` would come first.
When reading the VMDK file, we know that when we encounter `A3` we will have to keep it in memory until we meet `A2`.
But when we meet `E3`, we know that we can write `E` to the VHD stream and release its memory.
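The backward collection above can be sketched as follows (illustrative only: block letters stand in for real block indexes, and this is not the library's actual implementation):

```javascript
// Walk the fragment list backwards: the LAST fragment of each block is
// the point where the whole block becomes writable, so it decides the
// block's position in the VHD stream.
function vhdBlockOrder(fragments) {
  const seen = new Set()
  const reversedOrder = []
  for (let i = fragments.length - 1; i >= 0; i--) {
    const block = fragments[i]
    if (!seen.has(block)) {
      seen.add(block)
      reversedOrder.push(block)
    }
  }
  // the collection was gathered back-to-front, so reverse it
  return reversedOrder.reverse()
}

// the example from the text: A3 E3 C1 C2 C3 A1 A2 (keeping only block letters)
vhdBlockOrder(['A', 'E', 'C', 'C', 'C', 'A', 'A']) // → ['E', 'C', 'A']
```

Note that `B` never appears in the result, matching the text: a block with no fragments gets no position in the stream.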


@@ -1,6 +1,6 @@
{
"name": "xo-vmdk-to-vhd",
"version": "0.1.7",
"version": "0.1.8",
"license": "AGPL-3.0",
"description": "JS lib streaming a vmdk file to a vhd",
"keywords": [
@@ -29,7 +29,7 @@
"pipette": "^0.9.3",
"promise-toolbox": "^0.14.0",
"tmp": "^0.1.0",
"vhd-lib": "^0.7.0"
"vhd-lib": "^0.7.2"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
@@ -38,7 +38,7 @@
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^6.0.3",
"event-to-promise": "^0.8.0",
"execa": "^2.0.2",
"execa": "^3.2.0",
"fs-extra": "^8.0.1",
"get-stream": "^5.1.0",
"index-modules": "^0.3.0",

Some files were not shown because too many files have changed in this diff.