Compare commits


1 commit

Author:  Julien Fontanet
SHA1:    282805966b
Message: WiP: feat(xen-api/getCachedRecord): getRecord + cache + events
         Fixes #5088
Date:    2022-03-01 13:37:03 +01:00
230 changed files with 2286 additions and 8122 deletions


@@ -1,7 +1,7 @@
'use strict'
module.exports = {
extends: ['plugin:eslint-comments/recommended', 'plugin:n/recommended', 'standard', 'standard-jsx', 'prettier'],
extends: ['plugin:eslint-comments/recommended', 'standard', 'standard-jsx', 'prettier'],
globals: {
__DEV__: true,
$Dict: true,
@@ -17,7 +17,6 @@ module.exports = {
{
files: ['cli.{,c,m}js', '*-cli.{,c,m}js', '**/*cli*/**/*.{,c,m}js'],
rules: {
'n/no-process-exit': 'off',
'no-console': 'off',
},
},
@@ -27,23 +26,6 @@ module.exports = {
sourceType: 'module',
},
},
{
files: ['*.spec.{,c,m}js'],
rules: {
'n/no-unsupported-features/node-builtins': [
'error',
{
version: '>=16',
},
],
'n/no-unsupported-features/es-syntax': [
'error',
{
version: '>=16',
},
],
},
},
],
parserOptions: {

.flowconfig (new file, 16 lines)

@@ -0,0 +1,16 @@
[ignore]
<PROJECT_ROOT>/node_modules/.*
[include]
[libs]
[lints]
[options]
esproposal.decorators=ignore
esproposal.optional_chaining=enable
include_warnings=true
module.use_strict=true
[strict]


@@ -6,18 +6,6 @@ labels: 'status: triaging :triangular_flag_on_post:, type: bug :bug:'
assignees: ''
---
**XOA or XO from the sources?**
If XOA:
- which release channel? (`stable` vs `latest`)
- please consider creating a support ticket in [your dedicated support area](https://xen-orchestra.com/#!/member/support)
If XO from the sources:
- Don't forget to [read this first](https://xen-orchestra.com/docs/community.html)
- As well as follow [this guide](https://xen-orchestra.com/docs/community.html#report-a-bug)
**Describe the bug**
A clear and concise description of what the bug is.
@@ -35,7 +23,7 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please provide the following information):**
**Desktop (please complete the following information):**
- Node: [e.g. 16.12.1]
- xo-server: [e.g. 5.82.3]


@@ -1,13 +0,0 @@
name: CI
on: [push]
jobs:
build:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: satackey/action-docker-layer-caching@v0.0.11
# Ignore the failure of a step and avoid terminating the job.
continue-on-error: true
- run: docker-compose -f docker/docker-compose.dev.yml build
- run: docker-compose -f docker/docker-compose.dev.yml up

.gitignore (8 lines changed)

@@ -1,4 +1,5 @@
/_book/
/coverage/
/node_modules/
/lerna-debug.log
/lerna-debug.log.*
@@ -10,6 +11,8 @@
/packages/*/dist/
/packages/*/node_modules/
/@xen-orchestra/proxy/src/app/mixins/index.mjs
/packages/vhd-cli/src/commands/index.js
/packages/xen-api/examples/node_modules/
@@ -33,6 +36,5 @@ yarn-error.log
yarn-error.log.*
.env
# code coverage
.nyc_output/
coverage/
# nyc test coverage
.nyc_output

.travis.yml (new file, 23 lines)

@@ -0,0 +1,23 @@
language: node_js
node_js:
- 14
# Use containers.
# http://docs.travis-ci.com/user/workers/container-based-infrastructure/
sudo: false
addons:
apt:
packages:
- qemu-utils
- blktap-utils
- vmdk-stream-converter
before_install:
- curl -o- -L https://yarnpkg.com/install.sh | bash
- export PATH="$HOME/.yarn/bin:$PATH"
cache:
yarn: true
script:
- yarn run travis-tests


@@ -13,19 +13,15 @@ class Foo {
}
```
### `decorateClass(class, map)`
### `decorateMethodsWith(class, map)`
Decorates a number of accessors and methods directly, without using the decorator syntax:
Decorates a number of methods directly, without using the decorator syntax:
```js
import { decorateClass } from '@vates/decorate-with'
import { decorateMethodsWith } from '@vates/decorate-with'
class Foo {
get bar() {
// body
}
set bar(value) {
bar() {
// body
}
@@ -34,28 +30,22 @@ class Foo {
}
}
decorateClass(Foo, {
// getter and/or setter
bar: {
// without arguments
get: lodash.memoize,
decorateMethodsWith(Foo, {
// without arguments
bar: lodash.curry,
// with arguments
set: [lodash.debounce, 150],
},
// method (with or without arguments)
baz: lodash.curry,
// with arguments
baz: [lodash.debounce, 150],
})
```
The decorated class is returned, so you can export it directly.
To apply multiple transforms to an accessor/method, you can either call `decorateClass` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
To apply multiple transforms to a method, you can either call `decorateMethodsWith` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
```js
decorateClass(Foo, {
baz: compose([
decorateMethodsWith(Foo, {
bar: compose([
[lodash.debounce, 150]
lodash.curry,
])
@@ -79,8 +69,4 @@ class Foo {
}
```
Because it's a normal function, it can also be used with `decorateClass`, with `compose` or even by itself.
### `decorateMethodsWith(class, map)`
> Deprecated alias for [`decorateClass(class, map)`](#decorateclassclass-map).
Because it's a normal function, it can also be used with `decorateMethodsWith`, with `compose` or even by itself.
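For context, the two APIs swapped throughout these hunks differ mainly in accessor support. A minimal sketch assembled from the README fragments above (not taken verbatim from the repository):
```js
import { decorateClass } from '@vates/decorate-with'
import lodash from 'lodash'

class Foo {
  get bar() {
    // body
  }
  set bar(value) {
    // body
  }
  baz() {
    // body
  }
}

// decorateClass accepts both accessor maps and plain method decorators;
// the older decorateMethodsWith accepts only the method form.
decorateClass(Foo, {
  bar: { get: lodash.memoize, set: [lodash.debounce, 150] },
  baz: lodash.curry,
})
```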


@@ -31,19 +31,15 @@ class Foo {
}
```
### `decorateClass(class, map)`
### `decorateMethodsWith(class, map)`
Decorates a number of accessors and methods directly, without using the decorator syntax:
Decorates a number of methods directly, without using the decorator syntax:
```js
import { decorateClass } from '@vates/decorate-with'
import { decorateMethodsWith } from '@vates/decorate-with'
class Foo {
get bar() {
// body
}
set bar(value) {
bar() {
// body
}
@@ -52,28 +48,22 @@ class Foo {
}
}
decorateClass(Foo, {
// getter and/or setter
bar: {
// without arguments
get: lodash.memoize,
decorateMethodsWith(Foo, {
// without arguments
bar: lodash.curry,
// with arguments
set: [lodash.debounce, 150],
},
// method (with or without arguments)
baz: lodash.curry,
// with arguments
baz: [lodash.debounce, 150],
})
```
The decorated class is returned, so you can export it directly.
To apply multiple transforms to an accessor/method, you can either call `decorateClass` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
To apply multiple transforms to a method, you can either call `decorateMethodsWith` multiple times or use [`@vates/compose`](https://www.npmjs.com/package/@vates/compose):
```js
decorateClass(Foo, {
baz: compose([
decorateMethodsWith(Foo, {
bar: compose([
[lodash.debounce, 150]
lodash.curry,
])
@@ -97,11 +87,7 @@ class Foo {
}
```
Because it's a normal function, it can also be used with `decorateClass`, with `compose` or even by itself.
### `decorateMethodsWith(class, map)`
> Deprecated alias for [`decorateClass(class, map)`](#decorateclassclass-map).
Because it's a normal function, it can also be used with `decorateMethodsWith`, with `compose` or even by itself.
## Contributions


@@ -9,27 +9,14 @@ exports.decorateWith = function decorateWith(fn, ...args) {
const { getOwnPropertyDescriptor, defineProperty } = Object
function applyDecorator(decorator, value) {
return typeof decorator === 'function' ? decorator(value) : decorator[0](value, ...decorator.slice(1))
}
exports.decorateClass = exports.decorateMethodsWith = function decorateClass(klass, map) {
exports.decorateMethodsWith = function decorateMethodsWith(klass, map) {
const { prototype } = klass
for (const name of Object.keys(map)) {
const decorator = map[name]
const descriptor = getOwnPropertyDescriptor(prototype, name)
if (typeof decorator === 'function' || Array.isArray(decorator)) {
descriptor.value = applyDecorator(decorator, descriptor.value)
} else {
const { get, set } = decorator
if (get !== undefined) {
descriptor.get = applyDecorator(get, descriptor.get)
}
if (set !== undefined) {
descriptor.set = applyDecorator(set, descriptor.set)
}
}
const { value } = descriptor
const decorator = map[name]
descriptor.value = typeof decorator === 'function' ? decorator(value) : decorator[0](value, ...decorator.slice(1))
defineProperty(prototype, name, descriptor)
}
return klass
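The `applyDecorator` helper above is the whole dispatch rule: a bare function decorates directly, while an array spreads extra arguments after the decorated value. A standalone illustration with assumed names but the same logic:
```js
const applyDecorator = (decorator, value) =>
  typeof decorator === 'function' ? decorator(value) : decorator[0](value, ...decorator.slice(1))

const double = fn => (...args) => 2 * fn(...args)
const scale = (fn, k) => (...args) => k * fn(...args)

applyDecorator(double, x => x + 1)(3) // => 8
applyDecorator([scale, 10], x => x + 1)(3) // => 40
```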


@@ -3,9 +3,7 @@
const assert = require('assert')
const { describe, it } = require('tap').mocha
const { decorateClass, decorateWith, decorateMethodsWith, perInstance } = require('./')
const identity = _ => _
const { decorateWith, decorateMethodsWith, perInstance } = require('./')
describe('decorateWith', () => {
it('works', () => {
@@ -33,14 +31,11 @@ describe('decorateWith', () => {
})
})
describe('decorateClass', () => {
describe('decorateMethodsWith', () => {
it('works', () => {
class C {
foo() {}
bar() {}
get baz() {}
// eslint-disable-next-line accessor-pairs
set qux(_) {}
}
const expectedArgs = [Math.random(), Math.random()]
@@ -50,74 +45,27 @@ describe('decorateClass', () => {
const newFoo = () => {}
const newBar = () => {}
const newGetBaz = () => {}
const newSetQux = _ => {}
decorateClass(C, {
foo(fn) {
decorateMethodsWith(C, {
foo(method) {
assert.strictEqual(arguments.length, 1)
assert.strictEqual(fn, P.foo)
assert.strictEqual(method, P.foo)
return newFoo
},
bar: [
function (fn, ...args) {
assert.strictEqual(fn, P.bar)
function (method, ...args) {
assert.strictEqual(method, P.bar)
assert.deepStrictEqual(args, expectedArgs)
return newBar
},
...expectedArgs,
],
baz: {
get(fn) {
assert.strictEqual(arguments.length, 1)
assert.strictEqual(fn, descriptors.baz.get)
return newGetBaz
},
},
qux: {
set: [
function (fn, ...args) {
assert.strictEqual(fn, descriptors.qux.set)
assert.deepStrictEqual(args, expectedArgs)
return newSetQux
},
...expectedArgs,
],
},
})
const newDescriptors = Object.getOwnPropertyDescriptors(P)
assert.deepStrictEqual(newDescriptors.foo, { ...descriptors.foo, value: newFoo })
assert.deepStrictEqual(newDescriptors.bar, { ...descriptors.bar, value: newBar })
assert.deepStrictEqual(newDescriptors.baz, { ...descriptors.baz, get: newGetBaz })
assert.deepStrictEqual(newDescriptors.qux, { ...descriptors.qux, set: newSetQux })
})
it('throws if using an accessor decorator for a method', function () {
assert.throws(() =>
decorateClass(
class {
foo() {}
},
{ foo: { get: identity, set: identity } }
)
)
})
it('throws if using a method decorator for an accessor', function () {
assert.throws(() =>
decorateClass(
class {
get foo() {}
},
{ foo: identity }
)
)
})
})
it('decorateMethodsWith is an alias of decorateClass', function () {
assert.strictEqual(decorateMethodsWith, decorateClass)
})
describe('perInstance', () => {


@@ -20,7 +20,7 @@
"url": "https://vates.fr"
},
"license": "ISC",
"version": "2.0.0",
"version": "1.0.0",
"engines": {
"node": ">=8.10"
},
@@ -29,6 +29,6 @@
"test": "tap"
},
"devDependencies": {
"tap": "^16.0.1"
"tap": "^15.1.6"
}
}


@@ -35,6 +35,6 @@
"test": "tap"
},
"devDependencies": {
"tap": "^16.0.1"
"tap": "^15.1.6"
}
}


@@ -0,0 +1,3 @@
'use strict'
module.exports = require('../../@xen-orchestra/babel-config')(require('./package.json'))


@@ -0,0 +1 @@
../../scripts/babel-eslintrc.js


@@ -9,14 +9,28 @@
},
"version": "0.2.0",
"engines": {
"node": ">=14"
"node": ">=10"
},
"main": "dist/",
"scripts": {
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"postversion": "npm publish --access public",
"test": "tap --lines 67 --functions 92 --branches 52 --statements 67"
"prebuild": "rimraf dist/",
"predev": "yarn run prebuild",
"prepublishOnly": "yarn run build"
},
"devDependencies": {
"@babel/cli": "^7.7.4",
"@babel/core": "^7.7.4",
"@babel/plugin-proposal-decorators": "^7.8.0",
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.8.0",
"@babel/preset-env": "^7.7.4",
"cross-env": "^7.0.2",
"rimraf": "^3.0.0"
},
"dependencies": {
"@vates/decorate-with": "^2.0.0",
"@vates/decorate-with": "^1.0.0",
"@xen-orchestra/log": "^0.3.0",
"golike-defer": "^0.5.1",
"object-hash": "^2.0.1"
@@ -26,8 +40,5 @@
"author": {
"name": "Vates SAS",
"url": "https://vates.fr"
},
"devDependencies": {
"tap": "^16.0.1"
}
}


@@ -1,14 +1,12 @@
'use strict'
const assert = require('assert')
const hash = require('object-hash')
const { createLogger } = require('@xen-orchestra/log')
const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
import assert from 'assert'
import hash from 'object-hash'
import { createLogger } from '@xen-orchestra/log'
import { decorateWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'
const log = createLogger('xo:audit-core')
exports.Storage = class Storage {
export class Storage {
constructor() {
this._lock = Promise.resolve()
}
@@ -31,7 +29,7 @@ const ID_TO_ALGORITHM = {
5: 'sha256',
}
class AlteredRecordError extends Error {
export class AlteredRecordError extends Error {
constructor(id, nValid, record) {
super('altered record')
@@ -40,9 +38,8 @@ class AlteredRecordError extends Error {
this.record = record
}
}
exports.AlteredRecordError = AlteredRecordError
class MissingRecordError extends Error {
export class MissingRecordError extends Error {
constructor(id, nValid) {
super('missing record')
@@ -50,10 +47,8 @@ class MissingRecordError extends Error {
this.nValid = nValid
}
}
exports.MissingRecordError = MissingRecordError
const NULL_ID = 'nullId'
exports.NULL_ID = NULL_ID
export const NULL_ID = 'nullId'
const HASH_ALGORITHM_ID = '5'
const createHash = (data, algorithmId = HASH_ALGORITHM_ID) =>
@@ -62,12 +57,13 @@ const createHash = (data, algorithmId = HASH_ALGORITHM_ID) =>
excludeKeys: key => key === 'id',
})}`
class AuditCore {
export class AuditCore {
constructor(storage) {
assert.notStrictEqual(storage, undefined)
this._storage = storage
}
@decorateWith(defer)
async add($defer, subject, event, data) {
const time = Date.now()
$defer(await this._storage.acquireLock())
@@ -152,6 +148,7 @@ class AuditCore {
}
}
@decorateWith(defer)
async deleteRangeAndRewrite($defer, newest, oldest) {
assert.notStrictEqual(newest, undefined)
assert.notStrictEqual(oldest, undefined)
@@ -192,9 +189,3 @@ class AuditCore {
}
}
}
exports.AuditCore = AuditCore
decorateClass(AuditCore, {
add: defer,
deleteRangeAndRewrite: defer,
})
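Both sides of this hunk decorate the same methods with `defer`; only the mechanism differs, since decorator syntax needs a Babel transform while `decorateClass` is plain JavaScript. A sketch of the equivalence, using names from the diff:
```js
import { decorateClass, decorateWith } from '@vates/decorate-with'
import { defer } from 'golike-defer'

// ESM side: decorator syntax
class A {
  @decorateWith(defer)
  async add($defer, subject, event, data) {
    // body
  }
}

// CommonJS side: plain function call, no syntax extension needed
class B {
  async add($defer, subject, event, data) {
    // body
  }
}
decorateClass(B, { add: defer })
```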


@@ -1,9 +1,6 @@
'use strict'
/* eslint-env jest */
const assert = require('assert/strict')
const { afterEach, describe, it } = require('tap').mocha
const { AlteredRecordError, AuditCore, MissingRecordError, NULL_ID, Storage } = require('.')
import { AlteredRecordError, AuditCore, MissingRecordError, NULL_ID, Storage } from '.'
const asyncIteratorToArray = async asyncIterator => {
const array = []
@@ -75,7 +72,7 @@ const auditCore = new AuditCore(db)
const storeAuditRecords = async () => {
await Promise.all(DATA.map(data => auditCore.add(...data)))
const records = await asyncIteratorToArray(auditCore.getFrom())
assert.equal(records.length, DATA.length)
expect(records.length).toBe(DATA.length)
return records
}
@@ -86,11 +83,10 @@ describe('auditCore', () => {
const [newestRecord, deletedRecord] = await storeAuditRecords()
const nValidRecords = await auditCore.checkIntegrity(NULL_ID, newestRecord.id)
assert.equal(nValidRecords, DATA.length)
expect(nValidRecords).toBe(DATA.length)
await db.del(deletedRecord.id)
await assert.rejects(
auditCore.checkIntegrity(NULL_ID, newestRecord.id),
await expect(auditCore.checkIntegrity(NULL_ID, newestRecord.id)).rejects.toEqual(
new MissingRecordError(deletedRecord.id, 1)
)
})
@@ -101,8 +97,7 @@ describe('auditCore', () => {
alteredRecord.event = ''
await db.put(alteredRecord)
await assert.rejects(
auditCore.checkIntegrity(NULL_ID, newestRecord.id),
await expect(auditCore.checkIntegrity(NULL_ID, newestRecord.id)).rejects.toEqual(
new AlteredRecordError(alteredRecord.id, 1, alteredRecord)
)
})
@@ -112,8 +107,8 @@ describe('auditCore', () => {
await auditCore.deleteFrom(secondRecord.id)
assert.equal(await db.get(firstRecord.id), undefined)
assert.equal(await db.get(secondRecord.id), undefined)
expect(await db.get(firstRecord.id)).toBe(undefined)
expect(await db.get(secondRecord.id)).toBe(undefined)
await auditCore.checkIntegrity(secondRecord.id, thirdRecord.id)
})


@@ -10,7 +10,7 @@
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"engines": {
"node": ">=8.3"
"node": ">=6"
},
"license": "AGPL-3.0-or-later",
"author": {


@@ -1,3 +1,5 @@
#!/usr/bin/env node
'use strict'
// -----------------------------------------------------------------------------


@@ -7,8 +7,8 @@
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"dependencies": {
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.21.0",
"@xen-orchestra/fs": "^1.0.0",
"@xen-orchestra/backups": "^0.20.0",
"@xen-orchestra/fs": "^0.20.0",
"filenamify": "^4.1.0",
"getopts": "^2.2.5",
"lodash": "^4.17.15",


@@ -8,7 +8,7 @@ const { Task } = require('./Task.js')
const { watchStreamSize } = require('./_watchStreamSize.js')
exports.ImportVmBackup = class ImportVmBackup {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs = {} } = {} }) {
constructor({ adapter, metadata, srUuid, xapi, settings: { newMacAddresses, mapVdisSrs } = {} }) {
this._adapter = adapter
this._importDeltaVmSettings = { newMacAddresses, mapVdisSrs }
this._metadata = metadata
@@ -30,12 +30,7 @@ exports.ImportVmBackup = class ImportVmBackup {
} else {
assert.strictEqual(metadata.mode, 'delta')
const ignoredVdis = new Set(
Object.entries(this._importDeltaVmSettings.mapVdisSrs)
.filter(([_, srUuid]) => srUuid === null)
.map(([vdiUuid]) => vdiUuid)
)
backup = await adapter.readDeltaVmBackup(metadata, ignoredVdis)
backup = await adapter.readDeltaVmBackup(metadata)
Object.values(backup.streams).forEach(stream => watchStreamSize(stream, sizeContainer))
}
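The `mapVdisSrs` convention used here: mapping a VDI UUID to an SR UUID selects the restore target, and mapping it to `null` excludes that disk from the import. A small illustration (the UUIDs are made up):
```js
const mapVdisSrs = {
  'vdi-1111': 'sr-aaaa', // restore this disk to SR sr-aaaa
  'vdi-2222': null, // skip this disk entirely
}

// same derivation as in the code above
const ignoredVdis = new Set(
  Object.entries(mapVdisSrs)
    .filter(([, srUuid]) => srUuid === null)
    .map(([vdiUuid]) => vdiUuid)
)
// ignoredVdis → Set { 'vdi-2222' }
```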


@@ -6,7 +6,6 @@ const fromCallback = require('promise-toolbox/fromCallback')
const fromEvent = require('promise-toolbox/fromEvent')
const pDefer = require('promise-toolbox/defer')
const groupBy = require('lodash/groupBy.js')
const pickBy = require('lodash/pickBy.js')
const { dirname, join, normalize, resolve } = require('path')
const { createLogger } = require('@xen-orchestra/log')
const { Constants, createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } = require('vhd-lib')
@@ -577,15 +576,14 @@ class RemoteAdapter {
return stream
}
async readDeltaVmBackup(metadata, ignoredVdis) {
async readDeltaVmBackup(metadata) {
const handler = this._handler
const { vbds, vhds, vifs, vm } = metadata
const { vbds, vdis, vhds, vifs, vm } = metadata
const dir = dirname(metadata._filename)
const vdis = ignoredVdis === undefined ? metadata.vdis : pickBy(metadata.vdis, vdi => !ignoredVdis.has(vdi.uuid))
const streams = {}
await asyncMapSettled(Object.keys(vdis), async ref => {
streams[`${ref}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[ref]))
await asyncMapSettled(Object.keys(vdis), async id => {
streams[`${id}.vhd`] = await this._createSyntheticStream(handler, join(dir, vhds[id]))
})
return {


@@ -3,4 +3,4 @@
exports.isMetadataFile = filename => filename.endsWith('.json')
exports.isVhdFile = filename => filename.endsWith('.vhd')
exports.isXvaFile = filename => filename.endsWith('.xva')
exports.isXvaSumFile = filename => filename.endsWith('.xva.checksum')
exports.isXvaSumFile = filename => filename.endsWith('.xva.cheksum')


@@ -11,8 +11,6 @@ const { createVhdStreamWithLength } = require('vhd-lib')
const { defer } = require('golike-defer')
const { cancelableMap } = require('./_cancelableMap.js')
const { Task } = require('./Task.js')
const { pick } = require('lodash')
const TAG_BASE_DELTA = 'xo:base_delta'
exports.TAG_BASE_DELTA = TAG_BASE_DELTA
@@ -22,9 +20,6 @@ exports.TAG_COPY_SRC = TAG_COPY_SRC
const ensureArray = value => (value === undefined ? [] : Array.isArray(value) ? value : [value])
const resolveUuid = async (xapi, cache, uuid, type) => {
if (uuid == null) {
return uuid
}
let ref = cache.get(uuid)
if (ref === undefined) {
ref = await xapi.call(`${type}.get_by_uuid`, uuid)
@@ -200,25 +195,19 @@ exports.importDeltaVm = defer(async function importDeltaVm(
let suspendVdi
if (vmRecord.power_state === 'Suspended') {
const vdi = vdiRecords[vmRecord.suspend_VDI]
if (vdi === undefined) {
Task.warning('Suspend VDI not available for this suspended VM', {
vm: pick(vmRecord, 'uuid', 'name_label'),
suspendVdi = await xapi.getRecord(
'VDI',
await xapi.VDI_create({
...vdi,
other_config: {
...vdi.other_config,
[TAG_BASE_DELTA]: undefined,
[TAG_COPY_SRC]: vdi.uuid,
},
sr: mapVdisSrRefs[vdi.uuid] ?? sr.$ref,
})
} else {
suspendVdi = await xapi.getRecord(
'VDI',
await xapi.VDI_create({
...vdi,
other_config: {
...vdi.other_config,
[TAG_BASE_DELTA]: undefined,
[TAG_COPY_SRC]: vdi.uuid,
},
sr: mapVdisSrRefs[vdi.uuid] ?? sr.$ref,
})
)
$defer.onFailure(() => suspendVdi.$destroy())
}
)
$defer.onFailure(() => suspendVdi.$destroy())
}
// 1. Create the VM.


@@ -1,52 +0,0 @@
- [File structure on remote](#file-structure-on-remote)
- [Structure of `metadata.json`](#structure-of-metadatajson)
- [Task logs](#task-logs)
- [During backup](#during-backup)
## File structure on remote
```
<remote>
├─ xo-config-backups
│ └─ <schedule ID>
│ └─ <YYYYMMDD>T<HHmmss>
│ ├─ metadata.json
│ └─ data.json
└─ xo-pool-metadata-backups
└─ <schedule ID>
└─ <pool UUID>
└─ <YYYYMMDD>T<HHmmss>
├─ metadata.json
└─ data
```
## Structure of `metadata.json`
```ts
interface Metadata {
jobId: String
jobName: String
scheduleId: String
scheduleName: String
timestamp: number
pool?: Pool
poolMaster?: Host
}
```
## Task logs
### During backup
```
job.start(data: { reportWhen: ReportWhen })
├─ task.start(data: { type: 'pool', id: string, pool?: Pool, poolMaster?: Host })
│ ├─ task.start(data: { type: 'remote', id: string })
│ │ └─ task.end
│ └─ task.end
├─ task.start(data: { type: 'xo' })
│ ├─ task.start(data: { type: 'remote', id: string })
│ │ └─ task.end
│ └─ task.end
└─ job.end
```


@@ -1,97 +0,0 @@
- [File structure on remote](#file-structure-on-remote)
- [Attributes](#attributes)
- [Of created snapshots](#of-created-snapshots)
- [Of created VMs and snapshots](#of-created-vms-and-snapshots)
- [Of created VMs](#of-created-vms)
- [Task logs](#task-logs)
- [During backup](#during-backup)
- [During restoration](#during-restoration)
## File structure on remote
```
<remote>
└─ xo-vm-backups
├─ index.json // TODO
└─ <VM UUID>
├─ index.json // TODO
├─ vdis
│ └─ <job UUID>
│ └─ <VDI UUID>
│ ├─ index.json // TODO
│ └─ <YYYYMMDD>T<HHmmss>.vhd
├─ <YYYYMMDD>T<HHmmss>.json // backup metadata
├─ <YYYYMMDD>T<HHmmss>.xva
└─ <YYYYMMDD>T<HHmmss>.xva.checksum
```
## Attributes
### Of created snapshots
- `other_config`:
- `xo:backup:deltaChainLength` = n (number of delta copies/replicated since a full)
- `xo:backup:exported` = 'true' (added at the end of the backup)
### Of created VMs and snapshots
- `other_config`:
- `xo:backup:datetime`: format is UTC %Y%m%dT%H:%M:%SZ
- from snapshots: snapshot.snapshot_time
- with offline backup: formatDateTime(Date.now())
- `xo:backup:job` = job.id
- `xo:backup:schedule` = schedule.id
- `xo:backup:vm` = vm.uuid
### Of created VMs
- `name_label`: `${original name} - ${job name} - (${safeDateFormat(backup timestamp)})`
- tag:
- copy in delta mode: `Continuous Replication`
- copy in full mode: `Disaster Recovery`
- imported from backup: `restored from backup`
- `blocked_operations.start`: message
- for copies/replications only, added after complete transfer
- `other_config[xo:backup:sr]` = sr.uuid
## Task logs
### During backup
```
job.start(data: { mode: Mode, reportWhen: ReportWhen })
├─ task.info(message: 'vms', data: { vms: string[] })
├─ task.warning(message: string)
├─ task.start(data: { type: 'VM', id: string })
│ ├─ task.warning(message: string)
│ ├─ task.start(message: 'snapshot')
│ │ └─ task.end
│ ├─ task.start(message: 'export', data: { type: 'SR' | 'remote', id: string })
│ │ ├─ task.warning(message: string)
│ │ ├─ task.start(message: 'transfer')
│ │ │ ├─ task.warning(message: string)
│ │ │ └─ task.end(result: { size: number })
│ │ │
│ │ │ // in case of full backup, DR and CR
│ │ ├─ task.start(message: 'clean')
│ │ │ ├─ task.warning(message: string)
│ │ │ └─ task.end
│ │ │
│ │ │ // in case of delta backup
│ │ ├─ task.start(message: 'merge')
│ │ │ ├─ task.warning(message: string)
│ │ │ └─ task.end(result: { size: number })
│ │ │
│ │ └─ task.end
│ └─ task.end
└─ job.end
```
### During restoration
```
task.start(message: 'restore', data: { jobId: string, srId: string, time: number })
├─ task.start(message: 'transfer')
│ └─ task.end(result: { id: string, size: number })
└─ task.end
```


@@ -8,7 +8,7 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.21.0",
"version": "0.20.0",
"engines": {
"node": ">=14.6"
},
@@ -17,11 +17,11 @@
},
"dependencies": {
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/decorate-with": "^1.0.0",
"@vates/disposable": "^0.1.1",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^1.0.0",
"@xen-orchestra/fs": "^0.20.0",
"@xen-orchestra/log": "^0.3.0",
"@xen-orchestra/template": "^0.1.0",
"compare-versions": "^4.0.1",
@@ -39,12 +39,8 @@
"vhd-lib": "^3.1.0",
"yazl": "^2.5.1"
},
"devDependencies": {
"rimraf": "^3.0.2",
"tmp": "^0.2.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^0.10.0"
"@xen-orchestra/xapi": "^0.9.0"
},
"license": "AGPL-3.0-or-later",
"author": {


@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^1.1.0"
"xen-api": "^0.36.0"
},
"scripts": {
"postversion": "npm publish"


@@ -0,0 +1,3 @@
'use strict'
module.exports = require('../../@xen-orchestra/babel-config')(require('./package.json'))


@@ -0,0 +1 @@
../../scripts/babel-eslintrc.js


@@ -27,17 +27,31 @@
"url": "https://vates.fr"
},
"preferGlobal": false,
"main": "dist/",
"browserslist": [
">2%"
],
"engines": {
"node": ">=8.3"
"node": ">=6"
},
"dependencies": {
"lodash": "^4.17.4",
"moment-timezone": "^0.5.14"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
"@babel/core": "^7.0.0",
"@babel/preset-env": "^7.0.0",
"cross-env": "^7.0.2",
"rimraf": "^3.0.0"
},
"scripts": {
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
"clean": "rimraf dist/",
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"prebuild": "yarn run clean",
"predev": "yarn run clean",
"prepublishOnly": "yarn run build",
"postversion": "npm publish"
}
}


@@ -1,9 +1,7 @@
'use strict'
import moment from 'moment-timezone'
const moment = require('moment-timezone')
const next = require('./next')
const parse = require('./parse')
import next from './next'
import parse from './parse'
const MAX_DELAY = 2 ** 31 - 1
@@ -96,5 +94,4 @@ class Schedule {
}
}
const createSchedule = (...args) => new Schedule(...args)
exports.createSchedule = createSchedule
export const createSchedule = (...args) => new Schedule(...args)
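A short usage sketch for this `createSchedule` export. The pattern/timezone signature and the `next()` helper are assumptions inferred from the surrounding `parse`/`next` modules, not documented in these hunks:
```js
import { createSchedule } from '@xen-orchestra/cron'

// assumed signature: createSchedule(cronPattern, timezone)
const schedule = createSchedule('0 0 * * sun', 'America/New_York')

// assumed: returns the next n run dates after now
console.log(schedule.next(2))
```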


@@ -1,8 +1,6 @@
/* eslint-env jest */
'use strict'
const { createSchedule } = require('./')
import { createSchedule } from './'
jest.useFakeTimers()


@@ -1,7 +1,5 @@
'use strict'
const moment = require('moment-timezone')
const sortedIndex = require('lodash/sortedIndex')
import moment from 'moment-timezone'
import sortedIndex from 'lodash/sortedIndex'
const NEXT_MAPPING = {
month: { year: 1 },
@@ -33,7 +31,7 @@ const setFirstAvailable = (date, unit, values) => {
}
// returns the next run, after the passed date
module.exports = (schedule, fromDate) => {
export default (schedule, fromDate) => {
let date = moment(fromDate)
.set({
second: 0,


@@ -1,12 +1,10 @@
/* eslint-env jest */
'use strict'
import mapValues from 'lodash/mapValues'
import moment from 'moment-timezone'
const mapValues = require('lodash/mapValues')
const moment = require('moment-timezone')
const next = require('./next')
const parse = require('./parse')
import next from './next'
import parse from './parse'
const N = (pattern, fromDate = '2018-04-09T06:25') => {
const iso = next(parse(pattern), moment.utc(fromDate)).toISOString()


@@ -1,5 +1,3 @@
'use strict'
const compareNumbers = (a, b) => a - b
const createParser = ({ fields: [...fields], presets: { ...presets } }) => {
@@ -150,7 +148,7 @@ const createParser = ({ fields: [...fields], presets: { ...presets } }) => {
return parse
}
module.exports = createParser({
export default createParser({
fields: [
{
name: 'minute',


@@ -1,8 +1,6 @@
/* eslint-env jest */
'use strict'
const parse = require('./parse')
import parse from './parse'
describe('parse()', () => {
it('works', () => {


@@ -1,62 +0,0 @@
#!/usr/bin/env node
'use strict'
const Disposable = require('promise-toolbox/Disposable')
const { getBoundPropertyDescriptor } = require('bind-property-descriptor')
const { getSyncedHandler } = require('./')
const { getPrototypeOf, ownKeys } = Reflect
function getAllBoundDescriptors(object) {
const descriptors = { __proto__: null }
let current = object
do {
ownKeys(current).forEach(key => {
if (!(key in descriptors)) {
descriptors[key] = getBoundPropertyDescriptor(current, key, object)
}
})
} while ((current = getPrototypeOf(current)) !== null)
return descriptors
}
// https://gist.github.com/julien-f/18161f6032e808d6fa08782951ce3bfb
async function repl({ prompt, context } = {}) {
const repl = require('repl').start({
ignoreUndefined: true,
prompt,
})
if (context !== undefined) {
Object.defineProperties(repl.context, Object.getOwnPropertyDescriptors(context))
}
const { eval: evaluate } = repl
repl.eval = (cmd, context, filename, cb) => {
evaluate.call(repl, cmd, context, filename, (error, result) => {
if (error != null) {
return cb(error)
}
Promise.resolve(result).then(result => cb(undefined, result), cb)
})
}
return new Promise((resolve, reject) => {
repl.on('error', reject).on('exit', resolve)
})
}
async function* main([url]) {
if (url === undefined) {
throw new TypeError('missing arg <url>')
}
const handler = yield getSyncedHandler({ url })
await repl({
prompt: handler.type + '> ',
context: Object.create(null, getAllBoundDescriptors(handler)),
})
}
Disposable.wrap(main)(process.argv.slice(2)).catch(error => {
console.error('FATAL:', error)
process.exitCode = 1
})


@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "1.0.0",
"version": "0.20.0",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",
@@ -13,24 +13,18 @@
},
"preferGlobal": true,
"main": "dist/",
"bin": {
"xo-fs": "./cli.js"
},
"engines": {
"node": ">=14"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.54.0",
"@aws-sdk/lib-storage": "^3.54.0",
"@aws-sdk/node-http-handler": "^3.54.0",
"@marsaud/smb2": "^0.18.0",
"@sindresorhus/df": "^3.1.1",
"@vates/async-each": "^0.1.0",
"@sullux/aws-sdk": "^1.0.5",
"@vates/coalesce-calls": "^0.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/decorate-with": "^1.0.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.3.0",
"bind-property-descriptor": "^2.0.0",
"aws-sdk": "^2.686.0",
"decorator-synchronized": "^0.6.0",
"execa": "^5.0.0",
"fs-extra": "^10.0.0",
@@ -48,11 +42,12 @@
"@babel/core": "^7.0.0",
"@babel/plugin-proposal-decorators": "^7.1.6",
"@babel/plugin-proposal-function-bind": "^7.0.0",
"@babel/preset-env": "^7.8.0",
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.4.4",
"@babel/preset-env": "^7.0.0",
"async-iterator-to-stream": "^1.1.0",
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^7.0.2",
"dotenv": "^16.0.0",
"dotenv": "^15.0.0",
"rimraf": "^3.0.0"
},
"scripts": {


@@ -1,18 +0,0 @@
/**
* @param {Readable} inputStream
* @param {Buffer} destinationBuffer
* @returns {Promise<int>} Buffer length
* @private
*/
export default function copyStreamToBuffer(inputStream, destinationBuffer) {
return new Promise((resolve, reject) => {
let index = 0
inputStream.on('data', chunk => {
chunk.copy(destinationBuffer, index)
index += chunk.length
})
inputStream.on('end', () => resolve(index))
inputStream.on('error', err => reject(err))
})
}


@@ -1,21 +0,0 @@
/* eslint-env jest */
import { Readable } from 'readable-stream'
import copyStreamToBuffer from './_copyStreamToBuffer.js'
describe('copyStreamToBuffer', () => {
it('should copy the stream to the buffer', async () => {
const stream = new Readable({
read() {
this.push('hello')
this.push(null)
},
})
const buffer = Buffer.alloc(3)
await copyStreamToBuffer(stream, buffer)
expect(buffer.toString()).toBe('hel')
})
})


@@ -1,13 +0,0 @@
/**
* @param {Readable} stream
* @returns {Promise<Buffer>}
* @private
*/
export default function createBufferFromStream(stream) {
return new Promise((resolve, reject) => {
const chunks = []
stream.on('data', chunk => chunks.push(chunk))
stream.on('end', () => resolve(Buffer.concat(chunks)))
stream.on('error', error => reject(error))
})
}


@@ -1,19 +0,0 @@
/* eslint-env jest */
import { Readable } from 'readable-stream'
import createBufferFromStream from './_createBufferFromStream.js'
describe('createBufferFromStream', () => {
it('should create a buffer from a stream', async () => {
const stream = new Readable({
read() {
this.push('hello')
this.push(null)
},
})
const buffer = await createBufferFromStream(stream)
expect(buffer.toString()).toBe('hello')
})
})


@@ -1,4 +0,0 @@
export default function guessAwsRegion(host) {
const matches = /^s3\.([^.]+)\.amazonaws.com$/.exec(host)
return matches !== null ? matches[1] : 'us-east-1'
}


@@ -1,17 +0,0 @@
/* eslint-env jest */
import guessAwsRegion from './_guessAwsRegion.js'
describe('guessAwsRegion', () => {
it('should return region from AWS URL', async () => {
const region = guessAwsRegion('s3.test-region.amazonaws.com')
expect(region).toBe('test-region')
})
it('should return default region if none is found is AWS URL', async () => {
const region = guessAwsRegion('s3.amazonaws.com')
expect(region).toBe('us-east-1')
})
})


@@ -0,0 +1,9 @@
import path from 'path'
const { resolve } = path.posix
// normalize the path:
// - does not contains `.` or `..` (cannot escape root dir)
// - always starts with `/`
const normalizePath = path => resolve('/', path)
export { normalizePath as default }
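Since `normalizePath` is just `path.posix.resolve('/', …)`, its contract is easy to verify:
```js
import path from 'path'
const { resolve } = path.posix

resolve('/', 'foo/../../bar') // => '/bar' (cannot climb above the root)
resolve('/', './a//b/') // => '/a/b' (always absolute, no duplicate or trailing slashes)
```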


@@ -1,21 +0,0 @@
import path from 'path'
const { basename, dirname, join, resolve, sep } = path.posix
export { basename, dirname, join }
// normalize the path:
// - does not contains `.` or `..` (cannot escape root dir)
// - always starts with `/`
// - no trailing slash (expect for root)
// - no duplicate slashes
export const normalize = path => resolve('/', path)
export function split(path) {
const parts = normalize(path).split(sep)
// remove first (empty) entry
parts.shift()
return parts
}


@@ -1,5 +1,6 @@
import asyncMapSettled from '@xen-orchestra/async-map/legacy'
import getStream from 'get-stream'
import path, { basename } from 'path'
import { coalesceCalls } from '@vates/coalesce-calls'
import { fromCallback, fromEvent, ignoreErrors, timeout } from 'promise-toolbox'
import { limitConcurrency } from 'limit-concurrency-decorator'
@@ -8,9 +9,11 @@ import { pipeline } from 'stream'
import { randomBytes } from 'crypto'
import { synchronized } from 'decorator-synchronized'
import { basename, dirname, normalize as normalizePath } from './_path'
import normalizePath from './_normalizePath'
import { createChecksumStream, validChecksumOfReadStream } from './checksum'
const { dirname } = path.posix
const checksumFile = file => file + '.checksum'
const computeRate = (hrtime, size) => {
const seconds = hrtime[0] + hrtime[1] / 1e9
@@ -548,9 +551,7 @@ export default class RemoteHandlerAbstract {
const files = await this._list(dir)
await asyncMapSettled(files, file =>
this._unlink(`${dir}/${file}`).catch(error => {
// Unlink dir behavior is not consistent across platforms
// https://github.com/nodejs/node-v0.x-archive/issues/5791
if (error.code === 'EISDIR' || error.code === 'EPERM') {
if (error.code === 'EISDIR') {
return this._rmtree(`${dir}/${file}`)
}
throw error


@@ -1,32 +1,13 @@
import {
AbortMultipartUploadCommand,
CompleteMultipartUploadCommand,
CopyObjectCommand,
CreateMultipartUploadCommand,
DeleteObjectCommand,
GetObjectCommand,
HeadObjectCommand,
ListObjectsV2Command,
PutObjectCommand,
S3Client,
UploadPartCommand,
UploadPartCopyCommand,
} from '@aws-sdk/client-s3'
import { Upload } from '@aws-sdk/lib-storage'
import { NodeHttpHandler } from '@aws-sdk/node-http-handler'
import aws from '@sullux/aws-sdk'
import assert from 'assert'
import { Agent as HttpAgent } from 'http'
import { Agent as HttpsAgent } from 'https'
import http from 'http'
import https from 'https'
import pRetry from 'promise-toolbox/retry'
import { createLogger } from '@xen-orchestra/log'
import { decorateWith } from '@vates/decorate-with'
import { PassThrough, pipeline } from 'stream'
import { parse } from 'xo-remote-parser'
import copyStreamToBuffer from './_copyStreamToBuffer.js'
import createBufferFromStream from './_createBufferFromStream.js'
import guessAwsRegion from './_guessAwsRegion.js'
import RemoteHandlerAbstract from './abstract'
import { basename, join, split } from './_path'
import { asyncEach } from '@vates/async-each'
// endpoints https://docs.aws.amazon.com/general/latest/gr/s3.html
@@ -43,107 +24,78 @@ const { warn } = createLogger('xo:fs:s3')
export default class S3Handler extends RemoteHandlerAbstract {
constructor(remote, _opts) {
super(remote)
const {
allowUnauthorized,
host,
path,
username,
password,
protocol,
region = guessAwsRegion(host),
} = parse(remote.url)
this._s3 = new S3Client({
const { allowUnauthorized, host, path, username, password, protocol, region } = parse(remote.url)
const params = {
accessKeyId: username,
apiVersion: '2006-03-01',
endpoint: `${protocol}://${host}`,
forcePathStyle: true,
credentials: {
accessKeyId: username,
secretAccessKey: password,
endpoint: host,
s3ForcePathStyle: true,
secretAccessKey: password,
signatureVersion: 'v4',
httpOptions: {
timeout: 600000,
},
tls: protocol === 'https',
region,
requestHandler: new NodeHttpHandler({
socketTimeout: 600000,
httpAgent: new HttpAgent({
keepAlive: true,
}),
httpsAgent: new HttpsAgent({
rejectUnauthorized: !allowUnauthorized,
keepAlive: true,
}),
}),
})
}
if (protocol === 'http') {
params.httpOptions.agent = new http.Agent({ keepAlive: true })
params.sslEnabled = false
} else if (protocol === 'https') {
params.httpOptions.agent = new https.Agent({
rejectUnauthorized: !allowUnauthorized,
keepAlive: true,
})
}
if (region !== undefined) {
params.region = region
}
const parts = split(path)
this._bucket = parts.shift()
this._dir = join(...parts)
this._s3 = aws(params).s3
const splitPath = path.split('/').filter(s => s.length)
this._bucket = splitPath.shift()
this._dir = splitPath.join('/')
}
get type() {
return 's3'
}
_makeCopySource(path) {
return join(this._bucket, this._dir, path)
}
_makeKey(file) {
return join(this._dir, file)
}
_makePrefix(dir) {
return join(this._dir, dir, '/')
}
_createParams(file) {
return { Bucket: this._bucket, Key: this._makeKey(file) }
return { Bucket: this._bucket, Key: this._dir + file }
}
async _multipartCopy(oldPath, newPath) {
const size = await this._getSize(oldPath)
const CopySource = this._makeCopySource(oldPath)
const multipartParams = await this._s3.send(new CreateMultipartUploadCommand({ ...this._createParams(newPath) }))
const CopySource = `/${this._bucket}/${this._dir}${oldPath}`
const multipartParams = await this._s3.createMultipartUpload({ ...this._createParams(newPath) })
const param2 = { ...multipartParams, CopySource }
try {
const parts = []
let start = 0
while (start < size) {
const partNumber = parts.length + 1
const upload = await this._s3.send(
new UploadPartCopyCommand({
...multipartParams,
CopySource,
CopySourceRange: `bytes=${start}-${Math.min(start + MAX_PART_SIZE, size) - 1}`,
PartNumber: partNumber,
})
)
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partNumber })
const range = `bytes=${start}-${Math.min(start + MAX_PART_SIZE, size) - 1}`
const partParams = { ...param2, PartNumber: parts.length + 1, CopySourceRange: range }
const upload = await this._s3.uploadPartCopy(partParams)
parts.push({ ETag: upload.CopyPartResult.ETag, PartNumber: partParams.PartNumber })
start += MAX_PART_SIZE
}
await this._s3.send(
new CompleteMultipartUploadCommand({
...multipartParams,
MultipartUpload: { Parts: parts },
})
)
await this._s3.completeMultipartUpload({ ...multipartParams, MultipartUpload: { Parts: parts } })
} catch (e) {
await this._s3.send(new AbortMultipartUploadCommand(multipartParams))
await this._s3.abortMultipartUpload(multipartParams)
throw e
}
}
async _copy(oldPath, newPath) {
const CopySource = this._makeCopySource(oldPath)
const CopySource = `/${this._bucket}/${this._dir}${oldPath}`
try {
await this._s3.send(
new CopyObjectCommand({
...this._createParams(newPath),
CopySource,
})
)
await this._s3.copyObject({
...this._createParams(newPath),
CopySource,
})
} catch (e) {
// object > 5GB must be copied part by part
if (e.name === 'EntityTooLarge') {
if (e.code === 'EntityTooLarge') {
return this._multipartCopy(oldPath, newPath)
}
throw e
@@ -151,22 +103,20 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _isNotEmptyDir(path) {
const result = await this._s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
MaxKeys: 1,
Prefix: this._makePrefix(path),
})
)
return result.Contents?.length > 0
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
MaxKeys: 1,
Prefix: this._dir + path + '/',
})
return result.Contents.length !== 0
}
async _isFile(path) {
try {
await this._s3.send(new HeadObjectCommand(this._createParams(path)))
await this._s3.headObject(this._createParams(path))
return true
} catch (error) {
if (error.name === 'NotFound') {
if (error.code === 'NotFound') {
return false
}
throw error
@@ -174,23 +124,13 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _outputStream(path, input, { validator }) {
// Workaround for "ReferenceError: ReadableStream is not defined"
// https://github.com/aws/aws-sdk-js-v3/issues/2522
const Body = new PassThrough()
pipeline(input, Body, () => {})
const upload = new Upload({
client: this._s3,
queueSize: 1,
partSize: IDEAL_FRAGMENT_SIZE,
params: {
await this._s3.upload(
{
...this._createParams(path),
Body,
Body: input,
},
})
await upload.done()
{ partSize: IDEAL_FRAGMENT_SIZE, queueSize: 1 }
)
if (validator !== undefined) {
try {
await validator.call(this, path)
@@ -206,7 +146,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
// https://www.backblaze.com/b2/docs/calling.html#error_handling
@decorateWith(pRetry.wrap, {
delays: [100, 200, 500, 1000, 2000],
when: e => e.$metadata?.httpStatusCode === 500,
when: e => e.code === 'InternalError',
onRetry(error) {
warn('retrying writing file', {
attemptNumber: this.attemptNumber,
@@ -217,12 +157,7 @@ export default class S3Handler extends RemoteHandlerAbstract {
},
})
async _writeFile(file, data, options) {
return this._s3.send(
new PutObjectCommand({
...this._createParams(file),
Body: data,
})
)
return this._s3.putObject({ ...this._createParams(file), Body: data })
}
async _createReadStream(path, options) {
@@ -233,12 +168,12 @@ export default class S3Handler extends RemoteHandlerAbstract {
throw error
}
return (await this._s3.send(new GetObjectCommand(this._createParams(path)))).Body
// https://github.com/Sullux/aws-sdk/issues/11
return this._s3.getObject.raw(this._createParams(path)).createReadStream()
}
async _unlink(path) {
await this._s3.send(new DeleteObjectCommand(this._createParams(path)))
await this._s3.deleteObject(this._createParams(path))
if (await this._isNotEmptyDir(path)) {
const error = new Error(`EISDIR: illegal operation on a directory, unlink '${path}'`)
error.code = 'EISDIR'
@@ -248,40 +183,38 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
async _list(dir) {
let NextContinuationToken
const uniq = new Set()
const Prefix = this._makePrefix(dir)
function splitPath(path) {
return path.split('/').filter(d => d.length)
}
do {
const result = await this._s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Prefix,
Delimiter: '/',
// will only return path until delimiters
ContinuationToken: NextContinuationToken,
})
)
const prefix = [this._dir, dir].join('/')
const splitPrefix = splitPath(prefix)
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
Prefix: splitPrefix.join('/') + '/', // need slash at the end with the use of delimiters
Delimiter: '/', // will only return path until delimiters
})
if (result.IsTruncated) {
warn(`need pagination to browse the directory ${dir} completely`)
NextContinuationToken = result.NextContinuationToken
} else {
NextContinuationToken = undefined
}
if (result.IsTruncated) {
const error = new Error('more than 1000 objects, unsupported in this implementation')
error.dir = dir
throw error
}
// subdirectories
for (const entry of result.CommonPrefixes ?? []) {
uniq.add(basename(entry.Prefix))
}
const uniq = []
// files
for (const entry of result.Contents ?? []) {
uniq.add(basename(entry.Key))
}
} while (NextContinuationToken !== undefined)
// sub directories
for (const entry of result.CommonPrefixes) {
const line = splitPath(entry.Prefix)
uniq.push(line[line.length - 1])
}
// files
for (const entry of result.Contents) {
const line = splitPath(entry.Key)
uniq.push(line[line.length - 1])
}
return [...uniq]
return uniq
}
async _mkdir(path) {
@@ -297,14 +230,14 @@ export default class S3Handler extends RemoteHandlerAbstract {
// s3 doesn't have a rename operation, so copy + delete source
async _rename(oldPath, newPath) {
await this.copy(oldPath, newPath)
await this._s3.send(new DeleteObjectCommand(this._createParams(oldPath)))
await this._s3.deleteObject(this._createParams(oldPath))
}
async _getSize(file) {
if (typeof file !== 'string') {
file = file.fd
}
const result = await this._s3.send(new HeadObjectCommand(this._createParams(file)))
const result = await this._s3.headObject(this._createParams(file))
return +result.ContentLength
}
@@ -315,11 +248,11 @@ export default class S3Handler extends RemoteHandlerAbstract {
const params = this._createParams(file)
params.Range = `bytes=${position}-${position + buffer.length - 1}`
try {
const result = await this._s3.send(new GetObjectCommand(params))
const bytesRead = await copyStreamToBuffer(result.Body, buffer)
return { bytesRead, buffer }
const result = await this._s3.getObject(params)
result.Body.copy(buffer)
return { bytesRead: result.Body.length, buffer }
} catch (e) {
if (e.name === 'NoSuchKey') {
if (e.code === 'NoSuchKey') {
if (await this._isNotEmptyDir(file)) {
const error = new Error(`${file} is a directory`)
error.code = 'EISDIR'
@@ -346,28 +279,22 @@ export default class S3Handler extends RemoteHandlerAbstract {
// @todo : use parallel processing for unlink
async _rmtree(path) {
let NextContinuationToken
const Prefix = this._makePrefix(path)
do {
const result = await this._s3.send(
new ListObjectsV2Command({
Bucket: this._bucket,
Prefix,
ContinuationToken: NextContinuationToken,
})
)
const result = await this._s3.listObjectsV2({
Bucket: this._bucket,
Prefix: this._dir + path + '/',
ContinuationToken: NextContinuationToken,
})
NextContinuationToken = result.IsTruncated ? result.NextContinuationToken : undefined
await asyncEach(
result.Contents ?? [],
result.Contents,
async ({ Key }) => {
// _unlink will add the prefix, but Key contains everything
// also we don't need to check if we delete a directory, since the list only return files
await this._s3.send(
new DeleteObjectCommand({
Bucket: this._bucket,
Key,
})
)
await this._s3.deleteObject({
Bucket: this._bucket,
Key,
})
},
{
concurrency: 16,
@@ -383,9 +310,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
const uploadParams = this._createParams(file)
let fileSize
try {
fileSize = +(await this._s3.send(new HeadObjectCommand(uploadParams))).ContentLength
fileSize = +(await this._s3.headObject(uploadParams)).ContentLength
} catch (e) {
if (e.name === 'NotFound') {
if (e.code === 'NotFound') {
fileSize = 0
} else {
throw e
@@ -393,19 +320,10 @@ export default class S3Handler extends RemoteHandlerAbstract {
}
if (fileSize < MIN_PART_SIZE) {
const resultBuffer = Buffer.alloc(Math.max(fileSize, position + buffer.length))
if (fileSize !== 0) {
const result = await this._s3.send(new GetObjectCommand(uploadParams))
await copyStreamToBuffer(result.Body, resultBuffer)
} else {
Buffer.alloc(0).copy(resultBuffer)
}
const fileContent = fileSize !== 0 ? (await this._s3.getObject(uploadParams)).Body : Buffer.alloc(0)
fileContent.copy(resultBuffer)
buffer.copy(resultBuffer, position)
await this._s3.send(
new PutObjectCommand({
...uploadParams,
Body: resultBuffer,
})
)
await this._s3.putObject({ ...uploadParams, Body: resultBuffer })
return { buffer, bytesWritten: buffer.length }
} else {
// using this trick: https://stackoverflow.com/a/38089437/72637
@@ -416,10 +334,10 @@ export default class S3Handler extends RemoteHandlerAbstract {
// `edit` will always be an upload part
// `suffix` will always be sourced from uploadPartCopy()
// Then everything will be sliced in 5Gb parts before getting uploaded
const multipartParams = await this._s3.send(new CreateMultipartUploadCommand(uploadParams))
const multipartParams = await this._s3.createMultipartUpload(uploadParams)
const copyMultipartParams = {
...multipartParams,
CopySource: this._makeCopySource(file),
CopySource: `/${this._bucket}/${this._dir + file}`,
}
try {
const parts = []
@@ -446,20 +364,14 @@ export default class S3Handler extends RemoteHandlerAbstract {
assert.strictEqual(fragmentEnd - prefixPosition <= MAX_PART_SIZE, true)
const range = `bytes=${prefixPosition}-${fragmentEnd - 1}`
const copyPrefixParams = { ...copyMultipartParams, PartNumber: partNumber++, CopySourceRange: range }
const part = await this._s3.send(new UploadPartCopyCommand(copyPrefixParams))
const part = await this._s3.uploadPartCopy(copyPrefixParams)
parts.push({ ETag: part.CopyPartResult.ETag, PartNumber: copyPrefixParams.PartNumber })
prefixPosition += prefixFragmentSize
}
if (prefixLastFragmentSize) {
// grab everything from the prefix that was too small to be copied, download and merge to the edit buffer.
const downloadParams = { ...uploadParams, Range: `bytes=${prefixPosition}-${prefixSize - 1}` }
let prefixBuffer
if (prefixSize > 0) {
const result = await this._s3.send(new GetObjectCommand(downloadParams))
prefixBuffer = await createBufferFromStream(result.Body)
} else {
prefixBuffer = Buffer.alloc(0)
}
const prefixBuffer = prefixSize > 0 ? (await this._s3.getObject(downloadParams)).Body : Buffer.alloc(0)
editBuffer = Buffer.concat([prefixBuffer, buffer])
editBufferOffset -= prefixLastFragmentSize
}
@@ -474,12 +386,11 @@ export default class S3Handler extends RemoteHandlerAbstract {
hasSuffix = suffixSize > 0
const prefixRange = `bytes=${complementOffset}-${complementOffset + complementSize - 1}`
const downloadParams = { ...uploadParams, Range: prefixRange }
const result = await this._s3.send(new GetObjectCommand(downloadParams))
const complementBuffer = await createBufferFromStream(result.Body)
const complementBuffer = (await this._s3.getObject(downloadParams)).Body
editBuffer = Buffer.concat([editBuffer, complementBuffer])
}
const editParams = { ...multipartParams, Body: editBuffer, PartNumber: partNumber++ }
const editPart = await this._s3.send(new UploadPartCommand(editParams))
const editPart = await this._s3.uploadPart(editParams)
parts.push({ ETag: editPart.ETag, PartNumber: editParams.PartNumber })
if (hasSuffix) {
// use ceil because the last fragment can be arbitrarily small.
@@ -490,19 +401,17 @@ export default class S3Handler extends RemoteHandlerAbstract {
assert.strictEqual(Math.min(fileSize, fragmentEnd) - suffixFragmentOffset <= MAX_PART_SIZE, true)
const suffixRange = `bytes=${suffixFragmentOffset}-${Math.min(fileSize, fragmentEnd) - 1}`
const copySuffixParams = { ...copyMultipartParams, PartNumber: partNumber++, CopySourceRange: suffixRange }
const suffixPart = (await this._s3.send(new UploadPartCopyCommand(copySuffixParams))).CopyPartResult
const suffixPart = (await this._s3.uploadPartCopy(copySuffixParams)).CopyPartResult
parts.push({ ETag: suffixPart.ETag, PartNumber: copySuffixParams.PartNumber })
suffixFragmentOffset = fragmentEnd
}
}
await this._s3.send(
new CompleteMultipartUploadCommand({
...multipartParams,
MultipartUpload: { Parts: parts },
})
)
await this._s3.completeMultipartUpload({
...multipartParams,
MultipartUpload: { Parts: parts },
})
} catch (e) {
await this._s3.send(new AbortMultipartUploadCommand(multipartParams))
await this._s3.abortMultipartUpload(multipartParams)
throw e
}
}


@@ -1,14 +1,14 @@
import { parse } from 'xo-remote-parser'
import MountHandler from './_mount'
import { normalize } from './_path'
import normalizePath from './_normalizePath'
export default class SmbMountHandler extends MountHandler {
constructor(remote, opts) {
const { domain = 'WORKGROUP', host, password, path, username } = parse(remote.url)
super(remote, opts, {
type: 'cifs',
device: '//' + host + normalize(path),
device: '//' + host + normalizePath(path),
options: `domain=${domain}`,
env: {
USER: username,


@@ -20,7 +20,7 @@
">2%"
],
"engines": {
"node": ">=8.3"
"node": ">=6"
},
"dependencies": {
"lodash": "^4.17.4",


@@ -1,8 +1,8 @@
'use strict'
const fromCallback = require('promise-toolbox/fromCallback')
const nodemailer = require('nodemailer') // eslint-disable-line n/no-extraneous-require
const prettyFormat = require('pretty-format') // eslint-disable-line n/no-extraneous-require
const nodemailer = require('nodemailer') // eslint-disable-line n/no-extraneous-import
const prettyFormat = require('pretty-format') // eslint-disable-line n/no-extraneous-import
const { evalTemplate, required } = require('../utils')
const { NAMES } = require('../levels')


@@ -1,9 +1,7 @@
'use strict'
const fromCallback = require('promise-toolbox/fromCallback')
// eslint-disable-next-line n/no-missing-require
const splitHost = require('split-host')
// eslint-disable-next-line n/no-missing-require
const { createClient, Facility, Severity, Transport } = require('syslog-client')
const LEVELS = require('../levels')


@@ -0,0 +1,3 @@
'use strict'
module.exports = require('../../@xen-orchestra/babel-config')(require('./package.json'))


@@ -0,0 +1 @@
../../scripts/babel-eslintrc.js


@@ -18,11 +18,12 @@
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"preferGlobal": true,
"main": "dist/",
"bin": {
"xo-proxy-cli": "./index.js"
"xo-proxy-cli": "dist/index.js"
},
"engines": {
"node": ">=14"
"node": ">=12"
},
"dependencies": {
"@iarna/toml": "^2.2.0",
@@ -38,8 +39,23 @@
"pumpify": "^2.0.1",
"split2": "^4.1.0"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
"@babel/core": "^7.0.0",
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.7.4",
"@babel/plugin-proposal-optional-chaining": "^7.0.0",
"@babel/preset-env": "^7.0.0",
"cross-env": "^7.0.2",
"rimraf": "^3.0.0"
},
"scripts": {
"postversion": "npm publish --access public"
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
"clean": "rimraf dist/",
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"postversion": "npm publish --access public",
"prebuild": "yarn run clean",
"predev": "yarn run prebuild",
"prepublishOnly": "yarn run build"
},
"author": {
"name": "Vates SAS",


@@ -1,25 +1,23 @@
#!/usr/bin/env node
'use strict'
import assert from 'assert'
import colors from 'ansi-colors'
import contentType from 'content-type'
import CSON from 'cson-parser'
import fromCallback from 'promise-toolbox/fromCallback'
import fs from 'fs'
import getopts from 'getopts'
import hrp from 'http-request-plus'
import split2 from 'split2'
import pumpify from 'pumpify'
import { extname, join } from 'path'
import { format, parse } from 'json-rpc-protocol'
import { inspect } from 'util'
import { load as loadConfig } from 'app-conf'
import { pipeline } from 'stream'
import { readChunk } from '@vates/read-chunk'
const assert = require('assert')
const colors = require('ansi-colors')
const contentType = require('content-type')
const CSON = require('cson-parser')
const fromCallback = require('promise-toolbox/fromCallback')
const fs = require('fs')
const getopts = require('getopts')
const hrp = require('http-request-plus')
const split2 = require('split2')
const pumpify = require('pumpify')
const { extname, join } = require('path')
const { format, parse } = require('json-rpc-protocol')
const { inspect } = require('util')
const { load: loadConfig } = require('app-conf')
const { pipeline } = require('stream')
const { readChunk } = require('@vates/read-chunk')
const pkg = require('./package.json')
import pkg from '../package.json'
const FORMATS = {
__proto__: null,

View File

@@ -0,0 +1 @@
../../scripts/babel-eslintrc.js

View File

@@ -1,7 +1,7 @@
{
"private": true,
"name": "@xen-orchestra/proxy",
"version": "0.20.1",
"version": "0.19.0",
"license": "AGPL-3.0-or-later",
"description": "XO Proxy used to remotely execute backup jobs",
"keywords": [
@@ -19,7 +19,7 @@
},
"preferGlobal": true,
"bin": {
"xo-proxy": "./index.mjs"
"xo-proxy": "dist/index.mjs"
},
"engines": {
"node": ">=14.18"
@@ -28,16 +28,16 @@
"@iarna/toml": "^2.2.0",
"@koa/router": "^10.0.0",
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/decorate-with": "^1.0.0",
"@vates/disposable": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/backups": "^0.21.0",
"@xen-orchestra/fs": "^1.0.0",
"@xen-orchestra/backups": "^0.20.0",
"@xen-orchestra/fs": "^0.20.0",
"@xen-orchestra/log": "^0.3.0",
"@xen-orchestra/mixin": "^0.1.0",
"@xen-orchestra/mixins": "^0.2.0",
"@xen-orchestra/self-signed": "^0.1.0",
"@xen-orchestra/xapi": "^0.10.0",
"@xen-orchestra/xapi": "^0.9.0",
"ajv": "^8.0.3",
"app-conf": "^2.0.0",
"async-iterator-to-stream": "^1.1.0",
@@ -59,19 +59,32 @@
"source-map-support": "^0.5.16",
"stoppable": "^1.0.6",
"xdg-basedir": "^5.1.0",
"xen-api": "^1.1.0",
"xo-common": "^0.8.0"
"xen-api": "^0.36.0",
"xo-common": "^0.7.0"
},
"devDependencies": {
"@babel/cli": "^7.0.0",
"@babel/core": "^7.0.0",
"@babel/plugin-proposal-class-properties": "^7.1.0",
"@babel/plugin-proposal-decorators": "^7.0.0",
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.7.4",
"@babel/plugin-proposal-optional-chaining": "^7.0.0",
"@babel/preset-env": "^7.0.0",
"@vates/toggle-scripts": "^1.0.0",
"ws": "^8.5.0"
"babel-plugin-transform-dev": "^2.0.1",
"cross-env": "^7.0.2",
"index-modules": "^0.4.3"
},
"scripts": {
"_build": "index-modules --index-file index.mjs src/app/mixins && babel --delete-dir-on-start --keep-file-extension --source-maps --out-dir=dist/ src/",
"build": "cross-env NODE_ENV=production yarn run _build",
"dev": "cross-env NODE_ENV=development yarn run _build --watch",
"_postinstall": "./scripts/systemd-service-installer",
"postpack": "toggle-scripts -postinstall -preuninstall",
"prepack": "toggle-scripts +postinstall +preuninstall",
"prepublishOnly": "yarn run build",
"_preuninstall": "./scripts/systemd-service-installer",
"start": "./index.mjs"
"start": "./dist/index.mjs"
},
"author": {
"name": "Vates SAS",

View File

@@ -3,17 +3,11 @@ import Hooks from '@xen-orchestra/mixins/Hooks.js'
import mixin from '@xen-orchestra/mixin'
import { createDebounceResource } from '@vates/disposable/debounceResource.js'
import Api from './mixins/api.mjs'
import Appliance from './mixins/appliance.mjs'
import Authentication from './mixins/authentication.mjs'
import Backups from './mixins/backups.mjs'
import Logs from './mixins/logs.mjs'
import Remotes from './mixins/remotes.mjs'
import ReverseProxy from './mixins/reverseProxy.mjs'
import mixins from './mixins/index.mjs'
export default class App {
constructor(opts) {
mixin(this, { Api, Appliance, Authentication, Backups, Config, Hooks, Logs, Remotes, ReverseProxy }, [opts])
mixin(this, { Config, Hooks, ...mixins }, [opts])
const debounceResource = createDebounceResource()
this.config.watchDuration('resourceCacheDelay', delay => {

View File

@@ -4,7 +4,7 @@ import { asyncMap } from '@xen-orchestra/async-map'
import { Backup } from '@xen-orchestra/backups/Backup.js'
import { compose } from '@vates/compose'
import { createLogger } from '@xen-orchestra/log'
import { decorateMethodsWith } from '@vates/decorate-with'
import { decorateWith } from '@vates/decorate-with'
import { deduped } from '@vates/disposable/deduped.js'
import { defer } from 'golike-defer'
import { DurablePartition } from '@xen-orchestra/backups/DurablePartition.js'
@@ -106,9 +106,11 @@ export default class Backups {
})(run)
run = (run =>
async function () {
const license = await app.appliance.getSelfLicense()
if (license === undefined) {
throw new JsonRpcError('no valid proxy license')
if (!__DEV__) {
const license = await app.appliance.getSelfLicense()
if (license === undefined) {
throw new JsonRpcError('no valid proxy license')
}
}
return run.apply(this, arguments)
})(run)
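A quick aside on the `run = (run => …)(run)` idiom in this hunk: each IIFE captures the previous `run` and returns a wrapper that performs a check before delegating, preserving `this` and the arguments. In isolation (all names hypothetical):

```js
let run = params => executeJob(params) // hypothetical base implementation

run = (run =>
  async function () {
    await assertPreconditions() // hypothetical guard, cf. the license check above
    return run.apply(this, arguments)
  })(run)
```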
@@ -401,6 +403,12 @@ export default class Backups {
})
}
// FIXME: invalidate cache on remote option change
@decorateWith(compose, function (resource) {
return this._app.debounceResource(resource)
})
@decorateWith(deduped, remote => [remote.url])
@decorateWith(Disposable.factory)
*getAdapter(remote) {
const app = this._app
return new RemoteAdapter(yield app.remotes.getHandler(remote), {
@@ -410,6 +418,12 @@ export default class Backups {
})
}
// FIXME: invalidate cache on options change
@decorateWith(compose, function (resource) {
return this._app.debounceResource(resource)
})
@decorateWith(deduped, ({ url }) => [url])
@decorateWith(Disposable.factory)
async *getXapi({ credentials: { username: user, password }, ...opts }) {
const xapi = new Xapi({
...this._app.config.get('xapiOptions'),
@@ -430,28 +444,3 @@ export default class Backups {
}
}
}
decorateMethodsWith(Backups, {
getAdapter: compose({ right: true }, [
// FIXME: invalidate cache on remote option change
[
compose,
function (resource) {
return this._app.debounceResource(resource)
},
],
[deduped, remote => [remote.url]],
Disposable.factory,
]),
getXapi: compose({ right: true }, [
// FIXME: invalidate cache on remote option change
[
compose,
function (resource) {
return this._app.debounceResource(resource)
},
],
[deduped, xapi => [xapi.url]],
Disposable.factory,
]),
})

View File

@@ -1,6 +1,6 @@
import Disposable from 'promise-toolbox/Disposable'
import { compose } from '@vates/compose'
import { decorateMethodsWith } from '@vates/decorate-with'
import { decorateWith } from '@vates/decorate-with'
import { deduped } from '@vates/disposable/deduped.js'
import { getHandler } from '@xen-orchestra/fs'
@@ -35,6 +35,11 @@ export default class Remotes {
})
}
// FIXME: invalidate cache on remote option change
@decorateWith(compose, function (resource) {
return this._app.debounceResource(resource)
})
@decorateWith(deduped, remote => [remote.url])
async getHandler(remote) {
const { config } = this._app
const handler = getHandler(remote, config.get('remoteOptions'))
@@ -47,16 +52,3 @@ export default class Remotes {
return new Disposable(() => handler.forget(), handler)
}
}
decorateMethodsWith(Remotes, {
getHandler: compose({ right: true }, [
// FIXME: invalidate cache on remote option change
[
compose,
function (resource) {
return this._app.debounceResource(resource)
},
],
[deduped, remote => [remote.url]],
]),
})

View File

@@ -28,7 +28,7 @@ export function backendToLocalPath(basePath, target, backendUrl) {
}
export function localToBackendUrl(basePath, target, localPath) {
let localPathWithoutBase = removeSlash(localPath.substring(basePath.length))
let localPathWithoutBase = removeSlash(localPath).substring(basePath.length)
localPathWithoutBase = './' + removeSlash(localPathWithoutBase)
const url = mergeUrl(localPathWithoutBase, target)
return url
@@ -73,12 +73,11 @@ export default class ReverseProxy {
return
}
const { path, target, ...options } = config
const url = new URL(target)
const targetUrl = localToBackendUrl(path, url, req.originalUrl || req.url)
const url = new URL(config.target)
const targetUrl = localToBackendUrl(config.path, url, req.originalUrl || req.url)
proxy.web(req, res, {
...urlToHttpOptions(targetUrl),
...options,
...config.options,
onReq: (req, { headers }) => {
headers['x-forwarded-for'] = req.socket.remoteAddress
headers['x-forwarded-proto'] = req.socket.encrypted ? 'https' : 'http'
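A note on `urlToHttpOptions` used above: it is a helper from Node's built-in `url` module (available since Node 14.18, matching this package's `engines` field) that converts a WHATWG `URL` into an options object accepted by `http.request`, so it can be spread together with per-proxy overrides. A quick sketch (the URL is a placeholder):

```js
import { urlToHttpOptions } from 'node:url'

const opts = urlToHttpOptions(new URL('https://backend.example:8443/api?x=1'))
// opts now holds protocol, hostname, port and path (plus auth when present),
// ready to be spread into http.request options together with overrides
```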

View File

@@ -17,9 +17,9 @@ catchGlobalErrors(createLogger('xo:proxy'))
const { fatal, info, warn } = createLogger('xo:proxy:bootstrap')
const APP_DIR = new URL('.', import.meta.url).pathname
const APP_DIR = new URL('..', import.meta.url).pathname
const APP_NAME = 'xo-proxy'
const APP_VERSION = JSON.parse(fse.readFileSync(new URL('package.json', import.meta.url))).version
const APP_VERSION = JSON.parse(fse.readFileSync(new URL('../package.json', import.meta.url))).version
// -------------------------------------------------------------------

View File

@@ -1,4 +1,4 @@
import ReverseProxy, { backendToLocalPath, localToBackendUrl } from '../app/mixins/reverseProxy.mjs'
import ReverseProxy, { backendToLocalPath, localToBackendUrl } from '../dist/app/mixins/reverseProxy.mjs'
import { deepEqual, strictEqual } from 'assert'
function makeApp(reverseProxies) {

View File

@@ -36,14 +36,14 @@
"fs-extra": "^10.0.0",
"get-stream": "^6.0.0",
"http-request-plus": "^0.14.0",
"human-format": "^1.0.0",
"human-format": "^0.11.0",
"lodash": "^4.17.4",
"pretty-ms": "^7.0.0",
"progress-stream": "^2.0.0",
"pw": "^0.0.4",
"xdg-basedir": "^4.0.0",
"xo-lib": "^0.11.1",
"xo-vmdk-to-vhd": "^2.2.0"
"xo-vmdk-to-vhd": "^2.1.0"
},
"devDependencies": {
"@babel/cli": "^7.0.0",

View File

@@ -0,0 +1,3 @@
'use strict'
module.exports = require('../../@xen-orchestra/babel-config')(require('./package.json'))

View File

@@ -0,0 +1 @@
../../scripts/babel-eslintrc.js

View File

@@ -1,19 +0,0 @@
'use strict'
module.exports = class Host {
async restartAgent(ref) {
const agentStartTime = +(await this.getField('host', ref, 'other_config')).agent_start_time
await this.call('host.restart_agent', ref)
await new Promise(resolve => {
// even though the ref could change in case of pool master restart, tests show it stays the same
const stopWatch = this.watchObject(ref, host => {
if (+host.other_config.agent_start_time > agentStartTime) {
stopWatch()
resolve()
}
})
})
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@xen-orchestra/xapi",
"version": "0.10.0",
"version": "0.9.0",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/xapi",
"bugs": "https://github.com/vatesfr/xen-orchestra/issues",
"repository": {
@@ -8,27 +8,43 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"main": "dist/",
"bin": {
"xo-xapi": "./cli.js"
"xo-xapi": "./dist/cli.js"
},
"engines": {
"node": ">=14"
"node": ">=8.10"
},
"devDependencies": {
"@babel/cli": "^7.2.3",
"@babel/core": "^7.3.3",
"@babel/plugin-proposal-decorators": "^7.3.0",
"@babel/preset-env": "^7.3.1",
"cross-env": "^7.0.2",
"rimraf": "^3.0.0",
"xo-common": "^0.7.0"
},
"peerDependencies": {
"xen-api": "^1.1.0"
"xen-api": "^0.36.0"
},
"scripts": {
"postversion": "npm publish --access public"
"build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
"clean": "rimraf dist/",
"dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
"postversion": "npm publish --access public",
"prebuild": "yarn run clean",
"predev": "yarn run prebuild",
"prepare": "yarn run build",
"prepublishOnly": "yarn run build"
},
"dependencies": {
"@vates/decorate-with": "^2.0.0",
"@vates/decorate-with": "^1.0.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.3.0",
"d3-time-format": "^3.0.0",
"golike-defer": "^0.5.1",
"lodash": "^4.17.15",
"promise-toolbox": "^0.21.0",
"xo-common": "^0.8.0"
"promise-toolbox": "^0.21.0"
},
"private": false,
"license": "AGPL-3.0-or-later",

View File

@@ -1,5 +1,3 @@
'use strict'
const OPAQUE_REF_RE = /OpaqueRef:[0-9a-z-]+/
module.exports = str => {

View File

@@ -1,5 +1,3 @@
'use strict'
const RUNNING_POWER_STATES = {
Running: true,
Paused: true,

View File

@@ -1,7 +1,5 @@
#!/usr/bin/env node
'use strict'
const { Xapi } = require('./')
require('xen-api/dist/cli.js')
.default(opts => new Xapi({ ignoreNobakVdis: true, ...opts }))

View File

@@ -1,5 +1,3 @@
'use strict'
const assert = require('assert')
const pRetry = require('promise-toolbox/retry')
const { utcFormat, utcParse } = require('d3-time-format')
@@ -163,10 +161,6 @@ class Xapi extends Base {
return stopWatch
}
// Watch an object for changes.
//
// Predicate can be either an id, a UUID, an opaque reference or a
// function.
watchObject(predicate, cb) {
if (typeof predicate === 'function') {
const genericWatchers = this._genericWatchers
@@ -212,7 +206,6 @@ function mixin(mixins) {
}
mixin({
task: require('./task.js'),
host: require('./host.js'),
VBD: require('./vbd.js'),
VDI: require('./vdi.js'),
VIF: require('./vif.js'),
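The comment removed from `watchObject` above documented its contract: the predicate may be an id, a UUID, an opaque reference, or a function, and the returned function cancels the watch. A hypothetical usage sketch, patterned on the `restartAgent` helper deleted earlier in this diff (`xapi`, `hostRef` and `agentStartTime` assumed in scope):

```js
// resolves once the host agent has restarted; stopWatch() unsubscribes
await new Promise(resolve => {
  const stopWatch = xapi.watchObject(hostRef, host => {
    if (+host.other_config.agent_start_time > agentStartTime) {
      stopWatch()
      resolve()
    }
  })
})
```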

View File

@@ -1,3 +1 @@
'use strict'
module.exports = vmTpl => vmTpl.is_default_template || vmTpl.other_config.default_template === 'true'

View File

@@ -1,5 +1,3 @@
'use strict'
const ignoreErrors = require('promise-toolbox/ignoreErrors')
module.exports = class Task {

View File

@@ -1,5 +1,3 @@
'use strict'
const identity = require('lodash/identity.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
const { Ref } = require('xen-api')

View File

@@ -1,19 +1,21 @@
'use strict'
const CancelToken = require('promise-toolbox/CancelToken')
const pCatch = require('promise-toolbox/catch')
const pRetry = require('promise-toolbox/retry')
const { decorateClass } = require('@vates/decorate-with')
const { decorateWith } = require('@vates/decorate-with')
const extractOpaqueRef = require('./_extractOpaqueRef.js')
const noop = Function.prototype
class Vdi {
module.exports = class Vdi {
async clone(vdiRef) {
return extractOpaqueRef(await this.callAsync('VDI.clone', vdiRef))
}
// work around a race condition in XCP-ng/XenServer where the disk is not fully unmounted yet
@decorateWith(pRetry.wrap, function () {
return this._vdiDestroyRetryWhenInUse
})
async destroy(vdiRef) {
await pCatch.call(
this.callAsync('VDI.destroy', vdiRef),
@@ -111,14 +113,3 @@ class Vdi {
}
}
}
module.exports = Vdi
decorateClass(Vdi, {
// work around a race condition in XCP-ng/XenServer where the disk is not fully unmounted yet
destroy: [
pRetry.wrap,
function () {
return this._vdiDestroyRetryWhenInUse
},
],
})
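For context on the retried `destroy` above: `pRetry.wrap` from `promise-toolbox/retry` wraps a function so every call is retried according to its options, and the options may be supplied as a function evaluated at call time with the call's `this` — which is how `_vdiDestroyRetryWhenInUse` is looked up on the Xapi instance. A hedged sketch; the options object shown is illustrative, not the package's actual defaults:

```js
const pRetry = require('promise-toolbox/retry')

const destroy = pRetry.wrap(
  async function (vdiRef) {
    await this.callAsync('VDI.destroy', vdiRef)
  },
  function () {
    // resolved per call, with `this` bound to the Xapi instance
    return this._vdiDestroyRetryWhenInUse // e.g. { delay: 5e3, retries: 5, when: { code: 'VDI_IN_USE' } }
  }
)
```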

View File

@@ -1,5 +1,3 @@
'use strict'
const isVmRunning = require('./_isVmRunning.js')
module.exports = class Vif {

View File

@@ -1,5 +1,3 @@
'use strict'
const CancelToken = require('promise-toolbox/CancelToken')
const groupBy = require('lodash/groupBy.js')
const ignoreErrors = require('promise-toolbox/ignoreErrors')
@@ -9,7 +7,7 @@ const pCatch = require('promise-toolbox/catch')
const pRetry = require('promise-toolbox/retry')
const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { decorateClass } = require('@vates/decorate-with')
const { decorateWith } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { incorrectState } = require('xo-common/api-errors.js')
const { Ref } = require('xen-api')
@@ -55,7 +53,7 @@ async function safeGetRecord(xapi, type, ref) {
const noop = Function.prototype
class Vm {
module.exports = class Vm {
async _assertHealthyVdiChain(vdiRefOrUuid, cache, tolerance) {
let vdi = cache[vdiRefOrUuid]
if (vdi === undefined) {
@@ -134,21 +132,7 @@ class Vm {
name_label = await this.getField('VM', vmRef, 'name_label')
}
try {
const ref = await this.callAsync(cancelToken, 'VM.checkpoint', vmRef, name_label).then(extractOpaqueRef)
// VM checkpoints are marked as templates, unfortunately it does not play well with XVA export/import
// which will import them as templates and not VM checkpoints or plain VMs
await pCatch.call(
this.setField('VM', ref, 'is_a_template', false),
// Ignore if this fails due to license restriction
//
// see https://bugs.xenserver.org/browse/XSO-766
{ code: 'LICENSE_RESTRICTION' },
noop
)
return ref
return await this.callAsync(cancelToken, 'VM.checkpoint', vmRef, name_label).then(extractOpaqueRef)
} catch (error) {
if (error.code === 'VM_BAD_POWER_STATE') {
return this.VM_snapshot(vmRef, { cancelToken, name_label })
@@ -157,6 +141,7 @@ class Vm {
}
}
@decorateWith(defer)
async create(
$defer,
{
@@ -340,7 +325,7 @@ class Vm {
// destroyed even if this fails
await this.call('VM.destroy', vmRef)
await Promise.all([
return Promise.all([
asyncMap(vm.snapshots, snapshotRef =>
this.VM_destroy(snapshotRef).catch(error => {
warn('VM_destroy: failed to destroy snapshot', {
@@ -373,6 +358,7 @@ class Vm {
])
}
@decorateWith(defer)
async export($defer, vmRef, { cancelToken = CancelToken.none, compress = false, useSnapshot } = {}) {
const vm = await this.getRecord('VM', vmRef)
const taskRef = await this.task_create('VM export', vm.name_label)
@@ -477,6 +463,7 @@ class Vm {
}
}
@decorateWith(defer)
async snapshot($defer, vmRef, { cancelToken = CancelToken.none, name_label } = {}) {
const vm = await this.getRecord('VM', vmRef)
// cannot unplug VBDs on Running, Paused and Suspended VMs
@@ -583,10 +570,3 @@ class Vm {
return ref
}
}
module.exports = Vm
decorateClass(Vm, {
create: defer,
export: defer,
snapshot: defer,
})
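The `create`, `export` and `snapshot` methods above are wrapped with `defer` from `golike-defer`: the decorated function receives an extra leading `$defer` argument for registering cleanups that run when the function settles (with `onFailure`/`onSuccess` variants). A minimal sketch under those assumptions (`createVdi`/`attachVdi`/`destroyVdi` are hypothetical helpers):

```js
const { defer } = require('golike-defer')

const provision = defer(async function ($defer, xapi) {
  const vdiRef = await createVdi(xapi)

  // roll back the VDI if anything below throws
  $defer.onFailure(() => destroyVdi(xapi, vdiRef))

  await attachVdi(xapi, vdiRef)
  return vdiRef
})
```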

View File

@@ -1,79 +1,8 @@
# ChangeLog
## **5.69.2** (2022-04-13)
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Enhancements
- [Rolling Pool Update] New algorithm for XCP-ng updates (PR [#6188](https://github.com/vatesfr/xen-orchestra/pull/6188))
### Bug fixes
- [Plugins] Automatically configure plugins when a configuration file is imported (PR [#6171](https://github.com/vatesfr/xen-orchestra/pull/6171))
- [VMDK Export] Fix `VBOX_E_FILE_ERROR (0x80BB0004)` when importing in VirtualBox (PR [#6163](https://github.com/vatesfr/xen-orchestra/pull/6163))
- [Backup] Fix "Cannot read properties of undefined" error when restoring from a proxied remote (PR [#6179](https://github.com/vatesfr/xen-orchestra/pull/6179))
- [Rolling Pool Update] Fix "cannot read properties of undefined" error [#6170](https://github.com/vatesfr/xen-orchestra/issues/6170) (PR [#6186](https://github.com/vatesfr/xen-orchestra/pull/6186))
### Released packages
- xen-api 1.1.0
- xo-vmdk-to-vhd 2.2.0
- @xen-orchestra/proxy 0.20.1
- xo-server 5.90.2
## **5.69.1** (2022-03-31)
### Bug fixes
- [Backup] Fix `plan enterprise is not defined in the PLANS object` (PR [#6168](https://github.com/vatesfr/xen-orchestra/pull/6168))
### Released packages
- xo-server 5.90.2
## **5.69.0** (2022-03-31)
### Highlights
- [REST API] Expose networks, VBDs, VDIs and VIFs
- [Console] Supports host and VM consoles behind HTTP proxies [#6133](https://github.com/vatesfr/xen-orchestra/pull/6133)
- [Install patches] Disable patch installation when `High Availability` is enabled (PR [#6145](https://github.com/vatesfr/xen-orchestra/pull/6145))
- [Delta Backup/Restore] Ability to ignore some VDIs (PR [#6143](https://github.com/vatesfr/xen-orchestra/pull/6143))
- [Import VM] Ability to import a VM from a URL (PR [#6130](https://github.com/vatesfr/xen-orchestra/pull/6130))
### Enhancements
- [Rolling Pool Update] Don't update if some of the hosts are not running
- [VM form] Add link to documentation on secure boot in the Advanced tab (PR [#6146](https://github.com/vatesfr/xen-orchestra/pull/6146))
- [Install patches] Update confirmation messages for patch installation (PR [#6159](https://github.com/vatesfr/xen-orchestra/pull/6159))
### Bug fixes
- [Rolling Pool Update] Don't fail if `load-balancer` plugin is missing (Starter and Enterprise plans)
- [Backup/Restore] Fix missing backups on Backblaze
- [Templates] Fix "incorrect state" error when trying to delete a default template [#6124](https://github.com/vatesfr/xen-orchestra/issues/6124) (PR [#6119](https://github.com/vatesfr/xen-orchestra/pull/6119))
- [New SR] Fix "SR_BACKEND_FAILURE_103" error when selecting "No selected value" for the path [#5991](https://github.com/vatesfr/xen-orchestra/issues/5991) (PR [#6137](https://github.com/vatesfr/xen-orchestra/pull/6137))
- [Jobs] Fix "invalid parameters" error when running jobs in some cases (PR [#6156](https://github.com/vatesfr/xen-orchestra/pull/6156))
- [New SR] Take NFS version and options into account when creating an ISO SR
- Allow a decimal when displaying small values (e.g. show _1.4 TiB_ instead of _1 TiB_ for 1,400 GiB of RAM)
### Released packages
- xo-common 0.8.0
- @vates/decorate-with 2.0.0
- xen-api 1.0.0
- @xen-orchestra/xapi 0.10.0
- @xen-orchestra/fs 1.0.0
- vhd-cli 0.7.0
- @xen-orchestra/backups 0.21.0
- @xen-orchestra/proxy 0.20.0
- xo-server 5.90.1
- xo-web 5.95.0
## **5.68.0** (2022-02-28)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
### Highlights
@@ -106,6 +35,8 @@
## **5.67.0** (2022-01-31)
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
### Highlights
- [Rolling Pool Update] Automatically pause load balancer plugin during the update [#5711](https://github.com/vatesfr/xen-orchestra/issues/5711)

View File

@@ -7,8 +7,6 @@
> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [VM export] Feat export to `ova` format (PR [#6006](https://github.com/vatesfr/xen-orchestra/pull/6006))
### Bug fixes
> Users must be able to say: “I had this issue, happy to know it's fixed”
@@ -30,6 +28,4 @@
>
> In case of conflict, the highest (lowest in previous list) `$version` wins.
- xo-vmdk-to-vhd minor
- xo-server minor
- xo-web minor
- xo-server patch

View File

@@ -1,5 +1,3 @@
'use strict'
module.exports = {
// Necessary for jest to be able to find the `.babelrc.js` closest to the file
// instead of only the one in this directory.

View File

@@ -1,24 +0,0 @@
# Last version of Ubuntu with blktap-utils
FROM ubuntu:xenial
# https://qastack.fr/programming/25899912/how-to-install-nvm-in-docker
RUN apt-get update
RUN apt-get install -y curl qemu-utils blktap-utils vmdk-stream-converter git libxml2-utils
ENV NVM_DIR /usr/local/nvm
RUN mkdir -p /usr/local/nvm
RUN cd /usr/local/nvm
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
ENV NODE_VERSION v17.0.1
RUN /bin/bash -c "source $NVM_DIR/nvm.sh && nvm install $NODE_VERSION && nvm use --delete-prefix $NODE_VERSION"
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/$NODE_VERSION/bin:$PATH
RUN npm install -g yarn
WORKDIR /xen-orchestra
# invalidate build on package change
COPY ./yarn.lock /xen-orchestra/yarn.lock
ENTRYPOINT yarn ci

View File

@@ -1,11 +0,0 @@
version: '3.3'
services:
xo:
build:
context: "${PWD}"
dockerfile: "${PWD}/docker/Dockerfile"
ports:
- 8000:80
volumes:
- ${PWD}:/xen-orchestra

View File

@@ -52,7 +52,6 @@ module.exports = {
['/advanced', 'Advanced features'],
['/load_balancing', 'VM Load Balancing'],
['/sdn_controller', 'SDN Controller'],
['/restapi', 'REST API'],
['/xosan', 'XOSANv1'],
['/xosanv2', 'XOSANv2'],
],

View File

@@ -339,7 +339,7 @@ XO will try to find the right prefix for each IP address. If it can't find a pre
:::
- Generate a token:
- Go to Profile > API Tokens > Add a token
- Go to Admin > Tokens > Add token
- Create a token with "Write enabled"
- The owner of the token must have at least the following permissions:
- View permissions on:
@@ -351,15 +351,15 @@ XO will try to find the right prefix for each IP address. If it can't find a pre
- virtualization > clusters
- virtualization > interfaces
- virtualization > virtual-machines
- Add a UUID custom field:
- Go to Other > Custom fields > Add
- Add a UUID custom field (for **Netbox 2.x**):
Go to Admin > Custom fields > Add custom field
- Create a custom field called "uuid" (lower case!)
- Assign it to object types `virtualization > cluster` and `virtualization > virtual machine`
![](./assets/customfield.png)
:::tip
In Netbox 2.x, custom fields can be created from the Admin panel > Custom fields > Add custom field.
In Netbox 3.x, custom fields can be found directly in the main UI (no need to go into the admin section): they are available under "Other/Customization/Custom Fields". After creating the `uuid` field, assign it to the object types `virtualization > cluster` and `virtualization > virtual machine`.
:::
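If you would rather script the custom field creation above, Netbox ≥ 2.10 exposes custom fields through its REST API. The sketch below is unverified and the payload field names vary across Netbox versions, so double-check against your instance's API documentation; `netbox.example` and the token are placeholders:

```js
// Node ≥ 18 (global fetch); hypothetical sketch, not tested against Netbox
await fetch('https://netbox.example/api/extras/custom-fields/', {
  method: 'POST',
  headers: {
    Authorization: `Token ${process.env.NETBOX_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'uuid',
    type: 'text',
    content_types: ['virtualization.cluster', 'virtualization.virtualmachine'],
  }),
})
```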
### In Xen Orchestra

View File

@@ -236,10 +236,10 @@ encoding by prefixing with `json:`:
## API
Our web UI (`xo-web`) and CLI (`xo-cli`) both talk to `xo-server` via the same API. This API works in a kind of "connected mode", using JSON-RPC through websockets, which lets the client subscribe to events and always stay up to date.
Because `xo-server` is already serving our web UI (`xo-web`) and CLI (`xo-cli`), it exposes an API that you can use directly for advanced integration in your IT infrastructure (automation, as a VPS vendor, etc.).
:::warning
However, this API was initially meant to only be private. Also, as it's JSON-RPC inside websockets, it's not trivial to use. If you want to make calls in an easy fashion, you should take a look at our [REST API](restapi.md#rest-api).
However, this API isn't 100% guaranteed to be stable. Use it with caution.
:::
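As an illustration of the JSON-RPC-over-websockets flow, here is a small sketch using `xo-lib`, the client library published from this repository (the URL, credentials and method name are placeholders):

```js
import Xo from 'xo-lib'

const xo = new Xo({ url: 'https://xo.example' })
await xo.open() // opens the websocket connection
await xo.signIn({ email: 'admin@admin.net', password: 'changeme' })

// every API method is invoked the same way: a name and a params object
const objects = await xo.call('xo.getAllObjects')
```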
If you need assistance on how to use it:

Binary file not shown (before: 8.7 KiB)

Binary file not shown (before: 84 KiB)

Binary file not shown (before: 57 KiB, after: 68 KiB)

Binary file not shown (before: 4.2 KiB)

Binary file not shown (before: 132 KiB, after: 120 KiB)

Binary file not shown (before: 85 KiB)

Some files were not shown because too many files have changed in this diff.