Compare commits

368 commits: fix_fallba...xapi-typeg

SHA1:

4b5d97f978 8c14906a60 62591e1f6f ea4a888c5e 281a1cc549 d52dcd0708 d8e01b2867 dca3f39156
31e964fe0f 39d973c43f 55f921959d 6598090662 d7f29e7363 82df6089c3 80cc66964e 7883d38622
2cb5169b6d 9ad2c07984 a9c1239149 cb1223f72e 4dc7575d5b 276d1ce60a 58ab32a623 c1846e6ff3
826de17111 8a09ea8bc1 1297c925ad 74d15e1a92 ae373c3e77 e9b90caa3a b89e77a6a4 61691ac46b
512b96af24 d369593979 2f38e0564b 5e8dd4e4bc 8f9f1f566d d7870b8860 97fa23f890 f839887da8
15bfaa15ca 4a3183ffa0 18d03a076b 4bed4195ac a963878af5 d6c3dc87e0 5391a9a5ad b50e95802c
75a9799e96 dbb9e4d60f d27b6bd49d c5d2726faa a2a98c490f e2dc1d98f1 658c26d3c9 612095789a
7418d9f670 f344c58a62 36b94f745d 08cdcf4112 76813737ef 53d15d6a77 dd01b62b87 9fab15537b
d87db05b2b f1f32c962c ad149740b1 9a4e938b91 a226760b07 a11450c3a7 e0cab4f937 468250f291
d04b93c17e 911556a1aa c7d3230eef b63086bf09 a4118a5676 26e7e6467c 1c9552fa58 9875cb5575
d1c6bb8829 ef7005a291 8068b83ffe f01a89710c 38ced81ada 9834632d59 bb4504dd50 8864c2f2db
19208472e6 10c77ba3cc cd28fd4945 6778d6aa4a 433851d771 d157fd3528 9150823c37 07c3a44441
051bbf9449 22ea1c0e2a 6432a44860 493d861de3 82452e9616 2fbeaa618a 6c08afaa0e af4cc1f574
2fb27b26cd 11e09e1f87 9ccb5f8aa9 af87d6a0ea d847f45cb3 38c615609a 144cc4b82f d24ab141e9
8505374fcf e53d961fc3 dc8ca7a8ee 3d1b87d9dc 01fa2af5cd 20a89ca45a 16ca2f8da9 30fe9764ad
e246c8ee47 ba03a48498 b96dd0160a 49890a09b7 dfce56cee8 a6fee2946a 34c849ee89 c7192ed3bf
4d3dc0c5f7 9ba4afa073 3ea4422d13 de2e314f7d 2380fb42fe 95b76076a3 b415d4c34c 2d82b6dd6e
16b1935f12 50ec614b2a 9e11a0af6e 0c3e42e0b9 36b31bb0b3 c03c41450b dfc2b5d88b 87e3e3ffe3
dae37c6a50 c7df11cc6f 87f1f208c3 ba8c5d740e c275d5d999 cfc53c9c94 87df917157 395d87d290
aff8ec08ad 4d40b56d85 667d0724c3 a49395553a cce09bd9cc 03a66e4690 fd752fee80 8a71f84733
9ef2c7da4c 8975073416 d1c1378c9d 7941284a1d af2d17b7a5 3ca2b01d9a 67193a2ab7 9757aa36de
29854a9f87 b12c179470 bbef15e4e4 c483929a0d 1741f395dd 0f29262797 31ed477b96 9e5de5413d
0f297a81a4 89313def99 8e0be4edaf a8dfdfb922 f096024248 4f50f90213 4501902331 df19679dba
9f5a2f67f9 2d5c406325 151b8a8940 cda027b94a ee2117abf6 6e7294d49f 062e45f697 d18b39990d
7387ac2411 4186592f9f 6c9d5a72a6 83690a4dd4 c11e03ab26 c7d8709267 6579deffad e2739e7a4b
c0d587f541 05a96ffc14 32a47444d7 9ff5de5f33 09badf33d0 1643d3637f b962e9ebe8 66f3528e10
a5e9f051a2 63bfb76516 f88f7d41aa 877383ac85 dd5e11e835 3d43550ffe 115bc8fa0a 15c46e324c
df38366066 28b13ccfff 26a433ebbe 1902595190 80146cfb58 03d2d6fc94 379e4d7596 9860bd770b
2af5328a0f 4084a44f83 ba7c7ddb23 2351e7b98c d353dc622c 3ef6adfd02 5063a6982a 0008f2845c
a0994bc428 8fe0d97aec a8b3c02780 f3489fb57c 434b5b375d 445120f9f5 71b11f0d9c 8297a9e0e7
4999672f2d 70608ed7e9 a0836ebdd7 2b1edd1d4c 42bb7cc973 8299c37bb7 7a2005c20c ae0eb9e66e
052126613a 7959657bd6 9f8bb376ea ee8e2fa906 33a380b173 6e5b6996fa 6409dc276c 98f7ce43e3
aa076e1d2d 7a096d1b5c 93b17ccddd 68c118c3e5 c0b0ba433f d7d81431ef 7451f45885 c9882001a9
837b06ef2b 0e49150b8e 0ec5f4bf68 601730d737 28eb4b21bd a5afe0bca1 ad5691dcb2 80974fa1dc
78330a0e11 b6cff2d784 cae3555ca7 1f9cf458ec d9ead2d9f5 92660fd03e 5393d847f0 231f09de12
b75ca2700b bae7ef9067 8ec8a3b4d9 5b7228ed69 b02bf90c8a 7d3546734e 030013eb5b da181345a6
30874b2206 2ed6b2dc87 41532f35d1 7a198a44cd 77d615d15b c7bc397c85 38388cc297 a7b17b2b8c
d93afc4648 24449e41bb df6f3ed165 ca5914dbfb 3c3a1f8981 01810f35b2 5db4083414 8bf3a747f0
f0e817a8d9 b181c59698 cfa094f208 9ee5a8d089 819127da57 6e9659a797 07bd9cadd4 a1bcd35e26
1a741e18fd 2e133dd0fb ecae554a78 4bed50b4ed c92b371d9e 35e6bb30db 1aaa123f47 a8c507a1df
581e3c358f e4f1b8f2e0 29e8a7fd7e 4af289c492 cd95793054 ab71578cf2 df07d4a393 2518395c03
50f3ab7798 2d01056ea9 f40fb3bab3 fe7c60654d 728b640ff8 55c247e5d0 6be15b780a 150c552ef9
7005c1f5e5 a66ae33d5d 8ed8447665 e740719732 bfd9238f6d cca47a8149 3ecf099fe0 6f56dc0339
20108208d0 0706e6f4ff af85df611c 3c1239cfb8 50d144bf93 9a5a03d032 854ae0f65e 4fb34ffee9
bbf3dae37f e69f58eb86 c9475ddc65 31d085b6a1 173866236f b176780527 89c72fdbad 7d6e832226
c024346475 95ec5929b4 1646c50a94 b1429e1df3 6da0aa376f 1ab5503558 4b9db257fd 96f83d92fc
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 1 change)

@@ -4,7 +4,6 @@ about: Suggest an idea for this project
 title: ''
 labels: ''
 assignees: ''
-
 ---

 **Is your feature request related to a problem? Please describe.**
.gitignore (vendored, 2 changes)

@@ -10,8 +10,6 @@
 /packages/*/dist/
 /packages/*/node_modules/

 /packages/vhd-cli/src/commands/index.js

-/packages/xen-api/examples/node_modules/
-/packages/xen-api/plot.dat

@@ -14,7 +14,7 @@ Returns a promise wich rejects as soon as a call to `iteratee` throws or a promi

 `opts` is an object that can contains the following options:

-- `concurrency`: a number which indicates the maximum number of parallel call to `iteratee`, defaults to `1`
+- `concurrency`: a number which indicates the maximum number of parallel call to `iteratee`, defaults to `10`. The value `0` means no concurrency limit.
 - `signal`: an abort signal to stop the iteration
 - `stopOnError`: wether to stop iteration of first error, or wait for all calls to finish and throw an `AggregateError`, defaults to `true`

@@ -32,7 +32,7 @@ Returns a promise wich rejects as soon as a call to `iteratee` throws or a promi

 `opts` is an object that can contains the following options:

-- `concurrency`: a number which indicates the maximum number of parallel call to `iteratee`, defaults to `1`
+- `concurrency`: a number which indicates the maximum number of parallel call to `iteratee`, defaults to `10`. The value `0` means no concurrency limit.
 - `signal`: an abort signal to stop the iteration
 - `stopOnError`: wether to stop iteration of first error, or wait for all calls to finish and throw an `AggregateError`, defaults to `true`
@@ -9,7 +9,16 @@ class AggregateError extends Error {
   }
 }

-exports.asyncEach = function asyncEach(iterable, iteratee, { concurrency = 1, signal, stopOnError = true } = {}) {
+/**
+ * @template Item
+ * @param {Iterable<Item>} iterable
+ * @param {(item: Item, index: number, iterable: Iterable<Item>) => Promise<void>} iteratee
+ * @returns {Promise<void>}
+ */
+exports.asyncEach = function asyncEach(iterable, iteratee, { concurrency = 10, signal, stopOnError = true } = {}) {
+  if (concurrency === 0) {
+    concurrency = Infinity
+  }
   return new Promise((resolve, reject) => {
     const it = (iterable[Symbol.iterator] || iterable[Symbol.asyncIterator]).call(iterable)
     const errors = []
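
The hunk above bumps the default `concurrency` from 1 to 10 and maps `0` to `Infinity`. A minimal usage sketch of the documented API (the item list is made up for illustration; pass `concurrency: 1` to restore the old sequential behavior):

```js
'use strict'

const { asyncEach } = require('@vates/async-each')

async function main() {
  const items = ['a', 'b', 'c']

  // iteratee receives (item, index, iterable); with stopOnError: false,
  // all calls run and failures are collected into an AggregateError
  await asyncEach(
    items,
    async (item, index) => {
      console.log(index, item)
    },
    { concurrency: 1, stopOnError: false }
  )
}

main().catch(console.error)
```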
@@ -36,7 +36,7 @@ describe('asyncEach', () => {
   it('works', async () => {
     const iteratee = jest.fn(async () => {})

-    await asyncEach.call(thisArg, iterable, iteratee)
+    await asyncEach.call(thisArg, iterable, iteratee, { concurrency: 1 })

     expect(iteratee.mock.instances).toEqual(Array.from(values, () => thisArg))
     expect(iteratee.mock.calls).toEqual(Array.from(values, (value, index) => [value, index, iterable]))

@@ -66,7 +66,7 @@ describe('asyncEach', () => {
       }
     })

-    expect(await rejectionOf(asyncEach(iterable, iteratee, { stopOnError: true }))).toBe(error)
+    expect(await rejectionOf(asyncEach(iterable, iteratee, { concurrency: 1, stopOnError: true }))).toBe(error)
     expect(iteratee).toHaveBeenCalledTimes(2)
   })

@@ -91,7 +91,9 @@ describe('asyncEach', () => {
       }
     })

-    await expect(asyncEach(iterable, iteratee, { signal: ac.signal })).rejects.toThrow('asyncEach aborted')
+    await expect(asyncEach(iterable, iteratee, { concurrency: 1, signal: ac.signal })).rejects.toThrow(
+      'asyncEach aborted'
+    )
     expect(iteratee).toHaveBeenCalledTimes(2)
   })
 })
@@ -24,7 +24,7 @@
     "url": "https://vates.fr"
   },
   "license": "ISC",
-  "version": "0.1.0",
+  "version": "1.0.0",
   "engines": {
     "node": ">=8.10"
   },
@vates/cached-dns.lookup/.USAGE.md (new file, +30 lines)

Node does not cache queries to `dns.lookup`, which can lead application doing a lot of connections to have perf issues and to saturate Node threads pool.

This library attempts to mitigate these problems by providing a version of this function with a version short cache, applied on both errors and results.

> Limitation: `verbatim: false` option is not supported.

It has exactly the same API as the native method and can be used directly:

```js
import { createCachedLookup } from '@vates/cached-dns.lookup'

const lookup = createCachedLookup()

lookup('example.net', { all: true, family: 0 }, (error, result) => {
  if (error != null) {
    return console.warn(error)
  }
  console.log(result)
})
```

Or it can be used to replace the native implementation and speed up the whole app:

```js
// assign our cached implementation to dns.lookup
const restore = createCachedLookup().patchGlobal()

// to restore the previous implementation
restore()
```
@vates/cached-dns.lookup/.npmignore (new symbolic link, +1 line)

../../scripts/npmignore
@vates/cached-dns.lookup/README.md (new file, +63 lines)

<!-- DO NOT EDIT MANUALLY, THIS FILE HAS BEEN GENERATED -->

# @vates/cached-dns.lookup

[](https://npmjs.org/package/@vates/cached-dns.lookup)  [](https://bundlephobia.com/result?p=@vates/cached-dns.lookup) [](https://npmjs.org/package/@vates/cached-dns.lookup)

> Cached implementation of dns.lookup

## Install

Installation of the [npm package](https://npmjs.org/package/@vates/cached-dns.lookup):

```
> npm install --save @vates/cached-dns.lookup
```

## Usage

Node does not cache queries to `dns.lookup`, which can lead application doing a lot of connections to have perf issues and to saturate Node threads pool.

This library attempts to mitigate these problems by providing a version of this function with a version short cache, applied on both errors and results.

> Limitation: `verbatim: false` option is not supported.

It has exactly the same API as the native method and can be used directly:

```js
import { createCachedLookup } from '@vates/cached-dns.lookup'

const lookup = createCachedLookup()

lookup('example.net', { all: true, family: 0 }, (error, result) => {
  if (error != null) {
    return console.warn(error)
  }
  console.log(result)
})
```

Or it can be used to replace the native implementation and speed up the whole app:

```js
// assign our cached implementation to dns.lookup
const restore = createCachedLookup().patchGlobal()

// to restore the previous implementation
restore()
```

## Contributions

Contributions are _very_ welcomed, either on the documentation or on
the code.

You may:

- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
  you've encountered;
- fork and create a pull request.

## License

[ISC](https://spdx.org/licenses/ISC) © [Vates SAS](https://vates.fr)
@vates/cached-dns.lookup/index.js (new file, +72 lines)

```js
'use strict'

const assert = require('assert')
const dns = require('dns')
const LRU = require('lru-cache')

function reportResults(all, results, callback) {
  if (all) {
    callback(null, results)
  } else {
    const first = results[0]
    callback(null, first.address, first.family)
  }
}

exports.createCachedLookup = function createCachedLookup({ lookup = dns.lookup } = {}) {
  const cache = new LRU({
    max: 500,

    // 1 minute: long enough to be effective, short enough so there is no need to bother with DNS TTLs
    ttl: 60e3,
  })

  function cachedLookup(hostname, options, callback) {
    let all = false
    let family = 0
    if (typeof options === 'function') {
      callback = options
    } else if (typeof options === 'number') {
      family = options
    } else if (options != null) {
      assert.notStrictEqual(options.verbatim, false, 'not supported by this implementation')
      ;({ all = all, family = family } = options)
    }

    // cache by family option because there will be an error if there is no
    // entries for the requestion family so we cannot easily cache all families
    // and filter on reporting back
    const key = hostname + '/' + family

    const result = cache.get(key)
    if (result !== undefined) {
      setImmediate(reportResults, all, result, callback)
    } else {
      lookup(hostname, { all: true, family, verbatim: true }, function onLookup(error, results) {
        // errors are not cached because this will delay recovery after DNS/network issues
        //
        // there are no reliable way to detect if the error is real or simply
        // that there are no results for the requested hostname
        //
        // there should be much fewer errors than success, therefore it should
        // not be a big deal to not cache them
        if (error != null) {
          return callback(error)
        }

        cache.set(key, results)
        reportResults(all, results, callback)
      })
    }
  }
  cachedLookup.patchGlobal = function patchGlobal() {
    const previous = dns.lookup
    dns.lookup = cachedLookup
    return function restoreGlobal() {
      assert.strictEqual(dns.lookup, cachedLookup)
      dns.lookup = previous
    }
  }

  return cachedLookup
}
```
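
The factory above accepts a `lookup` option, which also makes the cache easy to observe in isolation. A small sketch assuming only the API shown in this diff (`fakeLookup` is a hypothetical stand-in for `dns.lookup`, returning a reserved documentation address):

```js
'use strict'

const { createCachedLookup } = require('@vates/cached-dns.lookup')

// a stub lookup to make the caching observable without real DNS traffic
let calls = 0
function fakeLookup(hostname, options, callback) {
  calls += 1
  setImmediate(callback, null, [{ address: '192.0.2.1', family: 4 }])
}

const lookup = createCachedLookup({ lookup: fakeLookup })

lookup('example.net', { all: true }, () => {
  // the second call for the same hostname/family hits the cache
  lookup('example.net', { all: true }, () => {
    console.log(calls) // → 1
  })
})
```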
@vates/cached-dns.lookup/package.json (new file, +32 lines)

```json
{
  "engines": {
    "node": ">=8"
  },
  "dependencies": {
    "lru-cache": "^7.0.4"
  },
  "private": false,
  "name": "@vates/cached-dns.lookup",
  "description": "Cached implementation of dns.lookup",
  "keywords": [
    "cache",
    "dns",
    "lookup"
  ],
  "homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@vates/cached-dns.lookup",
  "bugs": "https://github.com/vatesfr/xen-orchestra/issues",
  "repository": {
    "directory": "@vates/cached-dns.lookup",
    "type": "git",
    "url": "https://github.com/vatesfr/xen-orchestra.git"
  },
  "author": {
    "name": "Vates SAS",
    "url": "https://vates.fr"
  },
  "license": "ISC",
  "version": "1.0.0",
  "scripts": {
    "postversion": "npm publish --access public"
  }
}
```
@vates/event-listeners-manager/.USAGE.md (new file, +50 lines)

> This library is compatible with Node's `EventEmitter` and web browsers' `EventTarget` APIs.

### API

```js
import { EventListenersManager } from '@vates/event-listeners-manager'

const events = new EventListenersManager(emitter)

// adding listeners
events.add('foo', onFoo).add('bar', onBar).add('baz', onBaz)

// removing a specific listener
events.remove('foo', onFoo)

// removing all listeners for a specific event
events.removeAll('foo')

// removing all listeners
events.removeAll()
```

### Typical use case

> Removing all listeners when no longer necessary.

Manually:

```js
const onFoo = () => {}
const onBar = () => {}
const onBaz = () => {}
emitter.on('foo', onFoo).on('bar', onBar).on('baz', onBaz)

// CODE LOGIC

emitter.off('foo', onFoo).off('bar', onBar).off('baz', onBaz)
```

With this library:

```js
const events = new EventListenersManager(emitter)

events.add('foo', () => {}).add('bar', () => {}).add('baz', () => {})

// CODE LOGIC

events.removeAll()
```
@vates/event-listeners-manager/.npmignore (new symbolic link, +1 line)

../../scripts/npmignore
@vates/event-listeners-manager/README.md (new file, +81 lines)

<!-- DO NOT EDIT MANUALLY, THIS FILE HAS BEEN GENERATED -->

# @vates/event-listeners-manager

[](https://npmjs.org/package/@vates/event-listeners-manager)  [](https://bundlephobia.com/result?p=@vates/event-listeners-manager) [](https://npmjs.org/package/@vates/event-listeners-manager)

## Install

Installation of the [npm package](https://npmjs.org/package/@vates/event-listeners-manager):

```
> npm install --save @vates/event-listeners-manager
```

## Usage

> This library is compatible with Node's `EventEmitter` and web browsers' `EventTarget` APIs.

### API

```js
import { EventListenersManager } from '@vates/event-listeners-manager'

const events = new EventListenersManager(emitter)

// adding listeners
events.add('foo', onFoo).add('bar', onBar).add('baz', onBaz)

// removing a specific listener
events.remove('foo', onFoo)

// removing all listeners for a specific event
events.removeAll('foo')

// removing all listeners
events.removeAll()
```

### Typical use case

> Removing all listeners when no longer necessary.

Manually:

```js
const onFoo = () => {}
const onBar = () => {}
const onBaz = () => {}
emitter.on('foo', onFoo).on('bar', onBar).on('baz', onBaz)

// CODE LOGIC

emitter.off('foo', onFoo).off('bar', onBar).off('baz', onBaz)
```

With this library:

```js
const events = new EventListenersManager(emitter)

events.add('foo', () => {}).add('bar', () => {}).add('baz', () => {})

// CODE LOGIC

events.removeAll()
```

## Contributions

Contributions are _very_ welcomed, either on the documentation or on
the code.

You may:

- report any [issue](https://github.com/vatesfr/xen-orchestra/issues)
  you've encountered;
- fork and create a pull request.

## License

[ISC](https://spdx.org/licenses/ISC) © [Vates SAS](https://vates.fr)
@vates/event-listeners-manager/index.js (new file, +56 lines)

```js
'use strict'

exports.EventListenersManager = class EventListenersManager {
  constructor(emitter) {
    this._listeners = new Map()

    this._add = (emitter.addListener || emitter.addEventListener).bind(emitter)
    this._remove = (emitter.removeListener || emitter.removeEventListener).bind(emitter)
  }

  add(type, listener) {
    let listeners = this._listeners.get(type)
    if (listeners === undefined) {
      listeners = new Set()
      this._listeners.set(type, listeners)
    }

    // don't add the same listener multiple times (allowed on Node.js)
    if (!listeners.has(listener)) {
      listeners.add(listener)
      this._add(type, listener)
    }

    return this
  }

  remove(type, listener) {
    const allListeners = this._listeners
    const listeners = allListeners.get(type)
    if (listeners !== undefined && listeners.delete(listener)) {
      this._remove(type, listener)
      if (listeners.size === 0) {
        allListeners.delete(type)
      }
    }

    return this
  }

  removeAll(type) {
    const allListeners = this._listeners
    const remove = this._remove
    const types = type !== undefined ? [type] : allListeners.keys()
    for (const type of types) {
      const listeners = allListeners.get(type)
      if (listeners !== undefined) {
        allListeners.delete(type)
        for (const listener of listeners) {
          remove(type, listener)
        }
      }
    }

    return this
  }
}
```
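
A short sketch against Node's `EventEmitter`, using only the `add`/`removeAll` methods defined above — a single call detaches everything the manager registered, without touching listeners attached directly on the emitter:

```js
'use strict'

const { EventEmitter } = require('events')
const { EventListenersManager } = require('@vates/event-listeners-manager')

const emitter = new EventEmitter()
const events = new EventListenersManager(emitter)

events.add('data', chunk => console.log('data', chunk)).add('error', error => console.error(error))

emitter.emit('data', 'hello') // logged

// detach every listener this manager added
events.removeAll()
emitter.emit('data', 'ignored') // nothing logged
```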
@vates/event-listeners-manager/index.spec.js (new file, +67 lines)

```js
'use strict'

const t = require('tap')
const { EventEmitter } = require('events')

const { EventListenersManager } = require('./')

const noop = Function.prototype

// function spy (impl = Function.prototype) {
//   function spy() {
//     spy.calls.push([Array.from(arguments), this])
//   }
//   spy.calls = []
//   return spy
// }

function assertListeners(t, event, listeners) {
  t.strictSame(t.context.ee.listeners(event), listeners)
}

t.beforeEach(function (t) {
  t.context.ee = new EventEmitter()
  t.context.em = new EventListenersManager(t.context.ee)
})

t.test('.add adds a listener', function (t) {
  t.context.em.add('foo', noop)

  assertListeners(t, 'foo', [noop])

  t.end()
})

t.test('.add does not add a duplicate listener', function (t) {
  t.context.em.add('foo', noop).add('foo', noop)

  assertListeners(t, 'foo', [noop])

  t.end()
})

t.test('.remove removes a listener', function (t) {
  t.context.em.add('foo', noop).remove('foo', noop)

  assertListeners(t, 'foo', [])

  t.end()
})

t.test('.removeAll removes all listeners of a given type', function (t) {
  t.context.em.add('foo', noop).add('bar', noop).removeAll('foo')

  assertListeners(t, 'foo', [])
  assertListeners(t, 'bar', [noop])

  t.end()
})

t.test('.removeAll removes all listeners', function (t) {
  t.context.em.add('foo', noop).add('bar', noop).removeAll()

  assertListeners(t, 'foo', [])
  assertListeners(t, 'bar', [])

  t.end()
})
```
@vates/event-listeners-manager/package.json (new file, +46 lines)

```json
{
  "engines": {
    "node": ">=6"
  },
  "private": false,
  "name": "@vates/event-listeners-manager",
  "descriptions": "Easy way to clean up event listeners",
  "keywords": [
    "add",
    "addEventListener",
    "addListener",
    "browser",
    "clear",
    "DOM",
    "emitter",
    "event",
    "EventEmitter",
    "EventTarget",
    "management",
    "manager",
    "node",
    "remove",
    "removeEventListener",
    "removeListener"
  ],
  "homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@vates/event-listeners-manager",
  "bugs": "https://github.com/vatesfr/xen-orchestra/issues",
  "repository": {
    "directory": "@vates/event-listeners-manager",
    "type": "git",
    "url": "https://github.com/vatesfr/xen-orchestra.git"
  },
  "author": {
    "name": "Vates SAS",
    "url": "https://vates.fr"
  },
  "license": "ISC",
  "version": "1.0.1",
  "scripts": {
    "postversion": "npm publish --access public",
    "test": "tap --branches=72"
  },
  "devDependencies": {
    "tap": "^16.2.0"
  }
}
```
@@ -1,6 +1,9 @@
 ### `readChunk(stream, [size])`

 - returns the next available chunk of data
 - like `stream.read()`, a number of bytes can be specified
-- returns `null` if the stream has ended
+- returns with less data than expected if stream has ended
+- returns `null` if the stream has ended and no data has been read

 ```js
 import { readChunk } from '@vates/read-chunk'

@@ -11,3 +14,13 @@ import { readChunk } from '@vates/read-chunk'
   }
 })()
 ```

+### `readChunkStrict(stream, [size])`
+
+Similar behavior to `readChunk` but throws if the stream ended before the requested data could be read.
+
+```js
+import { readChunkStrict } from '@vates/read-chunk'
+
+const chunk = await readChunkStrict(stream, 1024)
+```
@@ -16,9 +16,12 @@ Installation of the [npm package](https://npmjs.org/package/@vates/read-chunk):

 ## Usage

 ### `readChunk(stream, [size])`

 - returns the next available chunk of data
 - like `stream.read()`, a number of bytes can be specified
-- returns `null` if the stream has ended
+- returns with less data than expected if stream has ended
+- returns `null` if the stream has ended and no data has been read

 ```js
 import { readChunk } from '@vates/read-chunk'

@@ -30,6 +33,16 @@ import { readChunk } from '@vates/read-chunk'
 })()
 ```

+### `readChunkStrict(stream, [size])`
+
+Similar behavior to `readChunk` but throws if the stream ended before the requested data could be read.
+
+```js
+import { readChunkStrict } from '@vates/read-chunk'
+
+const chunk = await readChunkStrict(stream, 1024)
+```
+
 ## Contributions

 Contributions are _very_ welcomed, either on the documentation or on
@@ -30,3 +30,22 @@ const readChunk = (stream, size) =>
     onReadable()
   })
 exports.readChunk = readChunk
+
+exports.readChunkStrict = async function readChunkStrict(stream, size) {
+  const chunk = await readChunk(stream, size)
+  if (chunk === null) {
+    throw new Error('stream has ended without data')
+  }
+
+  if (size !== undefined && chunk.length !== size) {
+    const error = new Error('stream has ended with not enough data')
+    Object.defineProperties(error, {
+      chunk: {
+        value: chunk,
+      },
+    })
+    throw error
+  }
+
+  return chunk
+}
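
The `chunk` property attached via `Object.defineProperties` above lets a caller recover the partial data on short reads. A sketch matching the behavior asserted in the tests that follow:

```js
'use strict'

const { Readable } = require('stream')
const { readChunkStrict } = require('@vates/read-chunk')

async function main() {
  const stream = Readable.from(['foo', 'bar'], { objectMode: false })
  try {
    await readChunkStrict(stream, 10) // only 6 bytes are available
  } catch (error) {
    console.log(error.message) // → stream has ended with not enough data
    console.log(error.chunk) // → <Buffer 66 6f 6f 62 61 72> — the partial data is not lost
  }
}

main().catch(console.error)
```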
@@ -4,7 +4,7 @@

 const { Readable } = require('stream')

-const { readChunk } = require('./')
+const { readChunk, readChunkStrict } = require('./')

 const makeStream = it => Readable.from(it, { objectMode: false })
 makeStream.obj = Readable.from

@@ -43,3 +43,27 @@ describe('readChunk', () => {
     })
   })
 })
+
+const rejectionOf = promise =>
+  promise.then(
+    value => {
+      throw value
+    },
+    error => error
+  )
+
+describe('readChunkStrict', function () {
+  it('throws if stream is empty', async () => {
+    const error = await rejectionOf(readChunkStrict(makeStream([])))
+    expect(error).toBeInstanceOf(Error)
+    expect(error.message).toBe('stream has ended without data')
+    expect(error.chunk).toEqual(undefined)
+  })
+
+  it('throws if stream ends with not enough data', async () => {
+    const error = await rejectionOf(readChunkStrict(makeStream(['foo', 'bar']), 10))
+    expect(error).toBeInstanceOf(Error)
+    expect(error.message).toBe('stream has ended with not enough data')
+    expect(error.chunk).toEqual(Buffer.from('foobar'))
+  })
+})
@@ -19,7 +19,7 @@
     "type": "git",
     "url": "https://github.com/vatesfr/xen-orchestra.git"
   },
-  "version": "0.1.2",
+  "version": "1.0.0",
   "engines": {
     "node": ">=8.10"
   },
@@ -26,7 +26,13 @@ module.exports = async function main(args) {
   await asyncMap(_, async vmDir => {
     vmDir = resolve(vmDir)
     try {
-      await adapter.cleanVm(vmDir, { fixMetadata: fix, remove, merge, onLog: (...args) => console.warn(...args) })
+      await adapter.cleanVm(vmDir, {
+        fixMetadata: fix,
+        remove,
+        merge,
+        logInfo: (...args) => console.log(...args),
+        logWarn: (...args) => console.warn(...args),
+      })
     } catch (error) {
       console.error('adapter.cleanVm', vmDir, error)
     }
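
This change splits the single `onLog` callback into separate `logInfo`/`logWarn` channels. A sketch of how a caller still holding a single-callback logger could adapt to the new shape (`splitLogger` is a hypothetical helper, not part of the package):

```js
'use strict'

// hypothetical adapter: route a legacy single-callback logger to the
// split logInfo/logWarn channels expected by the new cleanVm signature
function splitLogger(onLog) {
  return {
    logInfo: (...args) => onLog('info', ...args),
    logWarn: (...args) => onLog('warn', ...args),
  }
}

const { logInfo, logWarn } = splitLogger((level, ...args) => console.log(`[${level}]`, ...args))
logInfo('merging VHDs', { count: 2 })
logWarn('orphan VHD found', { path: 'vm/disk.vhd' })

// usage sketch, assuming an `adapter` and `vmDir` as in the CLI above:
// await adapter.cleanVm(vmDir, { remove: true, ...splitLogger(myLogger) })
```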
@@ -7,8 +7,8 @@
   "bugs": "https://github.com/vatesfr/xen-orchestra/issues",
   "dependencies": {
     "@xen-orchestra/async-map": "^0.1.2",
-    "@xen-orchestra/backups": "^0.21.0",
-    "@xen-orchestra/fs": "^1.0.0",
+    "@xen-orchestra/backups": "^0.27.4",
+    "@xen-orchestra/fs": "^3.0.0",
     "filenamify": "^4.1.0",
     "getopts": "^2.2.5",
     "lodash": "^4.17.15",

@@ -27,7 +27,7 @@
   "scripts": {
     "postversion": "npm publish --access public"
   },
-  "version": "0.7.0",
+  "version": "0.7.7",
   "license": "AGPL-3.0-or-later",
   "author": {
     "name": "Vates SAS",
@@ -6,7 +6,7 @@ const ignoreErrors = require('promise-toolbox/ignoreErrors')
 const { compileTemplate } = require('@xen-orchestra/template')
 const { limitConcurrency } = require('limit-concurrency-decorator')

-const { extractIdsFromSimplePattern } = require('./_extractIdsFromSimplePattern.js')
+const { extractIdsFromSimplePattern } = require('./extractIdsFromSimplePattern.js')
 const { PoolMetadataBackup } = require('./_PoolMetadataBackup.js')
 const { Task } = require('./Task.js')
 const { VmBackup } = require('./_VmBackup.js')

@@ -24,6 +24,34 @@ const getAdaptersByRemote = adapters => {

 const runTask = (...args) => Task.run(...args).catch(noop) // errors are handled by logs

+const DEFAULT_SETTINGS = {
+  reportWhen: 'failure',
+}
+
+const DEFAULT_VM_SETTINGS = {
+  bypassVdiChainsCheck: false,
+  checkpointSnapshot: false,
+  concurrency: 2,
+  copyRetention: 0,
+  deleteFirst: false,
+  exportRetention: 0,
+  fullInterval: 0,
+  healthCheckSr: undefined,
+  healthCheckVmsWithTags: [],
+  maxMergedDeltasPerRun: 2,
+  offlineBackup: false,
+  offlineSnapshot: false,
+  snapshotRetention: 0,
+  timeout: 0,
+  unconditionalSnapshot: false,
+  vmTimeout: 0,
+}
+
+const DEFAULT_METADATA_SETTINGS = {
+  retentionPoolMetadata: 0,
+  retentionXoMetadata: 0,
+}
+
 exports.Backup = class Backup {
   constructor({ config, getAdapter, getConnectedRecord, job, schedule }) {
     this._config = config

@@ -42,17 +70,22 @@ exports.Backup = class Backup {
       '{job.name}': job.name,
       '{vm.name_label}': vm => vm.name_label,
     })
-  }

-  run() {
-    const type = this._job.type
+    const { type } = job
+    const baseSettings = { ...DEFAULT_SETTINGS }
     if (type === 'backup') {
-      return this._runVmBackup()
+      Object.assign(baseSettings, DEFAULT_VM_SETTINGS, config.defaultSettings, config.vm?.defaultSettings)
+      this.run = this._runVmBackup
     } else if (type === 'metadataBackup') {
-      return this._runMetadataBackup()
+      Object.assign(baseSettings, DEFAULT_METADATA_SETTINGS, config.defaultSettings, config.metadata?.defaultSettings)
+      this.run = this._runMetadataBackup
     } else {
       throw new Error(`No runner for the backup type ${type}`)
     }
+    Object.assign(baseSettings, job.settings[''])
+
+    this._baseSettings = baseSettings
+    this._settings = { ...baseSettings, ...job.settings[schedule.id] }
   }

   async _runMetadataBackup() {

@@ -64,13 +97,6 @@ exports.Backup = class Backup {
     }

     const config = this._config
-    const settings = {
-      ...config.defaultSettings,
-      ...config.metadata.defaultSettings,
-      ...job.settings[''],
-      ...job.settings[schedule.id],
-    }
-
     const poolIds = extractIdsFromSimplePattern(job.pools)
     const isEmptyPools = poolIds.length === 0
     const isXoMetadata = job.xoMetadata !== undefined

@@ -78,6 +104,8 @@ exports.Backup = class Backup {
       throw new Error('no metadata mode found')
     }

+    const settings = this._settings
+
     const { retentionPoolMetadata, retentionXoMetadata } = settings

     if (

@@ -189,14 +217,7 @@ exports.Backup = class Backup {
     const schedule = this._schedule

     const config = this._config
-    const { settings } = job
-    const scheduleSettings = {
-      ...config.defaultSettings,
-      ...config.vm.defaultSettings,
-      ...settings[''],
-      ...settings[schedule.id],
-    }
-
+    const settings = this._settings
     await Disposable.use(
       Disposable.all(
         extractIdsFromSimplePattern(job.srs).map(id =>

@@ -224,14 +245,15 @@ exports.Backup = class Backup {
           })
         )
       ),
-      async (srs, remoteAdapters) => {
+      () => (settings.healthCheckSr !== undefined ? this._getRecord('SR', settings.healthCheckSr) : undefined),
+      async (srs, remoteAdapters, healthCheckSr) => {
         // remove adapters that failed (already handled)
         remoteAdapters = remoteAdapters.filter(_ => _ !== undefined)

         // remove srs that failed (already handled)
         srs = srs.filter(_ => _ !== undefined)

-        if (remoteAdapters.length === 0 && srs.length === 0 && scheduleSettings.snapshotRetention === 0) {
+        if (remoteAdapters.length === 0 && srs.length === 0 && settings.snapshotRetention === 0) {
           return
         }

@@ -241,23 +263,27 @@ exports.Backup = class Backup {
         remoteAdapters = getAdaptersByRemote(remoteAdapters)

+        const allSettings = this._job.settings
+        const baseSettings = this._baseSettings
+
         const handleVm = vmUuid =>
           runTask({ name: 'backup VM', data: { type: 'VM', id: vmUuid } }, () =>
             Disposable.use(this._getRecord('VM', vmUuid), vm =>
               new VmBackup({
+                baseSettings,
                 config,
                 getSnapshotNameLabel,
+                healthCheckSr,
                 job,
                 // remotes,
                 remoteAdapters,
                 schedule,
-                settings: { ...scheduleSettings, ...settings[vmUuid] },
+                settings: { ...settings, ...allSettings[vm.uuid] },
                 srs,
                 vm,
               }).run()
             )
           )
-        const { concurrency } = scheduleSettings
+        const { concurrency } = settings
        await asyncMapSettled(vmIds, concurrency === 0 ? handleVm : limitConcurrency(concurrency)(handleVm))
      }
    )
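
The refactor above computes one settings chain in the constructor instead of rebuilding it per run. A standalone illustration of the precedence this produces, mirroring the `Object.assign` chain (later sources win; all values here are made up):

```js
'use strict'

const DEFAULT_SETTINGS = { reportWhen: 'failure' }
const DEFAULT_VM_SETTINGS = { concurrency: 2, snapshotRetention: 0 }

const config = { defaultSettings: { concurrency: 4 }, vm: { defaultSettings: {} } }
const job = { settings: { '': { reportWhen: 'always' }, 'schedule-1': { snapshotRetention: 3 } } }

// defaults < type defaults < config defaults < job-wide settings < per-schedule settings
const baseSettings = Object.assign(
  {},
  DEFAULT_SETTINGS,
  DEFAULT_VM_SETTINGS,
  config.defaultSettings,
  config.vm?.defaultSettings,
  job.settings['']
)
const settings = { ...baseSettings, ...job.settings['schedule-1'] }

console.log(settings)
// → { reportWhen: 'always', concurrency: 4, snapshotRetention: 3 }
```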
@xen-orchestra/backups/HealthCheckVmBackup.js (new file, +64 lines)

```js
'use strict'

const { Task } = require('./Task')

exports.HealthCheckVmBackup = class HealthCheckVmBackup {
  #xapi
  #restoredVm

  constructor({ restoredVm, xapi }) {
    this.#restoredVm = restoredVm
    this.#xapi = xapi
  }

  async run() {
    return Task.run(
      {
        name: 'vmstart',
      },
      async () => {
        let restoredVm = this.#restoredVm
        const xapi = this.#xapi
        const restoredId = restoredVm.uuid

        // remove vifs
        await Promise.all(restoredVm.$VIFs.map(vif => xapi.callAsync('VIF.destroy', vif.$ref)))

        const start = new Date()
        // start Vm

        await xapi.callAsync(
          'VM.start',
          restoredVm.$ref,
          false, // Start paused?
          false // Skip pre-boot checks?
        )
        const started = new Date()
        const timeout = 10 * 60 * 1000
        const startDuration = started - start

        let remainingTimeout = timeout - startDuration

        if (remainingTimeout < 0) {
          throw new Error(`VM ${restoredId} not started after ${timeout / 1000} second`)
        }

        // wait for the 'Running' event to be really stored in local xapi object cache
        restoredVm = await xapi.waitObjectState(restoredVm.$ref, vm => vm.power_state === 'Running', {
          timeout: remainingTimeout,
        })

        const running = new Date()
        remainingTimeout -= running - started

        if (remainingTimeout < 0) {
          throw new Error(`local xapi did not get Runnig state for VM ${restoredId} after ${timeout / 1000} second`)
        }
        // wait for the guest tool version to be defined
        await xapi.waitObjectState(restoredVm.guest_metrics, gm => gm?.PV_drivers_version?.major !== undefined, {
          timeout: remainingTimeout,
        })
      }
    )
  }
}
```
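
The health check above spends a single ten-minute budget across consecutive waits (start, reach `Running`, report guest tools). A generic, self-contained sketch of that budgeting pattern, independent of the XAPI objects used above:

```js
'use strict'

// one overall timeout is consumed by successive asynchronous steps;
// each step is told how much of the budget remains
async function withBudget(totalMs, steps) {
  let remaining = totalMs
  for (const step of steps) {
    if (remaining < 0) {
      throw new Error(`budget of ${totalMs / 1000}s exhausted`)
    }
    const start = Date.now()
    await step(remaining)
    remaining -= Date.now() - start
  }
}

// usage sketch with illustrative steps
withBudget(10 * 60 * 1000, [
  async remaining => console.log('start VM, up to', remaining, 'ms'),
  async remaining => console.log('wait for Running, up to', remaining, 'ms'),
]).catch(console.error)
```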
@@ -1,6 +1,7 @@
 'use strict'

 const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
+const { synchronized } = require('decorator-synchronized')
 const Disposable = require('promise-toolbox/Disposable')
 const fromCallback = require('promise-toolbox/fromCallback')
 const fromEvent = require('promise-toolbox/fromEvent')

@@ -9,14 +10,15 @@ const groupBy = require('lodash/groupBy.js')
 const pickBy = require('lodash/pickBy.js')
 const { dirname, join, normalize, resolve } = require('path')
 const { createLogger } = require('@xen-orchestra/log')
-const { Constants, createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } = require('vhd-lib')
+const { createVhdDirectoryFromStream, openVhd, VhdAbstract, VhdDirectory, VhdSynthetic } = require('vhd-lib')
 const { deduped } = require('@vates/disposable/deduped.js')
 const { decorateMethodsWith } = require('@vates/decorate-with')
 const { compose } = require('@vates/compose')
 const { execFile } = require('child_process')
-const { readdir, stat } = require('fs-extra')
+const { readdir, lstat } = require('fs-extra')
 const { v4: uuidv4 } = require('uuid')
 const { ZipFile } = require('yazl')
 const zlib = require('zlib')

 const { BACKUP_DIR } = require('./_getVmBackupDir.js')
 const { cleanVm } = require('./_cleanVm.js')

@@ -45,13 +47,12 @@ const resolveSubpath = (root, path) => resolve(root, `.${resolve('/', path)}`)
 const RE_VHDI = /^vhdi(\d+)$/

 async function addDirectory(files, realPath, metadataPath) {
-  try {
-    const subFiles = await readdir(realPath)
-    await asyncMap(subFiles, file => addDirectory(files, realPath + '/' + file, metadataPath + '/' + file))
-  } catch (error) {
-    if (error == null || error.code !== 'ENOTDIR') {
-      throw error
-    }
+  const stats = await lstat(realPath)
+  if (stats.isDirectory()) {
+    await asyncMap(await readdir(realPath), file =>
+      addDirectory(files, realPath + '/' + file, metadataPath + '/' + file)
+    )
+  } else if (stats.isFile()) {
     files.push({
       realPath,
       metadataPath,

@@ -78,6 +79,7 @@ class RemoteAdapter {
     this._dirMode = dirMode
     this._handler = handler
     this._vhdDirectoryCompression = vhdDirectoryCompression
+    this._readCacheListVmBackups = synchronized.withKey()(this._readCacheListVmBackups)
   }

   get handler() {

@@ -261,7 +263,8 @@ class RemoteAdapter {
   }

   async deleteVmBackups(files) {
-    const { delta, full, ...others } = groupBy(await asyncMap(files, file => this.readVmBackupMetadata(file)), 'mode')
+    const metadatas = await asyncMap(files, file => this.readVmBackupMetadata(file))
+    const { delta, full, ...others } = groupBy(metadatas, 'mode')

     const unsupportedModes = Object.keys(others)
     if (unsupportedModes.length !== 0) {

@@ -276,8 +279,11 @@ class RemoteAdapter {
     const dirs = new Set(files.map(file => dirname(file)))
     for (const dir of dirs) {
       // don't merge in main process, unused VHDs will be merged in the next backup run
-      await this.cleanVm(dir, { remove: true, onLog: warn })
+      await this.cleanVm(dir, { remove: true, logWarn: warn })
     }
+
+    const dedupedVmUuid = new Set(metadatas.map(_ => _.vm.uuid))
+    await asyncMap(dedupedVmUuid, vmUuid => this.invalidateVmBackupListCache(vmUuid))
   }

   #getCompressionType() {

@@ -285,7 +291,7 @@ class RemoteAdapter {
   }

   #useVhdDirectory() {
-    return this.handler.type === 's3'
+    return this.handler.useVhdDirectory()
   }

   #useAlias() {

@@ -376,8 +382,12 @@ class RemoteAdapter {
     const entriesMap = {}
     await asyncMap(await readdir(path), async name => {
       try {
-        const stats = await stat(`${path}/${name}`)
-        entriesMap[stats.isDirectory() ? `${name}/` : name] = {}
+        const stats = await lstat(`${path}/${name}`)
+        if (stats.isDirectory()) {
+          entriesMap[name + '/'] = {}
+        } else if (stats.isFile()) {
+          entriesMap[name] = {}
+        }
       } catch (error) {
         if (error == null || error.code !== 'ENOENT') {
           throw error

@@ -448,34 +458,94 @@ class RemoteAdapter {
     return backupsByPool
   }

-  async listVmBackups(vmUuid, predicate) {
+  async invalidateVmBackupListCache(vmUuid) {
+    await this.handler.unlink(`${BACKUP_DIR}/${vmUuid}/cache.json.gz`)
+  }
+
+  async #getCachabledDataListVmBackups(dir) {
     const handler = this._handler
-    const backups = []
+    const backups = {}

     try {
-      const files = await handler.list(`${BACKUP_DIR}/${vmUuid}`, {
+      const files = await handler.list(dir, {
         filter: isMetadataFile,
         prependDir: true,
       })
       await asyncMap(files, async file => {
         try {
           const metadata = await this.readVmBackupMetadata(file)
-          if (predicate === undefined || predicate(metadata)) {
-            // inject an id usable by importVmBackupNg()
-            metadata.id = metadata._filename
-
-            backups.push(metadata)
-          }
+          // inject an id usable by importVmBackupNg()
+          metadata.id = metadata._filename
+          backups[file] = metadata
         } catch (error) {
-          warn(`listVmBackups ${file}`, { error })
+          warn(`can't read vm backup metadata`, { error, file, dir })
         }
       })
       return backups
     } catch (error) {
       let code
       if (error == null || ((code = error.code) !== 'ENOENT' && code !== 'ENOTDIR')) {
         throw error
       }
     }
   }

+  // use _ to mark this method as private by convention
+  // since we decorate it with synchronized.withKey in the constructor
+  // and # function are not writeable.
+  //
+  // read the list of backup of a Vm from cache
+  // if cache is missing or broken => regenerate it and return
+
+  async _readCacheListVmBackups(vmUuid) {
+    const dir = `${BACKUP_DIR}/${vmUuid}`
+    const path = `${dir}/cache.json.gz`
+
+    try {
+      const gzipped = await this.handler.readFile(path)
+      const text = await fromCallback(zlib.gunzip, gzipped)
+      return JSON.parse(text)
+    } catch (error) {
+      if (error.code !== 'ENOENT') {
+        warn('Cache file was unreadable', { vmUuid, error })
+      }
+    }
+
+    // nothing cached, or cache unreadable => regenerate it
+    const backups = await this.#getCachabledDataListVmBackups(dir)
+    if (backups === undefined) {
+      return
+    }
+
+    // detached async action, will not reject
+    this.#writeVmBackupsCache(path, backups)
+
+    return backups
+  }
+
+  async #writeVmBackupsCache(cacheFile, backups) {
+    try {
+      const text = JSON.stringify(backups)
+      const zipped = await fromCallback(zlib.gzip, text)
+      await this.handler.writeFile(cacheFile, zipped, { flags: 'w' })
+    } catch (error) {
+      warn('writeVmBackupsCache', { cacheFile, error })
+    }
+  }
+
+  async listVmBackups(vmUuid, predicate) {
+    const backups = []
+    const cached = await this._readCacheListVmBackups(vmUuid)
+
+    if (cached === undefined) {
+      return []
+    }
+
+    Object.values(cached).forEach(metadata => {
+      if (predicate === undefined || predicate(metadata)) {
+        backups.push(metadata)
+      }
+    })
+
+    return backups.sort(compareTimestamp)
+  }

@@ -531,46 +601,27 @@ class RemoteAdapter {
     })
   }

-  async _createSyntheticStream(handler, paths) {
-    let disposableVhds = []
-
-    // if it's a path : open all hierarchy of parent
-    if (typeof paths === 'string') {
-      let vhd
-      let vhdPath = paths
-      do {
-        const disposable = await openVhd(handler, vhdPath)
-        vhd = disposable.value
-        disposableVhds.push(disposable)
-        vhdPath = resolveRelativeFromFile(vhdPath, vhd.header.parentUnicodeName)
-      } while (vhd.footer.diskType !== Constants.DISK_TYPES.DYNAMIC)
-    } else {
-      // only open the list of path given
-      disposableVhds = paths.map(path => openVhd(handler, path))
-    }
-
-    // open the hierarchy of ancestors until we find a full one
+  async _createSyntheticStream(handler, path) {
+    const disposableSynthetic = await VhdSynthetic.fromVhdChain(handler, path)
     // I don't want the vhds to be disposed on return
     // but only when the stream is done ( or failed )
-    const disposables = await Disposable.all(disposableVhds)
-    const vhds = disposables.value

     let disposed = false
     const disposeOnce = async () => {
       if (!disposed) {
         disposed = true

         try {
-          await disposables.dispose()
+          await disposableSynthetic.dispose()
         } catch (error) {
-          warn('_createSyntheticStream: failed to dispose VHDs', { error })
+          warn('openVhd: failed to dispose VHDs', { error })
         }
       }
     }

-    const synthetic = new VhdSynthetic(vhds)
-    await synthetic.readHeaderAndFooter()
+    const synthetic = disposableSynthetic.value
     await synthetic.readBlockAllocationTable()
     const stream = await synthetic.stream()
-
     stream.on('end', disposeOnce)
     stream.on('close', disposeOnce)
     stream.on('error', disposeOnce)

@@ -603,7 +654,10 @@ class RemoteAdapter {
   }

   async readVmBackupMetadata(path) {
-    return Object.defineProperty(JSON.parse(await this._handler.readFile(path)), '_filename', { value: path })
+    // _filename is a private field used to compute the backup id
+    //
+    // it's enumerable to make it cacheable
+    return { ...JSON.parse(await this._handler.readFile(path)), _filename: path }
  }
 }
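
The new backup-list cache above stores gzipped JSON and treats a missing file as a cache miss to be regenerated. A self-contained sketch of the same read-through pattern using only Node built-ins (paths and data shapes are illustrative, not the RemoteAdapter API):

```js
'use strict'

const { promisify } = require('util')
const zlib = require('zlib')
const fs = require('fs/promises')

const gzip = promisify(zlib.gzip)
const gunzip = promisify(zlib.gunzip)

// read-through cache: missing or unreadable file → undefined, caller regenerates
async function readJsonGz(path) {
  try {
    return JSON.parse(await gunzip(await fs.readFile(path)))
  } catch (error) {
    if (error.code !== 'ENOENT') {
      console.warn('cache file was unreadable', { path, error })
    }
  }
}

async function writeJsonGz(path, data) {
  await fs.writeFile(path, await gzip(JSON.stringify(data)))
}

async function main() {
  const path = '/tmp/cache.json.gz'
  if ((await readJsonGz(path)) === undefined) {
    await writeJsonGz(path, { regenerated: true })
  }
  console.log(await readJsonGz(path))
}

main().catch(console.error)
```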
@@ -3,8 +3,10 @@
 const CancelToken = require('promise-toolbox/CancelToken')
 const Zone = require('node-zone')

-const logAfterEnd = () => {
-  throw new Error('task has already ended')
+const logAfterEnd = log => {
+  const error = new Error('task has already ended')
+  error.log = log
+  throw error
 }

 const noop = Function.prototype
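
The change above attaches the offending log entry to the thrown error so a catcher can inspect it rather than only getting a message. A minimal demonstration of the pattern, reusing the function from the hunk:

```js
'use strict'

function logAfterEnd(log) {
  const error = new Error('task has already ended')
  error.log = log // attach the payload as context for the catcher
  throw error
}

try {
  logAfterEnd({ level: 'info', message: 'late entry' })
} catch (error) {
  console.log(error.message, error.log) // → task has already ended { level: 'info', ... }
}
```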
@@ -45,7 +45,18 @@ const forkDeltaExport = deltaExport =>
   })

 class VmBackup {
-  constructor({ config, getSnapshotNameLabel, job, remoteAdapters, remotes, schedule, settings, srs, vm }) {
+  constructor({
+    config,
+    getSnapshotNameLabel,
+    healthCheckSr,
+    job,
+    remoteAdapters,
+    remotes,
+    schedule,
+    settings,
+    srs,
+    vm,
+  }) {
     if (vm.other_config['xo:backup:job'] === job.id && 'start' in vm.blocked_operations) {
       // don't match replicated VMs created by this very job otherwise they
       // will be replicated again and again

@@ -55,7 +66,6 @@ class VmBackup {
     this.config = config
     this.job = job
     this.remoteAdapters = remoteAdapters
-    this.remotes = remotes
     this.scheduleId = schedule.id
     this.timestamp = undefined

@@ -69,6 +79,7 @@ class VmBackup {
     this._fullVdisRequired = undefined
     this._getSnapshotNameLabel = getSnapshotNameLabel
     this._isDelta = job.mode === 'delta'
+    this._healthCheckSr = healthCheckSr
     this._jobId = job.id
     this._jobSnapshots = undefined
     this._xapi = vm.$xapi

@@ -95,7 +106,6 @@ class VmBackup {
       : [FullBackupWriter, FullReplicationWriter]

     const allSettings = job.settings
-
     Object.keys(remoteAdapters).forEach(remoteId => {
       const targetSettings = {
         ...settings,

@@ -118,35 +128,49 @@ class VmBackup {
   }

   // calls fn for each function, warns of any errors, and throws only if there are no writers left
-  async _callWriters(fn, warnMessage, parallel = true) {
+  async _callWriters(fn, step, parallel = true) {
     const writers = this._writers
     const n = writers.size
     if (n === 0) {
       return
     }
-    if (n === 1) {
-      const [writer] = writers

+    async function callWriter(writer) {
+      const { name } = writer.constructor
       try {
+        debug('writer step starting', { step, writer: name })
         await fn(writer)
+        debug('writer step succeeded', { duration: step, writer: name })
       } catch (error) {
         writers.delete(writer)
+
+        warn('writer step failed', { error, step, writer: name })
+
+        // these two steps are the only one that are not already in their own sub tasks
+        if (step === 'writer.checkBaseVdis()' || step === 'writer.beforeBackup()') {
+          Task.warning(
+            `the writer ${name} has failed the step ${step} with error ${error.message}. It won't be used anymore in this job execution.`
+          )
+        }
+
         throw error
       }
-      return
+    }
+    if (n === 1) {
+      const [writer] = writers
+      return callWriter(writer)
     }

     const errors = []
     await (parallel ? asyncMap : asyncEach)(writers, async function (writer) {
       try {
-        await fn(writer)
+        await callWriter(writer)
       } catch (error) {
         errors.push(error)
-        this.delete(writer)
-        warn(warnMessage, { error, writer: writer.constructor.name })
       }
     })
     if (writers.size === 0) {
-      throw new AggregateError(errors, 'all targets have failed, step: ' + warnMessage)
+      throw new AggregateError(errors, 'all targets have failed, step: ' + step)
     }
   }

@@ -173,7 +197,10 @@ class VmBackup {
     const settings = this._settings

     const doSnapshot =
-      this._isDelta || (!settings.offlineBackup && vm.power_state === 'Running') || settings.snapshotRetention !== 0
+      settings.unconditionalSnapshot ||
+      this._isDelta ||
+      (!settings.offlineBackup && vm.power_state === 'Running') ||
+      settings.snapshotRetention !== 0
     if (doSnapshot) {
       await Task.run({ name: 'snapshot' }, async () => {
         if (!settings.bypassVdiChainsCheck) {

@@ -181,7 +208,9 @@ class VmBackup {
         }

         const snapshotRef = await vm[settings.checkpointSnapshot ? '$checkpoint' : '$snapshot']({
+          ignoreNobakVdis: true,
           name_label: this._getSnapshotNameLabel(vm),
+          unplugVusbs: true,
         })
         this.timestamp = Date.now()

@@ -303,22 +332,17 @@ class VmBackup {
   }

   async _removeUnusedSnapshots() {
-    const jobSettings = this.job.settings
+    const allSettings = this.job.settings
+    const baseSettings = this._baseSettings
     const baseVmRef = this._baseVm?.$ref
-    const { config } = this
-    const baseSettings = {
-      ...config.defaultSettings,
-      ...config.metadata.defaultSettings,
-      ...jobSettings[''],
-    }

     const snapshotsPerSchedule = groupBy(this._jobSnapshots, _ => _.other_config['xo:backup:schedule'])
     const xapi = this._xapi
     await asyncMap(Object.entries(snapshotsPerSchedule), ([scheduleId, snapshots]) => {
       const settings = {
         ...baseSettings,
-        ...jobSettings[scheduleId],
-        ...jobSettings[this.vm.uuid],
+        ...allSettings[scheduleId],
+        ...allSettings[this.vm.uuid],
       }
       return asyncMap(getOldEntries(settings.snapshotRetention, snapshots), ({ $ref }) => {
         if ($ref !== baseVmRef) {

@@ -397,6 +421,24 @@ class VmBackup {
     this._fullVdisRequired = fullVdisRequired
   }

+  async _healthCheck() {
+    const settings = this._settings
+
+    if (this._healthCheckSr === undefined) {
+      return
+    }
+
+    // check if current VM has tags
+    const { tags } = this.vm
+    const intersect = settings.healthCheckVmsWithTags.some(t => tags.includes(t))
+
+    if (settings.healthCheckVmsWithTags.length !== 0 && !intersect) {
+      return
+    }
+
+    await this._callWriters(writer => writer.healthCheck(this._healthCheckSr), 'writer.healthCheck()')
+  }
+
   async run($defer) {
     const settings = this._settings
     assert(

@@ -406,7 +448,9 @@ class VmBackup {
     await this._callWriters(async writer => {
       await writer.beforeBackup()
-      $defer(() => writer.afterBackup())
+      $defer(async () => {
+        await writer.afterBackup()
+      })
     }, 'writer.beforeBackup()')

     await this._fetchJobSnapshots()

@@ -442,6 +486,7 @@ class VmBackup {
       await this._fetchJobSnapshots()
       await this._removeUnusedSnapshots()
     }
+    await this._healthCheck()
  }
 }
 exports.VmBackup = VmBackup
@@ -4,6 +4,8 @@ require('@xen-orchestra/log/configure.js').catchGlobalErrors(
|
||||
require('@xen-orchestra/log').createLogger('xo:backups:worker')
|
||||
)
|
||||
|
||||
require('@vates/cached-dns.lookup').createCachedLookup().patchGlobal()
|
||||
|
||||
const Disposable = require('promise-toolbox/Disposable')
|
||||
const ignoreErrors = require('promise-toolbox/ignoreErrors')
|
||||
const { compose } = require('@vates/compose')
@@ -5,9 +5,9 @@
const rimraf = require('rimraf')
const tmp = require('tmp')
const fs = require('fs-extra')
const uuid = require('uuid')
const { getHandler } = require('@xen-orchestra/fs')
const { pFromCallback } = require('promise-toolbox')
const crypto = require('crypto')
const { RemoteAdapter } = require('./RemoteAdapter')
const { VHDFOOTER, VHDHEADER } = require('./tests.fixtures.js')
const { VhdFile, Constants, VhdDirectory, VhdAbstract } = require('vhd-lib')

@@ -34,7 +34,8 @@ afterEach(async () => {
await handler.forget()
})

const uniqueId = () => crypto.randomBytes(16).toString('hex')
const uniqueId = () => uuid.v1()
const uniqueIdBuffer = () => uuid.v1({}, Buffer.alloc(16))

async function generateVhd(path, opts = {}) {
let vhd

@@ -53,10 +54,9 @@ async function generateVhd(path, opts = {}) {
}

vhd.header = { ...VHDHEADER, ...opts.header }
vhd.footer = { ...VHDFOOTER, ...opts.footer }
vhd.footer.uuid = Buffer.from(crypto.randomBytes(16))
vhd.footer = { ...VHDFOOTER, ...opts.footer, uuid: uniqueIdBuffer() }

if (vhd.header.parentUnicodeName) {
if (vhd.header.parentUuid) {
vhd.footer.diskType = Constants.DISK_TYPES.DIFFERENCING
} else {
vhd.footer.diskType = Constants.DISK_TYPES.DYNAMIC

@@ -78,48 +78,53 @@ test('It remove broken vhd', async () => {
await handler.writeFile(`${basePath}/notReallyAVhd.vhd`, 'I AM NOT A VHD')
expect((await handler.list(basePath)).length).toEqual(1)
let loggued = ''
const onLog = message => {
const logInfo = message => {
loggued += message
}
await adapter.cleanVm('/', { remove: false, onLog })
expect(loggued).toEqual(`error while checking the VHD with path /${basePath}/notReallyAVhd.vhd`)
await adapter.cleanVm('/', { remove: false, logInfo, logWarn: logInfo, lock: false })
expect(loggued).toEqual(`VHD check error`)
// not removed
expect((await handler.list(basePath)).length).toEqual(1)
// really remove it
await adapter.cleanVm('/', { remove: true, onLog })
await adapter.cleanVm('/', { remove: true, logInfo, logWarn: () => {}, lock: false })
expect((await handler.list(basePath)).length).toEqual(0)
})

test('it remove vhd with missing or multiple ancestors', async () => {
// one with a broken parent
// one with a broken parent, should be deleted
await generateVhd(`${basePath}/abandonned.vhd`, {
header: {
parentUnicodeName: 'gone.vhd',
parentUid: Buffer.from(crypto.randomBytes(16)),
parentUuid: uniqueIdBuffer(),
},
})

// one orphan, which is a full vhd, no parent
// one orphan, which is a full vhd, no parent : should stay
const orphan = await generateVhd(`${basePath}/orphan.vhd`)
// a child to the orphan
// a child to the orphan in the metadata : should stay
await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
parentUuid: orphan.footer.uuid,
},
})

await handler.writeFile(
`metadata.json`,
JSON.stringify({
mode: 'delta',
vhds: [`${basePath}/child.vhd`, `${basePath}/abandonned.vhd`],
}),
{ flags: 'w' }
)
// clean
let loggued = ''
const onLog = message => {
const logInfo = message => {
loggued += message + '\n'
}
await adapter.cleanVm('/', { remove: true, onLog })
await adapter.cleanVm('/', { remove: true, logInfo, logWarn: logInfo, lock: false })

const deletedOrphanVhd = loggued.match(/deleting orphan VHD/g) || []
expect(deletedOrphanVhd.length).toEqual(1) // only one vhd should have been deleted
const deletedAbandonnedVhd = loggued.match(/abandonned.vhd is missing/g) || []
expect(deletedAbandonnedVhd.length).toEqual(1) // and it must be abandonned.vhd

// we don't test the files on disk, since they will all be marked as unused and deleted without a metadata.json file
})

@@ -147,19 +152,17 @@ test('it remove backup meta data referencing a missing vhd in delta backup', asy
await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
parentUuid: orphan.footer.uuid,
},
})

let loggued = ''
const onLog = message => {
const logInfo = message => {
loggued += message + '\n'
}
await adapter.cleanVm('/', { remove: true, onLog })
let matched = loggued.match(/deleting unused VHD /g) || []
await adapter.cleanVm('/', { remove: true, logInfo, logWarn: logInfo, lock: false })
let matched = loggued.match(/deleting unused VHD/g) || []
expect(matched.length).toEqual(1) // only one vhd should have been deleted
matched = loggued.match(/abandonned.vhd is unused/g) || []
expect(matched.length).toEqual(1) // and it must be abandonned.vhd

// a missing vhd cause clean to remove all vhds
await handler.writeFile(

@@ -176,8 +179,8 @@ test('it remove backup meta data referencing a missing vhd in delta backup', asy
{ flags: 'w' }
)
loggued = ''
await adapter.cleanVm('/', { remove: true, onLog })
matched = loggued.match(/deleting unused VHD /g) || []
await adapter.cleanVm('/', { remove: true, logInfo, logWarn: () => {}, lock: false })
matched = loggued.match(/deleting unused VHD/g) || []
expect(matched.length).toEqual(2) // all vhds (orphan and child) should have been deleted
})

@@ -201,30 +204,28 @@ test('it merges delta of non destroyed chain', async () => {
const child = await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
parentUuid: orphan.footer.uuid,
},
})
// a grand child
await generateVhd(`${basePath}/grandchild.vhd`, {
header: {
parentUnicodeName: 'child.vhd',
parentUid: child.footer.uuid,
parentUuid: child.footer.uuid,
},
})

let loggued = []
const onLog = message => {
const logInfo = message => {
loggued.push(message)
}
await adapter.cleanVm('/', { remove: true, onLog })
expect(loggued[0]).toEqual(`the parent /${basePath}/orphan.vhd of the child /${basePath}/child.vhd is unused`)
expect(loggued[1]).toEqual(`incorrect size in metadata: 12000 instead of 209920`)
await adapter.cleanVm('/', { remove: true, logInfo, logWarn: logInfo, lock: false })
expect(loggued[0]).toEqual(`incorrect backup size in metadata`)

loggued = []
await adapter.cleanVm('/', { remove: true, merge: true, onLog })
const [unused, merging] = loggued
expect(unused).toEqual(`the parent /${basePath}/orphan.vhd of the child /${basePath}/child.vhd is unused`)
expect(merging).toEqual(`merging /${basePath}/child.vhd into /${basePath}/orphan.vhd`)
await adapter.cleanVm('/', { remove: true, merge: true, logInfo, logWarn: () => {}, lock: false })
const [merging] = loggued
expect(merging).toEqual(`merging VHD chain`)

const metadata = JSON.parse(await handler.readFile(`metadata.json`))
// size should be the size of children + grand children after the merge

@@ -254,7 +255,7 @@ test('it finish unterminated merge ', async () => {
const child = await generateVhd(`${basePath}/child.vhd`, {
header: {
parentUnicodeName: 'orphan.vhd',
parentUid: orphan.footer.uuid,
parentUuid: orphan.footer.uuid,
},
})
// a merge in progress file

@@ -270,7 +271,7 @@ test('it finish unterminated merge ', async () => {
})
)

await adapter.cleanVm('/', { remove: true, merge: true })
await adapter.cleanVm('/', { remove: true, merge: true, logWarn: () => {}, lock: false })
// merging is already tested in vhd-lib, don't retest it here (and these vhds are as empty as my stomach at 12h12)

// only check deletion

@@ -310,7 +311,7 @@ describe('tests multiple combination ', () => {
mode: vhdMode,
header: {
parentUnicodeName: 'gone.vhd',
parentUid: crypto.randomBytes(16),
parentUuid: uniqueIdBuffer(),
},
})

@@ -324,7 +325,7 @@ describe('tests multiple combination ', () => {
mode: vhdMode,
header: {
parentUnicodeName: 'ancestor.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: ancestor.footer.uuid,
parentUuid: ancestor.footer.uuid,
},
})
// a grand child vhd in metadata

@@ -333,7 +334,7 @@ describe('tests multiple combination ', () => {
mode: vhdMode,
header: {
parentUnicodeName: 'child.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: child.footer.uuid,
parentUuid: child.footer.uuid,
},
})

@@ -348,7 +349,7 @@ describe('tests multiple combination ', () => {
mode: vhdMode,
header: {
parentUnicodeName: 'cleanAncestor.vhd' + (useAlias ? '.alias.vhd' : ''),
parentUid: cleanAncestor.footer.uuid,
parentUuid: cleanAncestor.footer.uuid,
},
})

@@ -377,7 +378,7 @@ describe('tests multiple combination ', () => {
})
)

await adapter.cleanVm('/', { remove: true, merge: true })
await adapter.cleanVm('/', { remove: true, merge: true, logWarn: () => {}, lock: false })

const metadata = JSON.parse(await handler.readFile(`metadata.json`))
// size should be the size of children + grand children + clean after the merge

@@ -413,7 +414,7 @@ describe('tests multiple combination ', () => {
test('it cleans orphan merge states ', async () => {
await handler.writeFile(`${basePath}/.orphan.vhd.merge.json`, '')

await adapter.cleanVm('/', { remove: true })
await adapter.cleanVm('/', { remove: true, logWarn: () => {}, lock: false })

expect(await handler.list(basePath)).toEqual([])
})

@@ -428,7 +429,11 @@ test('check Aliases should work alone', async () => {

await generateVhd(`vhds/data/missingalias.vhd`)

await checkAliases(['vhds/missingData.alias.vhd', 'vhds/ok.alias.vhd'], 'vhds/data', { remove: true, handler })
await checkAliases(['vhds/missingData.alias.vhd', 'vhds/ok.alias.vhd'], 'vhds/data', {
remove: true,
handler,
logWarn: () => {},
})

// only ok has survived
const alias = (await handler.list('vhds')).filter(f => f.endsWith('.vhd'))
@@ -1,22 +1,27 @@
'use strict'

const assert = require('assert')
const sum = require('lodash/sum')
const UUID = require('uuid')
const { asyncMap } = require('@xen-orchestra/async-map')
const { Constants, mergeVhd, openVhd, VhdAbstract, VhdFile } = require('vhd-lib')
const { Constants, openVhd, VhdAbstract, VhdFile } = require('vhd-lib')
const { isVhdAlias, resolveVhdAlias } = require('vhd-lib/aliases')
const { dirname, resolve } = require('path')
const { DISK_TYPES } = Constants
const { isMetadataFile, isVhdFile, isXvaFile, isXvaSumFile } = require('./_backupType.js')
const { limitConcurrency } = require('limit-concurrency-decorator')
const { mergeVhdChain } = require('vhd-lib/merge')

const { Task } = require('./Task.js')
const { Disposable } = require('promise-toolbox')
const handlerPath = require('@xen-orchestra/fs/path')

// checking the size of a vhd directory is costly
// 1 HTTP query per 1000 blocks
// we only check size if all the vhds are VhdFiles
function shouldComputeVhdsSize(vhds) {
function shouldComputeVhdsSize(handler, vhds) {
if (handler.isEncrypted) {
return false
}
return vhds.every(vhd => vhd instanceof VhdFile)
}

@@ -24,86 +29,48 @@ const computeVhdsSize = (handler, vhdPaths) =>
Disposable.use(
vhdPaths.map(vhdPath => openVhd(handler, vhdPath)),
async vhds => {
if (shouldComputeVhdsSize(vhds)) {
if (shouldComputeVhdsSize(handler, vhds)) {
const sizes = await asyncMap(vhds, vhd => vhd.getSize())
return sum(sizes)
}
}
)

// chain is an array of VHDs from child to parent
//
// the whole chain will be merged into parent, parent will be renamed to child
// and all the others will deleted
async function mergeVhdChain(chain, { handler, onLog, remove, merge }) {
assert(chain.length >= 2)

let child = chain[0]
const parent = chain[chain.length - 1]
const children = chain.slice(0, -1).reverse()

chain
.slice(1)
.reverse()
.forEach(parent => {
onLog(`the parent ${parent} of the child ${child} is unused`)
})

// chain is [ ancestor, child_1, ..., child_n ]
async function _mergeVhdChain(handler, chain, { logInfo, remove, merge }) {
if (merge) {
// `mergeVhd` does not work with a stream, either
// - make it accept a stream
// - or create synthetic VHD which is not a stream
if (children.length !== 1) {
// TODO: implement merging multiple children
children.length = 1
child = children[0]
}

onLog(`merging ${child} into ${parent}`)
logInfo(`merging VHD chain`, { chain })

let done, total
const handle = setInterval(() => {
if (done !== undefined) {
onLog(`merging ${child}: ${done}/${total}`)
logInfo('merge in progress', {
done,
parent: chain[0],
progress: Math.round((100 * done) / total),
total,
})
}
}, 10e3)

const mergedSize = await mergeVhd(
handler,
parent,
handler,
child,
// children.length === 1
// ? child
// : await createSyntheticStream(handler, children),
{
try {
return await mergeVhdChain(handler, chain, {
logInfo,
onProgress({ done: d, total: t }) {
done = d
total = t
},
}
)

clearInterval(handle)
await Promise.all([
VhdAbstract.rename(handler, parent, child),
asyncMap(children.slice(0, -1), child => {
onLog(`the VHD ${child} is unused`)
if (remove) {
onLog(`deleting unused VHD ${child}`)
return VhdAbstract.unlink(handler, child)
}
}),
])

return mergedSize
removeUnused: remove,
})
} finally {
clearInterval(handle)
}
}
}

const noop = Function.prototype

const INTERRUPTED_VHDS_REG = /^\.(.+)\.merge.json$/
const listVhds = async (handler, vmDir) => {
const listVhds = async (handler, vmDir, logWarn) => {
const vhds = new Set()
const aliases = {}
const interruptedVhds = new Map()

@@ -123,12 +90,23 @@ const listVhds = async (handler, vmDir) => {
filter: file => isVhdFile(file) || INTERRUPTED_VHDS_REG.test(file),
})
aliases[vdiDir] = list.filter(vhd => isVhdAlias(vhd)).map(file => `${vdiDir}/${file}`)
list.forEach(file => {

await asyncMap(list, async file => {
const res = INTERRUPTED_VHDS_REG.exec(file)
if (res === null) {
vhds.add(`${vdiDir}/${file}`)
} else {
interruptedVhds.set(`${vdiDir}/${res[1]}`, `${vdiDir}/${file}`)
try {
const mergeState = JSON.parse(await handler.readFile(`${vdiDir}/${file}`))
interruptedVhds.set(`${vdiDir}/${res[1]}`, {
statePath: `${vdiDir}/${file}`,
chain: mergeState.chain,
})
} catch (error) {
// fall back to a non-resuming merge
vhds.add(`${vdiDir}/${file}`)
logWarn('failed to read existing merge state', { path: file, error })
}
}
})
}

@@ -138,16 +116,21 @@ const listVhds = async (handler, vmDir) => {
return { vhds, interruptedVhds, aliases }
}

async function checkAliases(aliasPaths, targetDataRepository, { handler, onLog = noop, remove = false }) {
async function checkAliases(
aliasPaths,
targetDataRepository,
{ handler, logInfo = noop, logWarn = console.warn, remove = false }
) {
const aliasFound = []
for (const path of aliasPaths) {
const target = await resolveVhdAlias(handler, path)
for (const alias of aliasPaths) {
const target = await resolveVhdAlias(handler, alias)

if (!isVhdFile(target)) {
onLog(`Alias ${path} references a non vhd target: ${target}`)
logWarn('alias references non VHD target', { alias, target })
if (remove) {
logInfo('removing alias and non VHD target', { alias, target })
await handler.unlink(target)
await handler.unlink(path)
await handler.unlink(alias)
}
continue
}

@@ -160,13 +143,13 @@ async function checkAliases(aliasPaths, targetDataRepository, { handler, onLog =
// error during dispose should not trigger a deletion
}
} catch (error) {
onLog(`target ${target} of alias ${path} is missing or broken`, { error })
logWarn('missing or broken alias target', { alias, target, error })
if (remove) {
try {
await VhdAbstract.unlink(handler, path)
} catch (e) {
if (e.code !== 'ENOENT') {
onLog(`Error while deleting target ${target} of alias ${path}`, { error: e })
await VhdAbstract.unlink(handler, alias)
} catch (error) {
if (error.code !== 'ENOENT') {
logWarn('error deleting alias target', { alias, target, error })
}
}
}

@@ -176,37 +159,40 @@ async function checkAliases(aliasPaths, targetDataRepository, { handler, onLog =
aliasFound.push(resolve('/', target))
}

const entries = await handler.list(targetDataRepository, {
const vhds = await handler.list(targetDataRepository, {
ignoreMissing: true,
prependDir: true,
})

entries.forEach(async entry => {
if (!aliasFound.includes(entry)) {
onLog(`the Vhd ${entry} is not referenced by a an alias`)
await asyncMap(vhds, async path => {
if (!aliasFound.includes(path)) {
logWarn('no alias references VHD', { path })
if (remove) {
await VhdAbstract.unlink(handler, entry)
logInfo('deleting unused VHD', { path })
await VhdAbstract.unlink(handler, path)
}
}
})
}

exports.checkAliases = checkAliases

const defaultMergeLimiter = limitConcurrency(1)

exports.cleanVm = async function cleanVm(
vmDir,
{ fixMetadata, remove, merge, mergeLimiter = defaultMergeLimiter, onLog = noop }
{ fixMetadata, remove, merge, mergeLimiter = defaultMergeLimiter, logInfo = noop, logWarn = console.warn }
) {
const limitedMergeVhdChain = mergeLimiter(mergeVhdChain)
const limitedMergeVhdChain = mergeLimiter(_mergeVhdChain)

const handler = this._handler

const vhdsToJSons = new Set()
const vhdById = new Map()
const vhdParents = { __proto__: null }
const vhdChildren = { __proto__: null }

const { vhds, interruptedVhds, aliases } = await listVhds(handler, vmDir)
const { vhds, interruptedVhds, aliases } = await listVhds(handler, vmDir, logWarn)

// remove broken VHDs
await asyncMap(vhds, async path => {

@@ -224,12 +210,31 @@ exports.cleanVm = async function cleanVm(
}
vhdChildren[parent] = path
}
// Detect VHDs with the same UUIDs
//
// Due to a bug introduced in a1bcd35e2
const duplicate = vhdById.get(UUID.stringify(vhd.footer.uuid))
let vhdKept = vhd
if (duplicate !== undefined) {
logWarn('uuid is duplicated', { uuid: UUID.stringify(vhd.footer.uuid) })
if (duplicate.containsAllDataOf(vhd)) {
logWarn(`should delete ${path}`)
vhdKept = duplicate
vhds.delete(path)
} else if (vhd.containsAllDataOf(duplicate)) {
logWarn(`should delete ${duplicate._path}`)
vhds.delete(duplicate._path)
} else {
logWarn('same ids but different content')
}
}
vhdById.set(UUID.stringify(vhdKept.footer.uuid), vhdKept)
})
} catch (error) {
vhds.delete(path)
onLog(`error while checking the VHD with path ${path}`, { error })
logWarn('VHD check error', { path, error })
if (error?.code === 'ERR_ASSERTION' && remove) {
onLog(`deleting broken ${path}`)
logInfo('deleting broken VHD', { path })
return VhdAbstract.unlink(handler, path)
}
}

@@ -238,15 +243,15 @@ exports.cleanVm = async function cleanVm(
// remove interrupted merge states for missing VHDs
for (const interruptedVhd of interruptedVhds.keys()) {
if (!vhds.has(interruptedVhd)) {
const statePath = interruptedVhds.get(interruptedVhd)
const { statePath } = interruptedVhds.get(interruptedVhd)
interruptedVhds.delete(interruptedVhd)

onLog('orphan merge state', {
logWarn('orphan merge state', {
mergeStatePath: statePath,
missingVhdPath: interruptedVhd,
})
if (remove) {
onLog(`deleting orphan merge state ${statePath}`)
logInfo('deleting orphan merge state', { statePath })
await handler.unlink(statePath)
}
}

@@ -255,7 +260,7 @@ exports.cleanVm = async function cleanVm(
// check if aliases are correct
// check if all vhds in the data subfolder have a corresponding alias
await asyncMap(Object.keys(aliases), async dir => {
await checkAliases(aliases[dir], `${dir}/data`, { handler, onLog, remove })
await checkAliases(aliases[dir], `${dir}/data`, { handler, logInfo, logWarn, remove })
})

// remove VHDs with missing ancestors

@@ -277,9 +282,9 @@ exports.cleanVm = async function cleanVm(
if (!vhds.has(parent)) {
vhds.delete(vhdPath)

onLog(`the parent ${parent} of the VHD ${vhdPath} is missing`)
logWarn('parent VHD is missing', { parent, child: vhdPath })
if (remove) {
onLog(`deleting orphan VHD ${vhdPath}`)
logInfo('deleting orphan VHD', { path: vhdPath })
deletions.push(VhdAbstract.unlink(handler, vhdPath))
}
}

@@ -316,7 +321,7 @@ exports.cleanVm = async function cleanVm(
// check is not good enough to delete the file, the best we can do is report
// it
if (!(await this.isValidXva(path))) {
onLog(`the XVA with path ${path} is potentially broken`)
logWarn('XVA might be broken', { path })
}
})

@@ -330,7 +335,7 @@ exports.cleanVm = async function cleanVm(
try {
metadata = JSON.parse(await handler.readFile(json))
} catch (error) {
onLog(`failed to read metadata file ${json}`, { error })
logWarn('failed to read backup metadata', { path: json, error })
jsons.delete(json)
return
}

@@ -341,9 +346,9 @@ exports.cleanVm = async function cleanVm(
if (xvas.has(linkedXva)) {
unusedXvas.delete(linkedXva)
} else {
onLog(`the XVA linked to the metadata ${json} is missing`)
logWarn('the XVA linked to the backup is missing', { backup: json, xva: linkedXva })
if (remove) {
onLog(`deleting incomplete backup ${json}`)
logInfo('deleting incomplete backup', { path: json })
jsons.delete(json)
await handler.unlink(json)
}

@@ -364,9 +369,9 @@ exports.cleanVm = async function cleanVm(
vhdsToJSons[path] = json
})
} else {
onLog(`Some VHDs linked to the metadata ${json} are missing`, { missingVhds })
logWarn('some VHDs linked to the backup are missing', { backup: json, missingVhds })
if (remove) {
onLog(`deleting incomplete backup ${json}`)
logInfo('deleting incomplete backup', { path: json })
jsons.delete(json)
await handler.unlink(json)
}

@@ -378,7 +383,7 @@ exports.cleanVm = async function cleanVm(
const unusedVhdsDeletion = []
const toMerge = []
{
// VHD chains (as list from child to ancestor) to merge indexed by last
// VHD chains (as list from oldest to most recent) to merge indexed by most recent
// ancestor
const vhdChainsToMerge = { __proto__: null }

@@ -402,14 +407,14 @@ exports.cleanVm = async function cleanVm(
if (child !== undefined) {
const chain = getUsedChildChainOrDelete(child)
if (chain !== undefined) {
chain.push(vhd)
chain.unshift(vhd)
return chain
}
}

onLog(`the VHD ${vhd} is unused`)
logWarn('unused VHD', { path: vhd })
if (remove) {
onLog(`deleting unused VHD ${vhd}`)
logInfo('deleting unused VHD', { path: vhd })
unusedVhdsDeletion.push(VhdAbstract.unlink(handler, vhd))
}
}

@@ -420,7 +425,13 @@ exports.cleanVm = async function cleanVm(

// merge interrupted VHDs
for (const parent of interruptedVhds.keys()) {
vhdChainsToMerge[parent] = [vhdChildren[parent], parent]
// before #6349 the chain wasn't in the mergeState
const { chain, statePath } = interruptedVhds.get(parent)
if (chain === undefined) {
vhdChainsToMerge[parent] = [parent, vhdChildren[parent]]
} else {
vhdChainsToMerge[parent] = chain.map(vhdPath => handlerPath.resolveFromFile(statePath, vhdPath))
}
}

Object.values(vhdChainsToMerge).forEach(chain => {

@@ -433,9 +444,9 @@ exports.cleanVm = async function cleanVm(
const metadataWithMergedVhd = {}
const doMerge = async () => {
await asyncMap(toMerge, async chain => {
const merged = await limitedMergeVhdChain(chain, { handler, onLog, remove, merge })
const merged = await limitedMergeVhdChain(handler, chain, { logInfo, logWarn, remove, merge })
if (merged !== undefined) {
const metadataPath = vhdsToJSons[chain[0]] // all the chain should have the same metada file
const metadataPath = vhdsToJSons[chain[chain.length - 1]] // all the chain should have the same metadata file
metadataWithMergedVhd[metadataPath] = true
}
})

@@ -445,18 +456,18 @@ exports.cleanVm = async function cleanVm(
...unusedVhdsDeletion,
toMerge.length !== 0 && (merge ? Task.run({ name: 'merge' }, doMerge) : doMerge()),
asyncMap(unusedXvas, path => {
onLog(`the XVA ${path} is unused`)
logWarn('unused XVA', { path })
if (remove) {
onLog(`deleting unused XVA ${path}`)
logInfo('deleting unused XVA', { path })
return handler.unlink(path)
}
}),
asyncMap(xvaSums, path => {
// no need to handle checksums for XVAs deleted by the script, they will be handled by `unlink()`
if (!xvas.has(path.slice(0, -'.checksum'.length))) {
onLog(`the XVA checksum ${path} is unused`)
logInfo('unused XVA checksum', { path })
if (remove) {
onLog(`deleting unused XVA checksum ${path}`)
logInfo('deleting unused XVA checksum', { path })
return handler.unlink(path)
}
}

@@ -478,7 +489,11 @@ exports.cleanVm = async function cleanVm(
if (mode === 'full') {
// a full backup : check size
const linkedXva = resolve('/', vmDir, xva)
fileSystemSize = await handler.getSize(linkedXva)
try {
fileSystemSize = await handler.getSize(linkedXva)
} catch (error) {
// can fail with encrypted remote
}
} else if (mode === 'delta') {
const linkedVhds = Object.keys(vhds).map(key => resolve('/', vmDir, vhds[key]))
fileSystemSize = await computeVhdsSize(handler, linkedVhds)

@@ -490,11 +505,15 @@ exports.cleanVm = async function cleanVm(

// don't warn if the size has changed after a merge
if (!merged && fileSystemSize !== size) {
onLog(`incorrect size in metadata: ${size ?? 'none'} instead of ${fileSystemSize}`)
logWarn('incorrect backup size in metadata', {
path: metadataPath,
actual: size ?? 'none',
expected: fileSystemSize,
})
}
}
} catch (error) {
onLog(`failed to get size of ${metadataPath}`, { error })
logWarn('failed to get backup size', { backup: metadataPath, error })
return
}

@@ -504,7 +523,7 @@ exports.cleanVm = async function cleanVm(
try {
await handler.writeFile(metadataPath, JSON.stringify(metadata), { flags: 'w' })
} catch (error) {
onLog(`failed to update size in backup metadata ${metadataPath} after merge`, { error })
logWarn('failed to update backup size in metadata', { path: metadataPath, error })
}
}
})

@@ -65,17 +65,6 @@ exports.exportDeltaVm = async function exportDeltaVm(
return
}

// If the VDI name start with `[NOBAK]`, do not export it.
if (vdi.name_label.startsWith('[NOBAK]')) {
// FIXME: find a way to not create the VDI snapshot in the
// first time.
//
// The snapshot must not exist otherwise it could break the
// next export.
ignoreErrors.call(vdi.$destroy())
return
}

vbds[vbd.$ref] = vbd

const vdiRef = vdi.$ref

@@ -3,6 +3,8 @@
const eos = require('end-of-stream')
const { PassThrough } = require('stream')

const { debug } = require('@xen-orchestra/log').createLogger('xo:backups:forkStreamUnpipe')

// create a new readable stream from an existing one which may be piped later
//
// in case of error in the new readable stream, it will simply be unpiped

@@ -11,18 +13,23 @@ exports.forkStreamUnpipe = function forkStreamUnpipe(stream) {
const { forks = 0 } = stream
stream.forks = forks + 1

debug('forking', { forks: stream.forks })

const proxy = new PassThrough()
stream.pipe(proxy)
eos(stream, error => {
if (error !== undefined) {
debug('error on original stream, destroying fork', { error })
proxy.destroy(error)
}
})
eos(proxy, _ => {
stream.forks--
eos(proxy, error => {
debug('end of stream, unpiping', { error, forks: --stream.forks })

stream.unpipe(proxy)

if (stream.forks === 0) {
debug('no more forks, destroying original stream')
stream.destroy(new Error('no more consumers for this stream'))
}
})
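A usage sketch of this helper (the require path is an assumption; within this package the function is exported from the file patched above):

```js
const { createReadStream, createWriteStream } = require('fs')
const { forkStreamUnpipe } = require('./_forkStreamUnpipe.js') // assumed path

const source = createReadStream('backup.xva')

// each consumer gets its own PassThrough fork of the same source
forkStreamUnpipe(source).pipe(createWriteStream('copy1.xva'))
forkStreamUnpipe(source).pipe(createWriteStream('copy2.xva'))
// once every fork has ended, the source itself is destroyed
```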
@@ -49,6 +49,11 @@ const isValidTar = async (handler, size, fd) => {
// TODO: find a heuristic for compressed files
async function isValidXva(path) {
const handler = this._handler

// size is longer when encrypted + reading part of an encrypted file is not implemented
if (handler.isEncrypted) {
return true
}
try {
const fd = await handler.openFile(path, 'r')
try {

@@ -66,7 +71,6 @@ async function isValidXva(path) {
}
} catch (error) {
// never throw, log and report as valid to avoid side effects
console.error('isValidXva', path, error)
return true
}
}
@@ -6,9 +6,16 @@
- [Task logs](#task-logs)
  - [During backup](#during-backup)
  - [During restoration](#during-restoration)
- [API](#api)
  - [Run description object](#run-description-object)
  - [`IdPattern`](#idpattern)
  - [Settings](#settings)
- [Writer API](#writer-api)

## File structure on remote

### with vhd files

```
<remote>
└─ xo-vm-backups

@@ -25,6 +32,19 @@
      └─ <YYYYMMDD>T<HHmmss>.xva.checksum
```

### with vhd directories

When `useVhdDirectory` is enabled on the remote, the directory containing the VHDs has a slightly different architecture:

```
<vdis>/<job UUID>/<VDI UUID>
  ├─ <YYYYMMDD>T<HHmmss>.alias.vhd // contains the relative path to a VHD directory
  ├─ <YYYYMMDD>T<HHmmss>.alias.vhd
  └─ data
    ├─ <uuid>.vhd // VHD directory format is described in vhd-lib/Vhd/VhdDirectory.js
    └─ <uuid>.vhd
```

## Attributes

### Of created snapshots

@@ -64,24 +84,30 @@ job.start(data: { mode: Mode, reportWhen: ReportWhen })
├─ task.warning(message: string)
├─ task.start(data: { type: 'VM', id: string })
│  ├─ task.warning(message: string)
│  ├─ task.start(message: 'clean-vm')
│  │  └─ task.end
│  ├─ task.start(message: 'snapshot')
│  │  └─ task.end
│  ├─ task.start(message: 'export', data: { type: 'SR' | 'remote', id: string })
│  ├─ task.start(message: 'export', data: { type: 'SR' | 'remote', id: string, isFull: boolean })
│  │  ├─ task.warning(message: string)
│  │  ├─ task.start(message: 'transfer')
│  │  │  ├─ task.warning(message: string)
│  │  │  └─ task.end(result: { size: number })
│  │  │
│  │  │  // in case there is a healthcheck scheduled for this vm in this job
│  │  ├─ task.start(message: 'health check')
│  │  │  ├─ task.start(message: 'transfer')
│  │  │  │  └─ task.end(result: { size: number })
│  │  │  ├─ task.start(message: 'vmstart')
│  │  │  │  └─ task.end
│  │  │  └─ task.end
│  │  │
│  │  │  // in case of full backup, DR and CR
│  │  ├─ task.start(message: 'clean')
│  │  │  ├─ task.warning(message: string)
│  │  │  └─ task.end
│  │  │
│  │  │  // in case of delta backup
│  │  ├─ task.start(message: 'merge')
│  │  │  ├─ task.warning(message: string)
│  │  │  └─ task.end(result: { size: number })
│  │  │
│  │  └─ task.end
│  ├─ task.start(message: 'clean-vm')
│  │  └─ task.end
│  └─ task.end
└─ job.end

@@ -95,3 +121,102 @@ task.start(message: 'restore', data: { jobId: string, srId: string, time: number
│  └─ task.end(result: { id: string, size: number })
└─ task.end
```

## API

### Run description object

This is a JavaScript object containing all the information necessary to run a backup job.

```coffee
# Information about the job itself
job:

  # Unique identifier
  id: string

  # Human readable identifier
  name: string

  # Whether this job is doing Full Backup / Disaster Recovery or
  # Delta Backup / Continuous Replication
  mode: 'full' | 'delta'

  # For backup jobs, indicates which remotes to use
  remotes: IdPattern

  settings:

    # Used for the whole job
    '': Settings

    # Used for a specific schedule
    [ScheduleId]: Settings

    # Used for a specific VM
    [VmId]: Settings

  # For replication jobs, indicates which SRs to use
  srs: IdPattern

  # Here for historical reasons
  type: 'backup'

  # Indicates which VMs to backup/replicate
  vms: IdPattern

# Indicates which XAPI to use to connect to a specific VM or SR
recordToXapi:
  [ObjectId]: XapiId

# Information necessary to connect to each remote
remotes:
  [RemoteId]:
    url: string

# Indicates which schedule is used for this run
schedule:
  id: ScheduleId

# Information necessary to connect to each XAPI
xapis:
  [XapiId]:
    allowUnauthorized: boolean
    credentials:
      password: string
      username: string
    url: string
```
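A minimal concrete instance may make the shape easier to read. All identifiers and values below are made up for illustration; the real objects carry many more settings:

```js
// an illustrative run description (hypothetical values throughout)
const run = {
  job: {
    id: 'job1',
    name: 'nightly',
    mode: 'delta',
    remotes: { id: 'remote1' },
    settings: {
      '': { reportWhen: 'failure' }, // whole job
      schedule1: { snapshotRetention: 7 }, // one schedule
    },
    type: 'backup',
    vms: { id: { __or: ['vm1', 'vm2'] } },
  },
  recordToXapi: { vm1: 'xapi1', vm2: 'xapi1' },
  remotes: { remote1: { url: 'nfs://192.0.2.1:/backups' } },
  schedule: { id: 'schedule1' },
  xapis: {
    xapi1: {
      allowUnauthorized: false,
      credentials: { username: 'root', password: 'secret' },
      url: 'https://xcp-host.example',
    },
  },
}
```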
### `IdPattern`

For a single object:

```
{ id: string }
```

For multiple objects:

```
{ id: { __or: string[] } }
```

> This syntax is compatible with [`value-matcher`](https://github.com/vatesfr/xen-orchestra/tree/master/packages/value-matcher).
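For instance, a pattern can be turned into a predicate with `value-matcher`'s `createPredicate` and used to filter objects (a small sketch):

```js
const { createPredicate } = require('value-matcher')

// matches any object whose id is one of the listed ones
const predicate = createPredicate({ id: { __or: ['vm1', 'vm2'] } })

predicate({ id: 'vm1' }) // → true
predicate({ id: 'vm3' }) // → false
```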
### Settings

Settings are described in [`@xen-orchestra/backups/Backup.js`](https://github.com/vatesfr/xen-orchestra/blob/master/%40xen-orchestra/backups/Backup.js).

## Writer API

Writers implement the following lifecycle methods; a minimal skeleton is sketched after this list.

- `beforeBackup()`
- **Delta**
  - `checkBaseVdis(baseUuidToSrcVdi, baseVm)`
  - `prepare({ isFull })`
  - `transfer({ timestamp, deltaExport, sizeContainers })`
  - `cleanup()`
  - `healthCheck(sr)`
- **Full**
  - `run({ timestamp, sizeContainer, stream })`
- `afterBackup()`
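A sketch of a delta writer following this lifecycle. It is illustrative only: the real writers also extend `MixinBackupWriter`/`AbstractDeltaWriter` and receive far more context than shown here.

```js
// hypothetical no-op delta writer, for illustration of the lifecycle only
class NoopDeltaWriter {
  beforeBackup() {}

  checkBaseVdis(baseUuidToSrcVdi, baseVm) {
    // remove entries that cannot be used as a delta base
  }

  prepare({ isFull }) {}

  async transfer({ timestamp, deltaExport, sizeContainers }) {
    // consume the export streams here
  }

  cleanup() {}

  healthCheck(sr) {
    // restore the backup on sr and check that the restored VM boots
  }

  afterBackup() {}
}
```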
@@ -1,4 +1,6 @@
#!/usr/bin/env node
// eslint-disable-next-line eslint-comments/disable-enable-pair
/* eslint-disable n/shebang */

'use strict'

@@ -62,7 +64,7 @@ const main = Disposable.wrap(async function* main(args) {
try {
const vmDir = getVmBackupDir(String(await handler.readFile(taskFile)))
try {
await adapter.cleanVm(vmDir, { merge: true, onLog: info, remove: true })
await adapter.cleanVm(vmDir, { merge: true, logInfo: info, logWarn: warn, remove: true })
} catch (error) {
// consider the clean successful if the VM dir is missing
if (error.code !== 'ENOENT') {

@@ -8,7 +8,7 @@
"type": "git",
"url": "https://github.com/vatesfr/xen-orchestra.git"
},
"version": "0.21.0",
"version": "0.27.4",
"engines": {
"node": ">=14.6"
},

@@ -16,16 +16,18 @@
"postversion": "npm publish --access public"
},
"dependencies": {
"@vates/cached-dns.lookup": "^1.0.0",
"@vates/compose": "^2.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/disposable": "^0.1.1",
"@vates/parse-duration": "^0.1.1",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/fs": "^1.0.0",
"@xen-orchestra/fs": "^3.0.0",
"@xen-orchestra/log": "^0.3.0",
"@xen-orchestra/template": "^0.1.0",
"compare-versions": "^4.0.1",
"d3-time-format": "^3.0.0",
"decorator-synchronized": "^0.6.0",
"end-of-stream": "^1.4.4",
"fs-extra": "^10.0.0",
"golike-defer": "^0.5.1",

@@ -36,7 +38,7 @@
"promise-toolbox": "^0.21.0",
"proper-lockfile": "^4.1.2",
"uuid": "^8.3.2",
"vhd-lib": "^3.1.0",
"vhd-lib": "^4.0.0",
"yazl": "^2.5.1"
},
"devDependencies": {

@@ -44,7 +46,7 @@
"tmp": "^0.2.1"
},
"peerDependencies": {
"@xen-orchestra/xapi": "^0.10.0"
"@xen-orchestra/xapi": "^1.4.2"
},
"license": "AGPL-3.0-or-later",
"author": {
@@ -19,6 +19,8 @@ const { AbstractDeltaWriter } = require('./_AbstractDeltaWriter.js')
const { checkVhd } = require('./_checkVhd.js')
const { packUuid } = require('./_packUuid.js')
const { Disposable } = require('promise-toolbox')
const { HealthCheckVmBackup } = require('../HealthCheckVmBackup.js')
const { ImportVmBackup } = require('../ImportVmBackup.js')

const { warn } = createLogger('xo:backups:DeltaBackupWriter')

@@ -69,6 +71,35 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
return this._cleanVm({ merge: true })
}

healthCheck(sr) {
return Task.run(
{
name: 'health check',
},
async () => {
const xapi = sr.$xapi
const srUuid = sr.uuid
const adapter = this._adapter
const metadata = await adapter.readVmBackupMetadata(this._metadataFileName)
const { id: restoredId } = await new ImportVmBackup({
adapter,
metadata,
srUuid,
xapi,
}).run()
const restoredVm = xapi.getObject(restoredId)
try {
await new HealthCheckVmBackup({
restoredVm,
xapi,
}).run()
} finally {
await xapi.VM_destroy(restoredVm.$ref)
}
}
)
}

prepare({ isFull }) {
// create the task related to this export and ensure all methods are called in this context
const task = new Task({

@@ -80,7 +111,9 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
},
})
this.transfer = task.wrapFn(this.transfer)
this.cleanup = task.wrapFn(this.cleanup, true)
this.healthCheck = task.wrapFn(this.healthCheck)
this.cleanup = task.wrapFn(this.cleanup)
this.afterBackup = task.wrapFn(this.afterBackup, true)

return task.run(() => this._prepare())
}

@@ -156,7 +189,7 @@ exports.DeltaBackupWriter = class DeltaBackupWriter extends MixinBackupWriter(Ab
}/${adapter.getVhdFileName(basename)}`
)

const metadataFilename = `${backupDir}/${basename}.json`
const metadataFilename = (this._metadataFileName = `${backupDir}/${basename}.json`)
const metadataContent = {
jobId,
mode: job.mode,

@@ -9,4 +9,6 @@ exports.AbstractWriter = class AbstractWriter {
beforeBackup() {}

afterBackup() {}

healthCheck(sr) {}
}

@@ -6,8 +6,9 @@ const { join } = require('path')
const { getVmBackupDir } = require('../_getVmBackupDir.js')
const MergeWorker = require('../merge-worker/index.js')
const { formatFilenameDate } = require('../_filenameDate.js')
const { Task } = require('../Task.js')

const { warn } = createLogger('xo:backups:MixinBackupWriter')
const { info, warn } = createLogger('xo:backups:MixinBackupWriter')

exports.MixinBackupWriter = (BaseClass = Object) =>
class MixinBackupWriter extends BaseClass {

@@ -25,11 +26,17 @@ exports.MixinBackupWriter = (BaseClass = Object) =>

async _cleanVm(options) {
try {
return await this._adapter.cleanVm(this.#vmBackupDir, {
...options,
fixMetadata: true,
onLog: warn,
lock: false,
return await Task.run({ name: 'clean-vm' }, () => {
return this._adapter.cleanVm(this.#vmBackupDir, {
...options,
fixMetadata: true,
logInfo: info,
logWarn: (message, data) => {
warn(message, data)
Task.warning(message, data)
},
lock: false,
})
})
} catch (error) {
warn(error)

@@ -64,5 +71,6 @@ exports.MixinBackupWriter = (BaseClass = Object) =>
const remotePath = handler._getRealPath()
await MergeWorker.run(remotePath)
}
await this._adapter.invalidateVmBackupListCache(this._backup.vm.uuid)
}
}

@@ -18,7 +18,7 @@
"preferGlobal": true,
"dependencies": {
"golike-defer": "^0.5.1",
"xen-api": "^1.1.0"
"xen-api": "^1.2.2"
},
"scripts": {
"postversion": "npm publish"

@@ -22,7 +22,7 @@ await ee.emitAsync('start')
// error handling though:
await ee.emitAsync(
{
onError(error) {
onError(error, event, listener) {
console.warn(error)
},
},

@@ -40,7 +40,7 @@ await ee.emitAsync('start')
// error handling though:
await ee.emitAsync(
{
onError(error) {
onError(error, event, listener) {
console.warn(error)
},
},

@@ -1,5 +1,7 @@
'use strict'

const identity = v => v

module.exports = function emitAsync(event) {
let opts
let i = 1

@@ -17,12 +19,18 @@ module.exports = function emitAsync(event) {
}

const onError = opts != null && opts.onError
const addErrorHandler = onError
? (promise, listener) => promise.catch(error => onError(error, event, listener))
: identity

return Promise.all(
this.listeners(event).map(listener =>
new Promise(resolve => {
resolve(listener.apply(this, args))
}).catch(onError)
addErrorHandler(
new Promise(resolve => {
resolve(listener.apply(this, args))
}),
listener
)
)
)
}
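A usage sketch of the new `onError` signature, assuming `emitAsync` is mixed into an emitter as in the README snippets above:

```js
const EventEmitter = require('events')
const emitAsync = require('@xen-orchestra/emit-async')

const ee = new EventEmitter()
ee.emitAsync = emitAsync

ee.on('start', async () => {
  /* async work */
})
ee.on('start', () => {
  throw new Error('boom')
})

// all listeners are awaited in parallel; a failure no longer rejects the whole
// emit but is reported to onError along with the event name and faulty listener
await ee.emitAsync(
  {
    onError: (error, event, listener) => console.warn(event, error.message),
  },
  'start'
)
```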
@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/emit-async",
"version": "0.1.0",
"version": "1.0.0",
"license": "ISC",
"description": "Emit an event for async listeners to settle",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/emit-async",

@xen-orchestra/fs/docs/encryption.md (new file)
@@ -0,0 +1,19 @@
## metadata files

- Older remotes don't have any metadata file
- Remotes used since 5.75 have two files: `encryption.json` and `metadata.json`

The metadata files are checked by the `sync()` method. If the check fails, it MUST throw an error and unmount.

If the remote is empty, the `sync()` method creates them.

### encryption.json

A non-encrypted file containing the algorithm and parameters used for this remote.
It MUST NOT contain the key.

### metadata.json

An encrypted JSON file containing the settings of the remote. Today it is a nearly empty JSON file (`{ random: <random uuid> }`) whose only purpose is to check that the encryption key set on the remote is valid, but in the future it will be able to store remote settings to ease disaster recovery.

If this file can't be read (decrypted, decompressed, …), that means the remote settings have been updated. If the remote is empty, update the `encryption.json` and `metadata.json` files; otherwise raise an error.
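An illustrative `encryption.json` is shown below. This is a guess at the shape based on `_getEncryptor()` in `@xen-orchestra/fs/src/_encryptor.js` (also part of this change set), which currently only uses `aes-256-cbc`; the actual fields may differ:

```
{ "algorithm": "aes-256-cbc" }
```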
@@ -1,7 +1,7 @@
{
"private": false,
"name": "@xen-orchestra/fs",
"version": "1.0.0",
"version": "3.0.0",
"license": "AGPL-3.0-or-later",
"description": "The File System for Xen Orchestra backups.",
"homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/fs",

@@ -17,19 +17,20 @@
"xo-fs": "./cli.js"
},
"engines": {
"node": ">=14"
"node": ">=14.13"
},
"dependencies": {
"@marsaud/smb2": "^0.18.0",
"@sindresorhus/df": "^3.1.1",
"@vates/async-each": "^0.1.0",
"@vates/coalesce-calls": "^0.1.0",
"@vates/decorate-with": "^2.0.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.3.0",
"@aws-sdk/client-s3": "^3.54.0",
"@aws-sdk/lib-storage": "^3.54.0",
"@aws-sdk/middleware-apply-body-checksum": "^3.58.0",
"@aws-sdk/node-http-handler": "^3.54.0",
"@sindresorhus/df": "^3.1.1",
"@vates/async-each": "^1.0.0",
"@vates/coalesce-calls": "^0.1.0",
"@vates/decorate-with": "^2.0.0",
"@vates/read-chunk": "^1.0.0",
"@xen-orchestra/async-map": "^0.1.2",
"@xen-orchestra/log": "^0.3.0",
"bind-property-descriptor": "^2.0.0",
"decorator-synchronized": "^0.6.0",
"execa": "^5.0.0",

@@ -39,9 +40,10 @@
"lodash": "^4.17.4",
"promise-toolbox": "^0.21.0",
"proper-lockfile": "^4.1.2",
"readable-stream": "^3.0.6",
"pumpify": "^2.0.1",
"readable-stream": "^4.1.0",
"through2": "^4.0.2",
"xo-remote-parser": "^0.8.0"
"xo-remote-parser": "^0.9.1"
},
"devDependencies": {
"@babel/cli": "^7.0.0",

@@ -49,10 +51,9 @@
"@babel/plugin-proposal-decorators": "^7.1.6",
"@babel/plugin-proposal-function-bind": "^7.0.0",
"@babel/preset-env": "^7.8.0",
"async-iterator-to-stream": "^1.1.0",
"babel-plugin-lodash": "^3.3.2",
"cross-env": "^7.0.2",
"dotenv": "^15.0.0",
"dotenv": "^16.0.0",
"rimraf": "^3.0.0"
},
"scripts": {

@@ -67,5 +68,9 @@
"author": {
"name": "Vates SAS",
"url": "https://vates.fr"
},
"exports": {
".": "./dist/index.js",
"./path": "./dist/path.js"
}
}

@xen-orchestra/fs/src/_encryptor.js (new file)
@@ -0,0 +1,71 @@
const { readChunk } = require('@vates/read-chunk')
const crypto = require('crypto')
const pumpify = require('pumpify')

function getEncryptor(key) {
  if (key === undefined) {
    return {
      id: 'NULL_ENCRYPTOR',
      algorithm: 'none',
      key: 'none',
      ivLength: 0,
      encryptData: buffer => buffer,
      encryptStream: stream => stream,
      decryptData: buffer => buffer,
      decryptStream: stream => stream,
    }
  }
  const algorithm = 'aes-256-cbc'
  const ivLength = 16

  function encryptStream(input) {
    const iv = crypto.randomBytes(ivLength)
    const cipher = crypto.createCipheriv(algorithm, Buffer.from(key), iv)

    const encrypted = pumpify(input, cipher)
    encrypted.unshift(iv)
    return encrypted
  }

  async function decryptStream(encryptedStream) {
    const iv = await readChunk(encryptedStream, ivLength)
    const cipher = crypto.createDecipheriv(algorithm, Buffer.from(key), iv)
    /**
     * WARNING
     *
     * the encrypted stream contains an initialization vector + padding at the end:
     * we can't predict the decrypted size from the encrypted size,
     * thus we can't set decrypted.length reliably
     */
    return pumpify(encryptedStream, cipher)
  }

  function encryptData(buffer) {
    const iv = crypto.randomBytes(ivLength)
    const cipher = crypto.createCipheriv(algorithm, Buffer.from(key), iv)
    const encrypted = cipher.update(buffer)
    return Buffer.concat([iv, encrypted, cipher.final()])
  }

  function decryptData(buffer) {
    const iv = buffer.slice(0, ivLength)
    const encrypted = buffer.slice(ivLength)
    const decipher = crypto.createDecipheriv(algorithm, Buffer.from(key), iv)
    const decrypted = decipher.update(encrypted)
    return Buffer.concat([decrypted, decipher.final()])
  }

  return {
    id: algorithm,
    algorithm,
    key,
    ivLength,
    encryptData,
    encryptStream,
    decryptData,
    decryptStream,
  }
}

exports._getEncryptor = getEncryptor
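A round-trip usage sketch of this encryptor (assuming a 32-byte key, as required by `aes-256-cbc`):

```js
const crypto = require('crypto')
const { _getEncryptor } = require('./_encryptor')

const key = crypto.randomBytes(32)
const { encryptData, decryptData } = _getEncryptor(key)

const clear = Buffer.from('hello')
const encrypted = encryptData(clear) // 16-byte IV + ciphertext + padding
console.log(decryptData(encrypted).equals(clear)) // true
```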
|
||||
@@ -1,15 +1,20 @@
|
||||
import asyncMapSettled from '@xen-orchestra/async-map/legacy'
import assert from 'assert'
import getStream from 'get-stream'
import { coalesceCalls } from '@vates/coalesce-calls'
import { createLogger } from '@xen-orchestra/log'
import { fromCallback, fromEvent, ignoreErrors, timeout } from 'promise-toolbox'
import { limitConcurrency } from 'limit-concurrency-decorator'
import { parse } from 'xo-remote-parser'
import { pipeline } from 'stream'
import { randomBytes } from 'crypto'
import { randomBytes, randomUUID } from 'crypto'
import { synchronized } from 'decorator-synchronized'

import { basename, dirname, normalize as normalizePath } from './_path'
import { basename, dirname, normalize as normalizePath } from './path'
import { createChecksumStream, validChecksumOfReadStream } from './checksum'
import { _getEncryptor } from './_encryptor'

const { info, warn } = createLogger('@xen-orchestra:fs')

const checksumFile = file => file + '.checksum'
const computeRate = (hrtime, size) => {
@@ -20,6 +25,9 @@ const computeRate = (hrtime, size) => {
const DEFAULT_TIMEOUT = 6e5 // 10 min
const DEFAULT_MAX_PARALLEL_OPERATIONS = 10

const ENCRYPTION_DESC_FILENAME = 'encryption.json'
const ENCRYPTION_METADATA_FILENAME = 'metadata.json'

const ignoreEnoent = error => {
  if (error == null || error.code !== 'ENOENT') {
    throw error
@@ -60,6 +68,7 @@ class PrefixWrapper {
}

export default class RemoteHandlerAbstract {
  _encryptor
  constructor(remote, options = {}) {
    if (remote.url === 'test://') {
      this._remote = remote
@@ -70,6 +79,7 @@ export default class RemoteHandlerAbstract {
      }
    }
    ;({ highWaterMark: this._highWaterMark, timeout: this._timeout = DEFAULT_TIMEOUT } = options)
    this._encryptor = _getEncryptor(this._remote.encryptionKey)

    const sharedLimit = limitConcurrency(options.maxParallelOperations ?? DEFAULT_MAX_PARALLEL_OPERATIONS)
    this.closeFile = sharedLimit(this.closeFile)
@@ -108,90 +118,51 @@ export default class RemoteHandlerAbstract {
    await this.__closeFile(fd)
  }

  // TODO: remove method
  async createOutputStream(file, { checksum = false, dirMode, ...options } = {}) {
  async createReadStream(file, { checksum = false, ignoreMissingChecksum = false, ...options } = {}) {
    if (options.end !== undefined || options.start !== undefined) {
      assert.strictEqual(this.isEncrypted, false, `Can't read part of a file when encryption is active ${file}`)
    }
    if (typeof file === 'string') {
      file = normalizePath(file)
    }
    const path = typeof file === 'string' ? file : file.path
    const streamP = timeout.call(
      this._createOutputStream(file, {
        dirMode,
        flags: 'wx',
        ...options,
      }),

    let stream = await timeout.call(
      this._createReadStream(file, { ...options, highWaterMark: this._highWaterMark }),
      this._timeout
    )

    if (!checksum) {
      return streamP
    }
    // detect early errors
    await fromEvent(stream, 'readable')

    const checksumStream = createChecksumStream()
    const forwardError = error => {
      checksumStream.emit('error', error)
    }
    if (checksum) {
      try {
        const path = typeof file === 'string' ? file : file.path
        const checksum = await this._readFile(checksumFile(path), { flags: 'r' })

        const stream = await streamP
        stream.on('error', forwardError)
        checksumStream.pipe(stream)

        checksumStream.checksumWritten = checksumStream.checksum
          .then(value => this._outputFile(checksumFile(path), value, { flags: 'wx' }))
          .catch(forwardError)

        return checksumStream
      }

  createReadStream(file, { checksum = false, ignoreMissingChecksum = false, ...options } = {}) {
    if (typeof file === 'string') {
      file = normalizePath(file)
    }
    const path = typeof file === 'string' ? file : file.path
    const streamP = timeout
      .call(this._createReadStream(file, { ...options, highWaterMark: this._highWaterMark }), this._timeout)
      .then(stream => {
        // detect early errors
        let promise = fromEvent(stream, 'readable')

        // try to add the length prop if missing and not a range stream
        if (stream.length === undefined && options.end === undefined && options.start === undefined) {
          promise = Promise.all([
            promise,
            ignoreErrors.call(
              this._getSize(file).then(size => {
                stream.length = size
              })
            ),
          ])
        const { length } = stream
        stream = validChecksumOfReadStream(stream, String(checksum).trim())
        stream.length = length
      } catch (error) {
        if (!(ignoreMissingChecksum && error.code === 'ENOENT')) {
          throw error
        }

        return promise.then(() => stream)
      })

    if (!checksum) {
      return streamP
    }

    // avoid an unhandled rejection warning
    ignoreErrors.call(streamP)

    return this._readFile(checksumFile(path), { flags: 'r' }).then(
      checksum =>
        streamP.then(stream => {
          const { length } = stream
          stream = validChecksumOfReadStream(stream, String(checksum).trim())
          stream.length = length

          return stream
        }),
      error => {
        if (ignoreMissingChecksum && error && error.code === 'ENOENT') {
          return streamP
        }
        throw error
      }
    )
  }

    if (this.isEncrypted) {
      stream = this._encryptor.decryptStream(stream)
    } else {
      // try to add the length prop if missing and not a range stream
      if (stream.length === undefined && options.end === undefined && options.start === undefined) {
        try {
          stream.length = await this._getSize(file)
        } catch (error) {
          // ignore errors
        }
      }
    }

    return stream
  }

  /**
@@ -207,6 +178,8 @@ export default class RemoteHandlerAbstract {
  async outputStream(path, input, { checksum = true, dirMode, validator } = {}) {
    path = normalizePath(path)
    let checksumStream

    input = this._encryptor.encryptStream(input)
    if (checksum) {
      checksumStream = createChecksumStream()
      pipeline(input, checksumStream, noop)
@@ -217,6 +190,8 @@ export default class RemoteHandlerAbstract {
      validator,
    })
    if (checksum) {
      // using _outputFile means the checksum will NOT be encrypted
      // it is by design to allow checking of encrypted files without the key
      await this._outputFile(checksumFile(path), await checksumStream.checksum, { dirMode, flags: 'wx' })
    }
  }
@@ -236,8 +211,13 @@ export default class RemoteHandlerAbstract {
    return timeout.call(this._getInfo(), this._timeout)
  }

  // when using encryption, the file size is aligned with the encryption block size (16 bytes),
  // which means that the size will be 1 to 16 bytes more than the content size plus the initialization vector length (16 bytes)
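  // (illustrative numbers, not taken from the code: with a 16-byte block cipher,
  // a 100-byte cleartext is padded up to 112 bytes, so the stored file weighs
  // 16 bytes of IV + 112 bytes of ciphertext = 128 bytes)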
  async getSize(file) {
    return timeout.call(this._getSize(typeof file === 'string' ? normalizePath(file) : file), this._timeout)
    assert.strictEqual(this.isEncrypted, false, `Can't compute size of an encrypted file ${file}`)

    const size = await timeout.call(this._getSize(typeof file === 'string' ? normalizePath(file) : file), this._timeout)
    return size - this._encryptor.ivLength
  }

  async list(dir, { filter, ignoreMissing = false, prependDir = false } = {}) {
@@ -283,15 +263,18 @@
  }

  async outputFile(file, data, { dirMode, flags = 'wx' } = {}) {
    await this._outputFile(normalizePath(file), data, { dirMode, flags })
    const encryptedData = this._encryptor.encryptData(data)
    await this._outputFile(normalizePath(file), encryptedData, { dirMode, flags })
  }

  async read(file, buffer, position) {
    assert.strictEqual(this.isEncrypted, false, `Can't read part of an encrypted file ${file}`)
    return this._read(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
  }

  async readFile(file, { flags = 'r' } = {}) {
    return this._readFile(normalizePath(file), { flags })
    const data = await this._readFile(normalizePath(file), { flags })
    return this._encryptor.decryptData(data)
  }

  async rename(oldPath, newPath, { checksum = false } = {}) {
@@ -331,6 +314,61 @@ export default class RemoteHandlerAbstract {
  @synchronized()
  async sync() {
    await this._sync()
    try {
      await this._checkMetadata()
    } catch (error) {
      await this._forget()
      throw error
    }
  }

  async _canWriteMetadata() {
    const list = await this.list('/', {
      filter: e => !e.startsWith('.') && e !== ENCRYPTION_DESC_FILENAME && e !== ENCRYPTION_METADATA_FILENAME,
    })
    return list.length === 0
  }

  async _createMetadata() {
    await Promise.all([
      this._writeFile(
        normalizePath(ENCRYPTION_DESC_FILENAME),
        JSON.stringify({ algorithm: this._encryptor.algorithm }),
        {
          flags: 'w',
        }
      ), // not encrypted
      this.writeFile(ENCRYPTION_METADATA_FILENAME, `{"random":"${randomUUID()}"}`, { flags: 'w' }), // encrypted
    ])
  }

  async _checkMetadata() {
    try {
      // this file is not encrypted
      const data = await this._readFile(normalizePath(ENCRYPTION_DESC_FILENAME))
      JSON.parse(data)
    } catch (error) {
      if (error.code !== 'ENOENT') {
        throw error
      }
    }

    try {
      // this file is encrypted
      const data = await this.readFile(ENCRYPTION_METADATA_FILENAME)
      JSON.parse(data)
    } catch (error) {
      if (error.code === 'ENOENT' || (await this._canWriteMetadata())) {
        info('will update metadata of this remote')
        return this._createMetadata()
      }
      warn(
        `The encryptionKey setting of this remote does not match the key used to create it. You won't be able to read any data from this remote`,
        { error }
      )
      // will probably throw an ERR_OSSL_EVP_BAD_DECRYPT if the key is incorrect
      throw error
    }
  }
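  // (illustration, using the file names from the constants above: `encryption.json`
  // stores something like {"algorithm":"..."} in clear so a remote can be inspected
  // without the key, while `metadata.json` only decrypts with the right key —
  // which is what makes a key mismatch detectable here)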

  async test() {
@@ -357,11 +395,12 @@ export default class RemoteHandlerAbstract {
        readRate: computeRate(readDuration, SIZE),
      }
    } catch (error) {
      warn(`error while testing the remote at step ${step}`, { error })
      return {
        success: false,
        step,
        file: testFileName,
        error: error.message || String(error),
        error,
      }
    } finally {
      ignoreErrors.call(this._unlink(testFileName))
@@ -383,11 +422,13 @@ export default class RemoteHandlerAbstract {
  }

  async write(file, buffer, position) {
    assert.strictEqual(this.isEncrypted, false, `Can't write part of a file with encryption ${file}`)
    await this._write(typeof file === 'string' ? normalizePath(file) : file, buffer, position)
  }

  async writeFile(file, data, { flags = 'wx' } = {}) {
    await this._writeFile(normalizePath(file), data, { flags })
    const encryptedData = this._encryptor.encryptData(data)
    await this._writeFile(normalizePath(file), encryptedData, { flags })
  }

  // Methods that can be called by private methods to avoid parallel limit on public methods
@@ -420,6 +461,10 @@ export default class RemoteHandlerAbstract {

  // Methods that can be implemented by inheriting classes

  useVhdDirectory() {
    return this._remote.useVhdDirectory ?? false
  }

  async _closeFile(fd) {
    throw new Error('Not implemented')
  }
@@ -502,9 +547,13 @@ export default class RemoteHandlerAbstract {

  async _outputStream(path, input, { dirMode, validator }) {
    const tmpPath = `${dirname(path)}/.${basename(path)}`
    const output = await this.createOutputStream(tmpPath, {
      dirMode,
    })
    const output = await timeout.call(
      this._createOutputStream(tmpPath, {
        dirMode,
        flags: 'wx',
      }),
      this._timeout
    )
    try {
      await fromCallback(pipeline, input, output)
      if (validator !== undefined) {
@@ -587,6 +636,10 @@ export default class RemoteHandlerAbstract {
  async _writeFile(file, data, options) {
    throw new Error('Not implemented')
  }

  get isEncrypted() {
    return this._encryptor.id !== 'NULL_ENCRYPTOR'
  }
}

function createPrefixWrapperMethods() {

@@ -30,18 +30,6 @@ describe('closeFile()', () => {
  })
})

describe('createOutputStream()', () => {
  it(`throws in case of timeout`, async () => {
    const testHandler = new TestHandler({
      createOutputStream: () => new Promise(() => {}),
    })

    const promise = testHandler.createOutputStream('File')
    jest.advanceTimersByTime(TIMEOUT)
    await expect(promise).rejects.toThrowError(TimeoutError)
  })
})

describe('getInfo()', () => {
  it('throws in case of timeout', async () => {
    const testHandler = new TestHandler({

@@ -1,10 +1,7 @@
/* eslint-env jest */

import 'dotenv/config'
import asyncIteratorToStream from 'async-iterator-to-stream'
import { forOwn, random } from 'lodash'
import { fromCallback } from 'promise-toolbox'
import { pipeline } from 'readable-stream'
import { tmpdir } from 'os'

import { getHandler } from '.'
@@ -27,9 +24,6 @@ const unsecureRandomBytes = n => {

const TEST_DATA_LEN = 1024
const TEST_DATA = unsecureRandomBytes(TEST_DATA_LEN)
const createTestDataStream = asyncIteratorToStream(function* () {
  yield TEST_DATA
})

const rejectionOf = p =>
  p.then(
@@ -82,14 +76,6 @@ handlers.forEach(url => {
    })
  })

  describe('#createOutputStream()', () => {
    it('creates parent dir if missing', async () => {
      const stream = await handler.createOutputStream('dir/file')
      await fromCallback(pipeline, createTestDataStream(), stream)
      await expect(await handler.readFile('dir/file')).toEqual(TEST_DATA)
    })
  })

  describe('#getInfo()', () => {
    let info
    beforeAll(async () => {

@@ -5,7 +5,6 @@ import RemoteHandlerLocal from './local'
import RemoteHandlerNfs from './nfs'
import RemoteHandlerS3 from './s3'
import RemoteHandlerSmb from './smb'
import RemoteHandlerSmbMount from './smb-mount'

const HANDLERS = {
  file: RemoteHandlerLocal,
@@ -15,10 +14,8 @@ const HANDLERS = {

try {
  execa.sync('mount.cifs', ['-V'])
  HANDLERS.smb = RemoteHandlerSmbMount
} catch (_) {
  HANDLERS.smb = RemoteHandlerSmb
}
} catch (_) {}

export const getHandler = (remote, ...rest) => {
  const Handler = HANDLERS[parse(remote.url).type]

@@ -1,13 +1,38 @@
import df from '@sindresorhus/df'
import fs from 'fs-extra'
import lockfile from 'proper-lockfile'
import { createLogger } from '@xen-orchestra/log'
import { fromEvent, retry } from 'promise-toolbox'

import RemoteHandlerAbstract from './abstract'

const { info, warn } = createLogger('xo:fs:local')

// save current stack trace and add it to any rejected error
//
// This is especially useful when the resolution is separate from the initial
// call, which is often the case with RPC libs.
//
// There is a perf impact and it should be avoided in production.
async function addSyncStackTrace(fn, ...args) {
  const stackContainer = new Error()
  try {
    return await fn.apply(this, args)
  } catch (error) {
    error.syncStack = stackContainer.stack
    throw error
  }
}

function dontAddSyncStackTrace(fn, ...args) {
  return fn.apply(this, args)
}

export default class LocalHandler extends RemoteHandlerAbstract {
  constructor(remote, opts = {}) {
    super(remote)

    this._addSyncStackTrace = opts.syncStackTraces ?? true ? addSyncStackTrace : dontAddSyncStackTrace
    this._retriesOnEagain = {
      delay: 1e3,
      retries: 9,
@@ -30,17 +55,17 @@ export default class LocalHandler extends RemoteHandlerAbstract {
  }

  async _closeFile(fd) {
    return fs.close(fd)
    return this._addSyncStackTrace(fs.close, fd)
  }

  async _copy(oldPath, newPath) {
    return fs.copy(this._getFilePath(oldPath), this._getFilePath(newPath))
    return this._addSyncStackTrace(fs.copy, this._getFilePath(oldPath), this._getFilePath(newPath))
  }

  async _createReadStream(file, options) {
    if (typeof file === 'string') {
      const stream = fs.createReadStream(this._getFilePath(file), options)
      await fromEvent(stream, 'open')
      await this._addSyncStackTrace(fromEvent, stream, 'open')
      return stream
    }
    return fs.createReadStream('', {
@@ -53,7 +78,7 @@ export default class LocalHandler extends RemoteHandlerAbstract {
  async _createWriteStream(file, options) {
    if (typeof file === 'string') {
      const stream = fs.createWriteStream(this._getFilePath(file), options)
      await fromEvent(stream, 'open')
      await this._addSyncStackTrace(fromEvent, stream, 'open')
      return stream
    }
    return fs.createWriteStream('', {
@@ -79,71 +104,98 @@ export default class LocalHandler extends RemoteHandlerAbstract {
  }

  async _getSize(file) {
    const stats = await fs.stat(this._getFilePath(typeof file === 'string' ? file : file.path))
    const stats = await this._addSyncStackTrace(fs.stat, this._getFilePath(typeof file === 'string' ? file : file.path))
    return stats.size
  }

  async _list(dir) {
    return fs.readdir(this._getFilePath(dir))
    return this._addSyncStackTrace(fs.readdir, this._getFilePath(dir))
  }

  _lock(path) {
    return lockfile.lock(this._getFilePath(path))
  async _lock(path) {
    const acquire = lockfile.lock.bind(undefined, this._getFilePath(path), {
      async onCompromised(error) {
        warn('lock compromised', { error })
        try {
          release = await acquire()
          info('compromised lock was reacquired')
        } catch (error) {
          warn('compromised lock could not be reacquired', { error })
        }
      },
    })

    let release = await this._addSyncStackTrace(acquire)

    return async () => {
      try {
        await this._addSyncStackTrace(release)
      } catch (error) {
        warn('lock could not be released', { error })
      }
    }
  }
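  // usage sketch (hypothetical caller, internal API): the returned function
  // releases the lock, and a release failure is only logged, never thrown
  //
  //   const unlock = await handler._lock('/xo-vm-backups')
  //   try {
  //     // ...exclusive work on the remote...
  //   } finally {
  //     await unlock()
  //   }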

  _mkdir(dir, { mode }) {
    return fs.mkdir(this._getFilePath(dir), { mode })
    return this._addSyncStackTrace(fs.mkdir, this._getFilePath(dir), { mode })
  }

  async _openFile(path, flags) {
    return fs.open(this._getFilePath(path), flags)
    return this._addSyncStackTrace(fs.open, this._getFilePath(path), flags)
  }

  async _read(file, buffer, position) {
    const needsClose = typeof file === 'string'
    file = needsClose ? await fs.open(this._getFilePath(file), 'r') : file.fd
    file = needsClose ? await this._addSyncStackTrace(fs.open, this._getFilePath(file), 'r') : file.fd
    try {
      return await fs.read(file, buffer, 0, buffer.length, position === undefined ? null : position)
      return await this._addSyncStackTrace(
        fs.read,
        file,
        buffer,
        0,
        buffer.length,
        position === undefined ? null : position
      )
    } finally {
      if (needsClose) {
        await fs.close(file)
        await this._addSyncStackTrace(fs.close, file)
      }
    }
  }

  async _readFile(file, options) {
    const filePath = this._getFilePath(file)
    return await retry(() => fs.readFile(filePath, options), this._retriesOnEagain)
    return await this._addSyncStackTrace(retry, () => fs.readFile(filePath, options), this._retriesOnEagain)
  }

  async _rename(oldPath, newPath) {
    return fs.rename(this._getFilePath(oldPath), this._getFilePath(newPath))
    return this._addSyncStackTrace(fs.rename, this._getFilePath(oldPath), this._getFilePath(newPath))
  }

  async _rmdir(dir) {
    return fs.rmdir(this._getFilePath(dir))
    return this._addSyncStackTrace(fs.rmdir, this._getFilePath(dir))
  }

  async _sync() {
    const path = this._getRealPath('/')
    await fs.ensureDir(path)
    await fs.access(path, fs.R_OK | fs.W_OK)
    await this._addSyncStackTrace(fs.ensureDir, path)
    await this._addSyncStackTrace(fs.access, path, fs.R_OK | fs.W_OK)
  }

  _truncate(file, len) {
    return fs.truncate(this._getFilePath(file), len)
    return this._addSyncStackTrace(fs.truncate, this._getFilePath(file), len)
  }

  async _unlink(file) {
    const filePath = this._getFilePath(file)
    return await retry(() => fs.unlink(filePath), this._retriesOnEagain)
    return await this._addSyncStackTrace(retry, () => fs.unlink(filePath), this._retriesOnEagain)
  }

  _writeFd(file, buffer, position) {
    return fs.write(file.fd, buffer, 0, buffer.length, position)
    return this._addSyncStackTrace(fs.write, file.fd, buffer, 0, buffer.length, position)
  }

  _writeFile(file, data, { flags }) {
    return fs.writeFile(this._getFilePath(file), data, { flag: flags })
    return this._addSyncStackTrace(fs.writeFile, this._getFilePath(file), data, { flag: flags })
  }
}

@@ -1,6 +1,6 @@
import path from 'path'

const { basename, dirname, join, resolve, sep } = path.posix
const { basename, dirname, join, resolve, relative, sep } = path.posix

export { basename, dirname, join }

@@ -19,3 +19,6 @@ export function split(path) {

  return parts
}

export const relativeFromFile = (file, path) => relative(dirname(file), path)
export const resolveFromFile = (file, path) => resolve('/', dirname(file), path).slice(1)
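// (illustration, paths invented: with file = 'backups/vm1/meta.json',
// relativeFromFile(file, 'backups/vm1/vhds/disk.vhd') → 'vhds/disk.vhd'
// and resolveFromFile(file, 'vhds/disk.vhd') → 'backups/vm1/vhds/disk.vhd')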
@@ -14,6 +14,7 @@ import {
} from '@aws-sdk/client-s3'
import { Upload } from '@aws-sdk/lib-storage'
import { NodeHttpHandler } from '@aws-sdk/node-http-handler'
import { getApplyMd5BodyChecksumPlugin } from '@aws-sdk/middleware-apply-body-checksum'
import assert from 'assert'
import { Agent as HttpAgent } from 'http'
import { Agent as HttpsAgent } from 'https'
@@ -26,7 +27,7 @@ import copyStreamToBuffer from './_copyStreamToBuffer.js'
import createBufferFromStream from './_createBufferFromStream.js'
import guessAwsRegion from './_guessAwsRegion.js'
import RemoteHandlerAbstract from './abstract'
import { basename, join, split } from './_path'
import { basename, join, split } from './path'
import { asyncEach } from '@vates/async-each'

// endpoints https://docs.aws.amazon.com/general/latest/gr/s3.html
@@ -75,6 +76,9 @@ export default class S3Handler extends RemoteHandlerAbstract {
      }),
    })

    // Workaround for https://github.com/aws/aws-sdk-js-v3/issues/2673
    this._s3.middlewareStack.use(getApplyMd5BodyChecksumPlugin(this._s3.config))

    const parts = split(path)
    this._bucket = parts.shift()
    this._dir = join(...parts)
@@ -93,7 +97,12 @@ export default class S3Handler extends RemoteHandlerAbstract {
  }

  _makePrefix(dir) {
    return join(this._dir, dir, '/')
    const prefix = join(this._dir, dir, '/')

    // no prefix for root
    if (prefix !== './') {
      return prefix
    }
  }

  _createParams(file) {
@@ -146,6 +155,14 @@ export default class S3Handler extends RemoteHandlerAbstract {
      if (e.name === 'EntityTooLarge') {
        return this._multipartCopy(oldPath, newPath)
      }
      // normalize this error code
      if (e.name === 'NoSuchKey') {
        const error = new Error(`ENOENT: no such file or directory '${oldPath}'`)
        error.cause = e
        error.code = 'ENOENT'
        error.path = oldPath
        throw error
      }
      throw e
    }
  }
@@ -226,14 +243,17 @@ export default class S3Handler extends RemoteHandlerAbstract {
  }

  async _createReadStream(path, options) {
    if (!(await this._isFile(path))) {
      const error = new Error(`ENOENT: no such file '${path}'`)
      error.code = 'ENOENT'
      error.path = path
      throw error
    try {
      return (await this._s3.send(new GetObjectCommand(this._createParams(path)))).Body
    } catch (e) {
      if (e.name === 'NoSuchKey') {
        const error = new Error(`ENOENT: no such file '${path}'`)
        error.code = 'ENOENT'
        error.path = path
        throw error
      }
      throw e
    }

    return (await this._s3.send(new GetObjectCommand(this._createParams(path)))).Body
  }

  async _unlink(path) {
@@ -513,4 +533,8 @@ export default class S3Handler extends RemoteHandlerAbstract {
  }

  async _closeFile(fd) {}

  useVhdDirectory() {
    return true
  }
}

@@ -1,23 +0,0 @@
import { parse } from 'xo-remote-parser'

import MountHandler from './_mount'
import { normalize } from './_path'

export default class SmbMountHandler extends MountHandler {
  constructor(remote, opts) {
    const { domain = 'WORKGROUP', host, password, path, username } = parse(remote.url)
    super(remote, opts, {
      type: 'cifs',
      device: '//' + host + normalize(path),
      options: `domain=${domain}`,
      env: {
        USER: username,
        PASSWD: password,
      },
    })
  }

  get type() {
    return 'smb'
  }
}
@@ -1,163 +1,23 @@
import Smb2 from '@marsaud/smb2'
import { parse } from 'xo-remote-parser'

import RemoteHandlerAbstract from './abstract'
import MountHandler from './_mount'
import { normalize } from './path'

// Normalize the error code for file not found.
const wrapError = (error, code) => ({
  __proto__: error,
  cause: error,
  code,
})
const normalizeError = (error, shouldBeDirectory) => {
  const { code } = error

  throw code === 'STATUS_DIRECTORY_NOT_EMPTY'
    ? wrapError(error, 'ENOTEMPTY')
    : code === 'STATUS_FILE_IS_A_DIRECTORY'
    ? wrapError(error, 'EISDIR')
    : code === 'STATUS_NOT_A_DIRECTORY'
    ? wrapError(error, 'ENOTDIR')
    : code === 'STATUS_OBJECT_NAME_NOT_FOUND' || code === 'STATUS_OBJECT_PATH_NOT_FOUND'
    ? wrapError(error, 'ENOENT')
    : code === 'STATUS_OBJECT_NAME_COLLISION'
    ? wrapError(error, 'EEXIST')
    : code === 'STATUS_NOT_SUPPORTED' || code === 'STATUS_INVALID_PARAMETER'
    ? wrapError(error, shouldBeDirectory ? 'ENOTDIR' : 'EISDIR')
    : error
}
const normalizeDirError = error => normalizeError(error, true)

export default class SmbHandler extends RemoteHandlerAbstract {
export default class SmbHandler extends MountHandler {
  constructor(remote, opts) {
    super(remote, opts)

    // defined in _sync()
    this._client = undefined

    const prefix = this._remote.path
    this._prefix = prefix !== '' ? prefix + '\\' : prefix
    const { domain = 'WORKGROUP', host, password, path, username } = parse(remote.url)
    super(remote, opts, {
      type: 'cifs',
      device: '//' + host + normalize(path),
      options: `domain=${domain}`,
      env: {
        USER: username,
        PASSWD: password,
      },
    })
  }

  get type() {
    return 'smb'
  }

  _getFilePath(file) {
    return this._prefix + (typeof file === 'string' ? file : file.path).slice(1).replace(/\//g, '\\')
  }

  _dirname(file) {
    const parts = file.split('\\')
    parts.pop()
    return parts.join('\\')
  }

  _closeFile(file) {
    return this._client.close(file).catch(normalizeError)
  }

  _createReadStream(file, options) {
    if (typeof file === 'string') {
      file = this._getFilePath(file)
    } else {
      options = { autoClose: false, ...options, fd: file.fd }
      file = ''
    }
    return this._client.createReadStream(file, options).catch(normalizeError)
  }

  _createWriteStream(file, options) {
    if (typeof file === 'string') {
      file = this._getFilePath(file)
    } else {
      options = { autoClose: false, ...options, fd: file.fd }
      file = ''
    }
    return this._client.createWriteStream(file, options).catch(normalizeError)
  }

  _forget() {
    const client = this._client
    this._client = undefined
    return client.disconnect()
  }

  _getSize(file) {
    return this._client.getSize(this._getFilePath(file)).catch(normalizeError)
  }

  _list(dir) {
    return this._client.readdir(this._getFilePath(dir)).catch(normalizeDirError)
  }

  _mkdir(dir, { mode }) {
    return this._client.mkdir(this._getFilePath(dir), mode).catch(normalizeDirError)
  }

  // TODO: add flags
  _openFile(path, flags) {
    return this._client.open(this._getFilePath(path), flags).catch(normalizeError)
  }

  async _read(file, buffer, position) {
    const client = this._client
    const needsClose = typeof file === 'string'
    file = needsClose ? await client.open(this._getFilePath(file)) : file.fd
    try {
      return await client.read(file, buffer, 0, buffer.length, position)
    } catch (error) {
      normalizeError(error)
    } finally {
      if (needsClose) {
        await client.close(file)
      }
    }
  }

  _readFile(file, options) {
    return this._client.readFile(this._getFilePath(file), options).catch(normalizeError)
  }

  _rename(oldPath, newPath) {
    return this._client
      .rename(this._getFilePath(oldPath), this._getFilePath(newPath), {
        replace: true,
      })
      .catch(normalizeError)
  }

  _rmdir(dir) {
    return this._client.rmdir(this._getFilePath(dir)).catch(normalizeDirError)
  }

  _sync() {
    const remote = this._remote

    this._client = new Smb2({
      share: `\\\\${remote.host}`,
      domain: remote.domain,
      username: remote.username,
      password: remote.password,
      autoCloseTimeout: 0,
    })

    // Check access (smb2 does not expose connect in public so far...)
    return this.list('.')
  }

  _truncate(file, len) {
    return this._client.truncate(this._getFilePath(file), len).catch(normalizeError)
  }

  _unlink(file) {
    return this._client.unlink(this._getFilePath(file)).catch(normalizeError)
  }

  _writeFd(file, buffer, position) {
    return this._client.write(file.fd, buffer, 0, buffer.length, position)
  }

  _writeFile(file, data, options) {
    return this._client.writeFile(this._getFilePath(file), data, options).catch(normalizeError)
  }
}

@@ -1,15 +1,16 @@
'use strict'

const get = require('lodash/get')
const identity = require('lodash/identity')
const isEqual = require('lodash/isEqual')
const { createLogger } = require('@xen-orchestra/log')
const { parseDuration } = require('@vates/parse-duration')
const { watch } = require('app-conf')
import get from 'lodash/get.js'
import identity from 'lodash/identity.js'
import isEqual from 'lodash/isEqual.js'
import { createLogger } from '@xen-orchestra/log'
import { parseDuration } from '@vates/parse-duration'
import { watch } from 'app-conf'

const { warn } = createLogger('xo:mixins:config')

module.exports = class Config {
// if path is undefined, an empty string or an empty array, returns the root value
const niceGet = (value, path) => (path === undefined || path.length === 0 ? value : get(value, path))

export default class Config {
  constructor(app, { appDir, appName, config }) {
    this._config = config
    const watchers = (this._watchers = new Set())
@@ -32,7 +33,7 @@ module.exports = class Config {
  }

  get(path) {
    const value = get(this._config, path)
    const value = niceGet(this._config, path)
    if (value === undefined) {
      throw new TypeError('missing config entry: ' + path)
    }
@@ -44,20 +45,27 @@ module.exports = class Config {
  }

  getOptional(path) {
    return get(this._config, path)
    return niceGet(this._config, path)
  }
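  // (illustration of the difference, keys invented: get('http.proxy') throws a
  // TypeError when the entry is missing, getOptional('http.proxy') returns
  // undefined, and get() with no path returns the whole config via niceGet)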

  watch(path, cb) {
    // short syntax for the whole config: watch(cb)
    if (typeof path === 'function') {
      cb = path
      path = undefined
    }

    // internal arg
    const processor = arguments.length > 2 ? arguments[2] : identity

    let prev
    const watcher = config => {
      try {
        const value = processor(get(config, path))
        const value = processor(niceGet(config, path))
        if (!isEqual(value, prev)) {
          const previous = prev
          prev = value
          cb(value)
          cb(value, previous, path)
        }
      } catch (error) {
        warn('watch', { error, path })
@@ -1,51 +0,0 @@
'use strict'

const assert = require('assert')
const emitAsync = require('@xen-orchestra/emit-async')
const EventEmitter = require('events')
const { createLogger } = require('@xen-orchestra/log')

const { debug, warn } = createLogger('xo:mixins:hooks')

const runHook = async (emitter, hook) => {
  debug(`${hook} start…`)
  await emitAsync.call(
    emitter,
    {
      onError: error => warn(`${hook} failure`, { error }),
    },
    hook
  )
  debug(`${hook} finished`)
}

module.exports = class Hooks extends EventEmitter {
  // Run *clean* async listeners.
  //
  // They normalize existing data, clear invalid entries, etc.
  clean() {
    return runHook(this, 'clean')
  }

  _status = 'stopped'

  // Run *start* async listeners.
  //
  // They initialize the application.
  async start() {
    assert.strictEqual(this._status, 'stopped')
    this._status = 'starting'
    await runHook(this, 'start')
    this.emit((this._status = 'started'))
  }

  // Run *stop* async listeners.
  //
  // They close connections, unmount file systems, save states, etc.
  async stop() {
    assert.strictEqual(this._status, 'started')
    this._status = 'stopping'
    await runHook(this, 'stop')
    this.emit((this._status = 'stopped'))
  }
}
@xen-orchestra/mixins/Hooks.mjs (new file, 70 lines)
@@ -0,0 +1,70 @@
import assert from 'assert'
import emitAsync from '@xen-orchestra/emit-async'
import EventEmitter from 'events'
import { createLogger } from '@xen-orchestra/log'

const { debug, warn } = createLogger('xo:mixins:hooks')

const runHook = async (emitter, hook) => {
  debug(`${hook} start…`)
  await emitAsync.call(
    emitter,
    {
      onError: error => warn(`${hook} failure`, { error }),
    },
    hook
  )
  debug(`${hook} finished`)
}

export default class Hooks extends EventEmitter {
  // Run *clean* async listeners.
  //
  // They normalize existing data, clear invalid entries, etc.
  clean() {
    return runHook(this, 'clean')
  }

  _status = 'stopped'

  // Run *start* async listeners.
  //
  // They initialize the application.
  //
  // *startCore* is automatically called if necessary.
  async start() {
    if (this._status === 'stopped') {
      await this.startCore()
    } else {
      assert.strictEqual(this._status, 'core started')
    }
    this._status = 'starting'
    await runHook(this, 'start')
    this.emit((this._status = 'started'))
  }

  // Run *start core* async listeners.
  //
  // They initialize core features of the application (connect to databases,
  // etc.) and should be fast and side-effect free.
  async startCore() {
    assert.strictEqual(this._status, 'stopped')
    this._status = 'starting core'
    await runHook(this, 'start core')
    this.emit((this._status = 'core started'))
  }

  // Run *stop* async listeners if necessary and *stop core* listeners.
  //
  // They close connections, unmount file systems, save states, etc.
  async stop() {
    if (this._status !== 'core started') {
      assert.strictEqual(this._status, 'started')
      this._status = 'stopping'
      await runHook(this, 'stop')
      this._status = 'core started'
    }
    await runHook(this, 'stop core')
    this.emit((this._status = 'stopped'))
  }
}
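// usage sketch (listener bodies invented): listeners are plain EventEmitter
// handlers named after the hook, awaited through emitAsync
//
//   const hooks = new Hooks()
//   hooks.on('start core', () => connectDatabase())
//   hooks.on('start', () => listenHttp())
//   await hooks.start() // runs 'start core' then 'start'
//   await hooks.stop() // runs 'stop' then 'stop core'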
@xen-orchestra/mixins/HttpProxy.mjs (new file, 144 lines)
@@ -0,0 +1,144 @@
import { createLogger } from '@xen-orchestra/log'
import { EventListenersManager } from '@vates/event-listeners-manager'
import { pipeline } from 'stream'
import { ServerResponse, request } from 'http'
import assert from 'assert'
import fromCallback from 'promise-toolbox/fromCallback'
import fromEvent from 'promise-toolbox/fromEvent'
import net from 'net'

import { parseBasicAuth } from './_parseBasicAuth.mjs'

const { debug, warn } = createLogger('xo:mixins:HttpProxy')

const IGNORED_HEADERS = new Set([
  // https://datatracker.ietf.org/doc/html/rfc2616#section-13.5.1
  'connection',
  'keep-alive',
  'proxy-authenticate',
  'proxy-authorization',
  'te',
  'trailers',
  'transfer-encoding',
  'upgrade',

  // don't forward original host
  'host',
])

export default class HttpProxy {
  #app

  constructor(app, { httpServer }) {
    // don't set up the proxy if httpServer is not present
    //
    // that can happen when the app is instantiated in another context like xo-server-recover-account
    if (httpServer === undefined) {
      return
    }

    this.#app = app

    const events = new EventListenersManager(httpServer)
    app.config.watch('http.proxy.enabled', (enabled = true) => {
      events.removeAll()
      if (enabled) {
        events.add('connect', this.#handleConnect.bind(this)).add('request', this.#handleRequest.bind(this))
      }
    })
  }

  async #handleAuthentication(req, res, next) {
    const auth = parseBasicAuth(req.headers['proxy-authorization'])

    let authenticated = false

    if (auth !== undefined) {
      const app = this.#app

      if (app.authenticateUser !== undefined) {
        // xo-server
        try {
          const { user } = await app.authenticateUser(auth)
          authenticated = user.permission === 'admin'
        } catch (error) {}
      } else {
        // xo-proxy
        authenticated = (await app.authentication.findProfile(auth)) !== undefined
      }
    }

    if (authenticated) {
      return next()
    }

    // https://datatracker.ietf.org/doc/html/rfc7235#section-3.2
    res.statusCode = '407'
    res.setHeader('proxy-authenticate', 'Basic realm="proxy"')
    return res.end('Proxy Authentication Required')
  }

  // https://nodejs.org/api/http.html#event-connect
  async #handleConnect(req, clientSocket, head) {
    const { url } = req

    debug('CONNECT proxy', { url })

    // https://github.com/TooTallNate/proxy/blob/d677ef31fd4ca9f7e868b34c18b9cb22b0ff69da/proxy.js#L391-L398
    const res = new ServerResponse(req)
    res.assignSocket(clientSocket)

    try {
      await this.#handleAuthentication(req, res, async () => {
        const { port, hostname } = new URL('http://' + req.url)
        const serverSocket = net.connect(port || 80, hostname)

        await fromEvent(serverSocket, 'connect')

        clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n')
        serverSocket.write(head)
        fromCallback(pipeline, clientSocket, serverSocket).catch(warn)
        fromCallback(pipeline, serverSocket, clientSocket).catch(warn)
      })
    } catch (error) {
      warn(error)
      clientSocket.end()
    }
  }

  async #handleRequest(req, res) {
    const { url } = req

    if (url.startsWith('/')) {
      // not a proxy request
      return
    }

    debug('HTTP proxy', { url })

    try {
      assert(url.startsWith('http:'), 'HTTPS should use connect')

      await this.#handleAuthentication(req, res, async () => {
        const { headers } = req
        const pHeaders = {}
        for (const key of Object.keys(headers)) {
          if (!IGNORED_HEADERS.has(key)) {
            pHeaders[key] = headers[key]
          }
        }

        const pReq = request(url, { headers: pHeaders, method: req.method })
        fromCallback(pipeline, req, pReq).catch(warn)

        const pRes = await fromEvent(pReq, 'response')
        res.writeHead(pRes.statusCode, pRes.statusMessage, pRes.headers)
        await fromCallback(pipeline, pRes, res)
      })
    } catch (error) {
      res.statusCode = 500
      res.end('Internal Server Error')
      warn(error)
    }
  }
}
@xen-orchestra/mixins/SslCertificate.mjs (new file, 214 lines)
@@ -0,0 +1,214 @@
import { createLogger } from '@xen-orchestra/log'
import { createSecureContext } from 'tls'
import { dirname } from 'node:path'
import { X509Certificate } from 'node:crypto'
import acme from 'acme-client'
import fs from 'node:fs/promises'
import get from 'lodash/get.js'

const { debug, info, warn } = createLogger('xo:mixins:sslCertificate')

acme.setLogger(message => {
  debug(message)
})

// - create any missing parent directories
// - replace existing files
// - secure permissions (read-only for the owner)
async function outputFile(path, content) {
  await fs.mkdir(dirname(path), { recursive: true })
  try {
    await fs.unlink(path)
  } catch (error) {
    if (error.code !== 'ENOENT') {
      throw error
    }
  }
  await fs.writeFile(path, content, { flag: 'wx', mode: 0o400 })
}

// from https://github.com/publishlab/node-acme-client/blob/master/examples/auto.js
class SslCertificate {
  #cert
  #challengeCreateFn
  #challengeRemoveFn
  #delayBeforeRenewal = 30 * 24 * 60 * 60 * 1000 // 30 days
  #secureContext
  #updateSslCertificatePromise

  constructor({ challengeCreateFn, challengeRemoveFn }, cert, key) {
    this.#challengeCreateFn = challengeCreateFn
    this.#challengeRemoveFn = challengeRemoveFn

    this.#set(cert, key)
  }

  get #isValid() {
    const cert = this.#cert
    return cert !== undefined && Date.parse(cert.validTo) > Date.now() && cert.issuer !== cert.subject
  }

  get #shouldBeRenewed() {
    return !(this.#isValid && Date.parse(this.#cert.validTo) > Date.now() + this.#delayBeforeRenewal)
  }

  #set(cert, key) {
    this.#cert = new X509Certificate(cert)
    this.#secureContext = createSecureContext({ cert, key })
  }

  async getSecureContext(config) {
    if (!this.#shouldBeRenewed) {
      return this.#secureContext
    }

    if (this.#updateSslCertificatePromise === undefined) {
      // not currently updating certificate
      //
      // ensure we only refresh certificate once at a time
      //
      // promise is cleaned by #updateSslCertificate itself
      this.#updateSslCertificatePromise = this.#updateSslCertificate(config)
    }

    // old certificate is still here, return it while updating
    if (this.#isValid) {
      return this.#secureContext
    }

    return this.#updateSslCertificatePromise
  }

  async #save(certPath, cert, keyPath, key) {
    try {
      await Promise.all([outputFile(keyPath, key), outputFile(certPath, cert)])
      info('new certificate generated', { cert: certPath, key: keyPath })
    } catch (error) {
      warn(`couldn't write let's encrypt certificates to disk `, { error })
    }
  }

  async #updateSslCertificate(config) {
    const { cert: certPath, key: keyPath, acmeEmail, acmeDomain } = config
    try {
      let { acmeCa = 'letsencrypt/production' } = config
      if (!(acmeCa.startsWith('http:') || acmeCa.startsWith('https:'))) {
        acmeCa = get(acme.directory, acmeCa.split('/'))
      }

      /* Init client */
      const client = new acme.Client({
        directoryUrl: acmeCa,
        accountKey: await acme.crypto.createPrivateKey(),
      })

      /* Create CSR */
      let [key, csr] = await acme.crypto.createCsr({
        commonName: acmeDomain,
      })
      csr = csr.toString()
      key = key.toString()
      debug('Successfully generated key and csr')

      /* Certificate */
      const cert = await client.auto({
        challengeCreateFn: this.#challengeCreateFn,
        challengePriority: ['http-01'],
        challengeRemoveFn: this.#challengeRemoveFn,
        csr,
        email: acmeEmail,
        skipChallengeVerification: true,
        termsOfServiceAgreed: true,
      })
      debug('Successfully generated certificate')

      this.#set(cert, key)

      // don't wait for this
      this.#save(certPath, cert, keyPath, key)

      return this.#secureContext
    } catch (error) {
      warn(`couldn't renew ssl certificate`, { acmeDomain, error })
    } finally {
      this.#updateSslCertificatePromise = undefined
    }
  }
}

export default class SslCertificates {
  #app
  #challenges = new Map()
  #challengeHandlers = {
    challengeCreateFn: (authz, challenge, keyAuthorization) => {
      this.#challenges.set(challenge.token, keyAuthorization)
    },
    challengeRemoveFn: (authz, challenge, keyAuthorization) => {
      this.#challenges.delete(challenge.token)
    },
  }
  #handlers = new Map()

  constructor(app, { httpServer }) {
    // don't set up the mixin if httpServer is not present
    //
    // that can happen when the app is instantiated in another context like xo-server-recover-account
    if (httpServer === undefined) {
      return
    }
    const prefix = '/.well-known/acme-challenge/'
    httpServer.on('request', (req, res) => {
      const { url } = req
      if (url.startsWith(prefix)) {
        const token = url.slice(prefix.length)
        this.#acmeChallendMiddleware(req, res, token)
      }
    })

    this.#app = app

    httpServer.getSecureContext = this.getSecureContext.bind(this)
  }

  async getSecureContext(httpsDomainName, configKey, initialCert, initialKey) {
    const config = this.#app.config.get(['http', 'listen', configKey])
    const handlers = this.#handlers

    const { acmeDomain } = config

    // not a let's encrypt protected endpoint, something changed in the configuration
    if (acmeDomain === undefined) {
      handlers.delete(configKey)
      return
    }

    // server has been accessed with another domain, don't use the certificate
    if (acmeDomain !== httpsDomainName) {
      return
    }

    let handler = handlers.get(configKey)
    if (handler === undefined) {
      // register the handler for this domain
      handler = new SslCertificate(this.#challengeHandlers, initialCert, initialKey)
      handlers.set(configKey, handler)
    }
    return handler.getSecureContext(config)
  }

  // middleware that will serve the http challenge to let's encrypt servers
  #acmeChallendMiddleware(req, res, token) {
    debug('fetching challenge for token ', token)
    const challenge = this.#challenges.get(token)
    debug('challenge content is ', challenge)
    if (challenge === undefined) {
      res.statusCode = 404
      res.end()
      return
    }

    res.write(challenge)
    res.end()
    debug('successfully answered challenge ')
  }
}
@xen-orchestra/mixins/_parseBasicAuth.mjs (new file, 27 lines)
@@ -0,0 +1,27 @@
const RE = /^\s*basic\s+(.+?)\s*$/i

export function parseBasicAuth(header) {
  if (header === undefined) {
    return
  }

  const matches = RE.exec(header)
  if (matches === null) {
    return
  }

  let credentials = Buffer.from(matches[1], 'base64').toString()

  const i = credentials.indexOf(':')
  if (i === -1) {
    credentials = { token: credentials }
  } else {
    // https://datatracker.ietf.org/doc/html/rfc3986#section-3.2.1
    credentials = {
      username: credentials.slice(0, i),
      password: credentials.slice(i + 1),
    }
  }

  return credentials
}
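// (illustration of both shapes:
// parseBasicAuth('Basic ' + Buffer.from('user:pass').toString('base64'))
// → { username: 'user', password: 'pass' }, while a credential without a
// colon, e.g. a bare token, comes back as { token: '...' })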
@xen-orchestra/mixins/docs/HttpProxy.md (new file, 74 lines)
@@ -0,0 +1,74 @@
> This module provides an HTTP and HTTPS proxy for `xo-proxy` and `xo-server`.

- [Set up](#set-up)
- [Usage](#usage)
  - [`xo-proxy`](#xo-proxy)
  - [`xo-server`](#xo-server)
- [Use cases](#use-cases)
  - [Access hosts in a private network](#access-hosts-in-a-private-network)
  - [Allow upgrading xo-proxy via xo-server](#allow-upgrading-xo-proxy-via-xo-server)

## Set up

The proxy is enabled by default; to disable it, add the following lines to your config:

```toml
[http.proxy]
enabled = false
```

## Usage

For safety reasons, the proxy requires authentication to be used.

### `xo-proxy`

Use the authentication token:

```
$ cat ~/.config/xo-proxy/config.z-auto.json
{"authenticationToken":"J0BgKritQgPxoyZrBJ5ViafQfLk06YoyFwC3fmfO5wU"}
```

Proxy URL to use:

```
https://J0BgKritQgPxoyZrBJ5ViafQfLk06YoyFwC3fmfO5wU@xo-proxy.company.lan
```

### `xo-server`

> Only available for admin users.

You can use your credentials:

```
https://user:password@xo.company.lan
```

Or create a dedicated token with `xo-cli`:

```
$ xo-cli --createToken xoa.company.lan admin@admin.net
Password: ********
Successfully logged with admin@admin.net
Authentication token created

DiYBFavJwf9GODZqQJs23eAx9eh3KlsRhBi8RcoX0KM
```

And use it in the URL:

```
https://DiYBFavJwf9GODZqQJs23eAx9eh3KlsRhBi8RcoX0KM@xo.company.lan
```
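
As an illustration (assuming `curl` built with HTTPS-proxy support, and `--proxy-insecure` only if the proxy uses a self-signed certificate; the target URL is a placeholder), a request through the proxy looks like:

```
$ curl --proxy https://DiYBFavJwf9GODZqQJs23eAx9eh3KlsRhBi8RcoX0KM@xo.company.lan --proxy-insecure http://example.net/
```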

## Use cases

### Access hosts in a private network

To access hosts in a private network, deploy an XO Proxy in this network, expose its port 443 and use it as an HTTP proxy to connect to your servers in XO.

### Allow upgrading xo-proxy via xo-server

If your xo-proxy does not have direct Internet access, you can use xo-server as an HTTP proxy to make upgrades possible.
@xen-orchestra/mixins/docs/SslCertificate.md (new file, 49 lines)
@@ -0,0 +1,49 @@
> This module provides [Let's Encrypt](https://letsencrypt.org/) integration to `xo-proxy` and `xo-server`.

First of all, make sure your server is listening on HTTP on port 80 and on HTTPS on port 443.

In `xo-server`, to avoid HTTP access, enable the redirection to HTTPS:

```toml
[http]
redirectToHttps = true
```

Your server must be reachable at the configured domain by the certificate provider (e.g. Let's Encrypt), which usually means publicly reachable.

Finally, add the following entries to your HTTPS configuration.

```toml
# Must be set to true for this feature
autoCert = true

# These entries are required and indicate where the certificate and the
# private key will be saved.
cert = 'path/to/cert.pem'
key = 'path/to/key.pem'

# ACME (e.g. Let's Encrypt, ZeroSSL) CA directory
#
# Specifies the URL to the ACME CA's directory.
#
# An identifier `provider/directory` can be passed instead of a URL, see the
# list of supported directories here: https://www.npmjs.com/package/acme-client#directory-urls
#
# Note that the application cannot detect that this value has changed.
#
# In that case, delete the certificate and the key files, and restart the
# application to generate new ones.
#
# Default is 'letsencrypt/production'
acmeCa = 'zerossl/production'

# Domain for which the certificate should be created.
#
# This entry is required.
acmeDomain = 'my.domain.net'

# Optional email address which will be used for the certificate creation.
#
# It will be notified of any issues.
acmeEmail = 'admin@my.domain.net'
```
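
As a sketch of where these entries live (the section name is an assumption based on the xo-proxy defaults, where the HTTPS listener is configured under `http.listen.https`):

```toml
[http.listen.https]
port = 443
autoCert = true
cert = 'path/to/cert.pem'
key = 'path/to/key.pem'
acmeDomain = 'my.domain.net'
```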
|
||||
@@ -14,16 +14,19 @@
|
||||
"url": "https://vates.fr"
|
||||
},
|
||||
"license": "AGPL-3.0-or-later",
|
||||
"version": "0.2.0",
|
||||
"version": "0.7.1",
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
"node": ">=15.6"
|
||||
},
|
||||
"dependencies": {
|
||||
"@vates/event-listeners-manager": "^1.0.1",
|
||||
"@vates/parse-duration": "^0.1.1",
|
||||
"@xen-orchestra/emit-async": "^0.1.0",
|
||||
"@xen-orchestra/emit-async": "^1.0.0",
|
||||
"@xen-orchestra/log": "^0.3.0",
|
||||
"app-conf": "^2.0.0",
|
||||
"lodash": "^4.17.21"
|
||||
"acme-client": "^5.0.0",
|
||||
"app-conf": "^2.1.0",
|
||||
"lodash": "^4.17.21",
|
||||
"promise-toolbox": "^0.21.0"
|
||||
},
|
||||
"scripts": {
|
||||
"postversion": "npm publish --access public"
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
"type": "git",
|
||||
"url": "https://github.com/vatesfr/xen-orchestra.git"
|
||||
},
|
||||
"version": "0.1.1",
|
||||
"version": "0.1.2",
|
||||
"engines": {
|
||||
"node": ">=8.10"
|
||||
},
|
||||
@@ -30,7 +30,7 @@
|
||||
"rimraf": "^3.0.0"
|
||||
},
|
||||
"dependencies": {
|
||||
"@vates/read-chunk": "^0.1.2"
|
||||
"@vates/read-chunk": "^1.0.0"
|
||||
},
|
||||
"author": {
|
||||
"name": "Vates SAS",
|
||||
|
||||
@@ -1,25 +1,23 @@
#!/usr/bin/env node

'use strict'
import assert from 'assert'
import colors from 'ansi-colors'
import contentType from 'content-type'
import CSON from 'cson-parser'
import fromCallback from 'promise-toolbox/fromCallback'
import fs from 'fs'
import getopts from 'getopts'
import hrp from 'http-request-plus'
import split2 from 'split2'
import pumpify from 'pumpify'
import { extname } from 'path'
import { format, parse } from 'json-rpc-protocol'
import { inspect } from 'util'
import { load as loadConfig } from 'app-conf'
import { pipeline } from 'stream'
import { readChunk } from '@vates/read-chunk'

const assert = require('assert')
const colors = require('ansi-colors')
const contentType = require('content-type')
const CSON = require('cson-parser')
const fromCallback = require('promise-toolbox/fromCallback')
const fs = require('fs')
const getopts = require('getopts')
const hrp = require('http-request-plus')
const split2 = require('split2')
const pumpify = require('pumpify')
const { extname, join } = require('path')
const { format, parse } = require('json-rpc-protocol')
const { inspect } = require('util')
const { load: loadConfig } = require('app-conf')
const { pipeline } = require('stream')
const { readChunk } = require('@vates/read-chunk')

const pkg = require('./package.json')
const pkg = JSON.parse(fs.readFileSync(new URL('package.json', import.meta.url)))

const FORMATS = {
  __proto__: null,
@@ -32,30 +30,22 @@ const parseValue = value => (value.startsWith('json:') ? JSON.parse(value.slice(

async function main(argv) {
  const config = await loadConfig('xo-proxy', {
    appDir: join(__dirname, '..'),
    ignoreUnknownFormats: true,
  })

  const { hostname = 'localhost', port } = config?.http?.listen?.https ?? {}

  const {
    _: args,
    file,
    help,
    host,
    raw,
    token,
  } = getopts(argv, {
  const opts = getopts(argv, {
    alias: { file: 'f', help: 'h' },
    boolean: ['help', 'raw'],
    default: {
      token: config.authenticationToken,
    },
    stopEarly: true,
    string: ['file', 'host', 'token'],
    string: ['file', 'host', 'token', 'url'],
  })

  if (help || (file === '' && args.length === 0)) {
  const { _: args, file } = opts

  if (opts.help || (file === '' && args.length === 0)) {
    return console.log(
      '%s',
      `Usage:
@@ -80,18 +70,29 @@ ${pkg.name} v${pkg.version}`
  const baseRequest = {
    headers: {
      'content-type': 'application/json',
      cookie: `authenticationToken=${token}`,
    },
    pathname: '/api/v1',
    protocol: 'https:',
    rejectUnauthorized: false,
  }
  if (host !== '') {
    baseRequest.host = host
  let { token } = opts
  if (opts.url !== '') {
    const { protocol, host, username } = new URL(opts.url)
    Object.assign(baseRequest, { protocol, host })
    if (username !== '') {
      token = username
    }
  } else {
    baseRequest.hostname = hostname
    baseRequest.port = port
    baseRequest.protocol = 'https:'
    if (opts.host !== '') {
      baseRequest.host = opts.host
    } else {
      const { hostname = 'localhost', port } = config?.http?.listen?.https ?? {}
      baseRequest.hostname = hostname
      baseRequest.port = port
    }
  }
  baseRequest.headers.cookie = `authenticationToken=${token}`

  const call = async ({ method, params }) => {
    if (callPath.length !== 0) {
      process.stderr.write(`\n${colors.bold(`--- call #${callPath.join('.')}`)} ---\n\n`)
@@ -130,7 +131,7 @@ ${pkg.name} v${pkg.version}`
        stdout.write(inspect(JSON.parse(line), { colors: true, depth: null }))
        stdout.write('\n')
      }
    } else if (raw && typeof result === 'string') {
    } else if (opts.raw && typeof result === 'string') {
      stdout.write(result)
    } else {
      stdout.write(inspect(result, { colors: true, depth: null }))
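The new `--url` flag above leans on WHATWG `URL` parsing to carry both the endpoint and the authentication token in a single argument. A minimal sketch of that parsing (the host name and token below are made up):

```js
// Node's built-in URL class splits out the pieces the CLI needs;
// the username part of the URL becomes the authentication token
const { protocol, host, username } = new URL('https://s3cr3t@proxy.example.net:8443')
console.log(protocol) // 'https:'
console.log(host) // 'proxy.example.net:8443'
console.log(username) // 's3cr3t'
```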
@@ -1,7 +1,7 @@
{
  "private": false,
  "name": "@xen-orchestra/proxy-cli",
  "version": "0.2.0",
  "version": "0.3.1",
  "license": "AGPL-3.0-or-later",
  "description": "CLI for @xen-orchestra/proxy",
  "keywords": [
@@ -19,16 +19,16 @@
  },
  "preferGlobal": true,
  "bin": {
    "xo-proxy-cli": "./index.js"
    "xo-proxy-cli": "./index.mjs"
  },
  "engines": {
    "node": ">=14"
    "node": ">=14.13"
  },
  "dependencies": {
    "@iarna/toml": "^2.2.0",
    "@vates/read-chunk": "^0.1.2",
    "@vates/read-chunk": "^1.0.0",
    "ansi-colors": "^4.1.1",
    "app-conf": "^2.0.0",
    "app-conf": "^2.1.0",
    "content-type": "^1.0.4",
    "cson-parser": "^4.0.7",
    "getopts": "^2.2.3",
@@ -1,5 +1,7 @@
import Config from '@xen-orchestra/mixins/Config.js'
import Hooks from '@xen-orchestra/mixins/Hooks.js'
import Config from '@xen-orchestra/mixins/Config.mjs'
import Hooks from '@xen-orchestra/mixins/Hooks.mjs'
import HttpProxy from '@xen-orchestra/mixins/HttpProxy.mjs'
import SslCertificate from '@xen-orchestra/mixins/SslCertificate.mjs'
import mixin from '@xen-orchestra/mixin'
import { createDebounceResource } from '@vates/disposable/debounceResource.js'

@@ -13,7 +15,23 @@ import ReverseProxy from './mixins/reverseProxy.mjs'

export default class App {
  constructor(opts) {
    mixin(this, { Api, Appliance, Authentication, Backups, Config, Hooks, Logs, Remotes, ReverseProxy }, [opts])
    mixin(
      this,
      {
        Api,
        Appliance,
        Authentication,
        Backups,
        Config,
        Hooks,
        HttpProxy,
        Logs,
        Remotes,
        ReverseProxy,
        SslCertificate,
      },
      [opts]
    )

    const debounceResource = createDebounceResource()
    this.config.watchDuration('resourceCacheDelay', delay => {
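For readers unfamiliar with `@xen-orchestra/mixin`, a rough sketch of the contract implied by this constructor: each mixin class appears to be instantiated with the app plus the given arguments and exposed as a camelCased property (hence `this.config` just above). This is an assumption inferred from usage, not the package's documented API:

```js
// hypothetical re-implementation of the mixin() call above, for illustration only
function mixin(app, mixins, args) {
  for (const name of Object.keys(mixins)) {
    const prop = name[0].toLowerCase() + name.slice(1)
    // e.g. Config → app.config, SslCertificate → app.sslCertificate
    app[prop] = new mixins[name](app, ...args)
  }
}
```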
@@ -1,4 +1,4 @@
import { format, parse, MethodNotFound } from 'json-rpc-protocol'
import { format, parse, MethodNotFound, JsonRpcError } from 'json-rpc-protocol'
import * as errors from 'xo-common/api-errors.js'
import Ajv from 'ajv'
import asyncIteratorToStream from 'async-iterator-to-stream'
@@ -9,6 +9,7 @@ import helmet from 'koa-helmet'
import Koa from 'koa'
import once from 'lodash/once.js'
import Router from '@koa/router'
import stubTrue from 'lodash/stubTrue.js'
import Zone from 'node-zone'
import { createLogger } from '@xen-orchestra/log'

@@ -52,7 +53,7 @@ export default class Api {
    ctx.req.setTimeout(0)

    const profile = await app.authentication.findProfile({
      authenticationToken: ctx.cookies.get('authenticationToken'),
      token: ctx.cookies.get('authenticationToken'),
    })
    if (profile === undefined) {
      ctx.status = 401
@@ -77,7 +78,19 @@ export default class Api {
      const { method, params } = body
      warn('call error', { method, params, error })
      ctx.set('Content-Type', 'application/json')
      ctx.body = format.error(body.id, error)

      let e = error
      if (error != null && typeof error.toJsonRpcError !== 'function') {
        const { message, ...data } = error

        // force these entries even if they are not enumerable
        data.code = error.code
        data.stack = error.stack

        e = new JsonRpcError(error.message, undefined, data)
      }

      ctx.body = format.error(body.id, e)
      return
    }

@@ -166,14 +179,20 @@ export default class Api {
          throw errors.noSuchObject('method', name)
        }

        const { description, params = {} } = method
        return { description, name, params }
        const { description, params = {}, result = {} } = method
        return { description, name, params, result }
      },
      {
        description: 'returns the signature of an API method',
        params: {
          method: { type: 'string' },
        },
        result: {
          description: { type: 'string' },
          name: { type: 'string' },
          params: { type: 'object' },
          result: { type: 'object' },
        },
      },
    ],
  },
@@ -205,40 +224,29 @@ export default class Api {
    })
  }

  addMethod(name, method, { description, params = {} } = {}) {
  addMethod(name, method, { description, params = {}, result: resultSchema } = {}) {
    const methods = this._methods

    if (name in methods) {
      throw new Error(`API method ${name} already exists`)
    }

    const ajv = this._ajv
    const validate = ajv.compile({
      // we want additional properties to be disabled by default
      additionalProperties: params['*'] || false,
    const validateParams = this.#compileSchema(params)
    const validateResult = this.#compileSchema(resultSchema)

      properties: params,

      // we want params to be required by default unless explicitly marked so
      // we use property `optional` instead of object `required`
      required: Object.keys(params).filter(name => {
        const param = params[name]
        const required = !param.optional
        delete param.optional
        return required
      }),

      type: 'object',
    })

    const m = params => {
      if (!validate(params)) {
        throw errors.invalidParameters(validate.errors)
    const m = async params => {
      if (!validateParams(params)) {
        throw errors.invalidParameters(validateParams.errors)
      }
      return method(params)
      const result = await method(params)
      if (!validateResult(result)) {
        warn('invalid API method result', { errors: validateResult.errors, result })
      }
      return result
    }
    m.description = description
    m.params = params
    m.result = resultSchema

    methods[name] = m

@@ -289,4 +297,43 @@ export default class Api {
    }
    return fn(params)
  }

  #compileSchema(schema) {
    if (schema === undefined) {
      return stubTrue
    }

    if (schema.type === undefined) {
      schema = { type: 'object', properties: schema }
    }

    const { type } = schema
    if (Array.isArray(type) ? type.includes('object') : type === 'object') {
      const { properties = {} } = schema

      if (schema.additionalProperties === undefined) {
        const wildCard = properties['*']
        if (wildCard === undefined) {
          // we want additional properties to be disabled by default
          schema.additionalProperties = false
        } else {
          delete properties['*']
          schema.additionalProperties = wildCard
        }
      }

      // we want properties to be required by default unless explicitly marked so
      // we use property `optional` instead of object `required`
      if (schema.required === undefined) {
        schema.required = Object.keys(properties).filter(name => {
          const param = properties[name]
          const required = !param.optional
          delete param.optional
          return required
        })
      }
    }

    return this._ajv.compile(schema)
  }
}
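The `optional`/`'*'` conventions handled by `#compileSchema` are easy to misread, so here is a minimal standalone sketch of the rewrite it performs before handing the schema to Ajv (the example `params` are made up):

```js
// standalone sketch of the schema rewrite performed by #compileSchema above
function toJsonSchema(params) {
  const properties = { ...params }
  const wildCard = properties['*']
  delete properties['*']
  return {
    type: 'object',
    properties,
    // '*' opts back into additional properties; otherwise they are rejected
    additionalProperties: wildCard ?? false,
    // properties are required unless flagged `optional: true`
    required: Object.keys(properties).filter(name => {
      const { optional } = properties[name]
      delete properties[name].optional
      return !optional
    }),
  }
}

console.log(toJsonSchema({ method: { type: 'string' }, verbose: { type: 'boolean', optional: true } }))
// → { type: 'object',
//     properties: { method: { type: 'string' }, verbose: { type: 'boolean' } },
//     additionalProperties: false,
//     required: [ 'method' ] }
```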
@@ -52,7 +52,7 @@ export default class Authentication {
  }

  async findProfile(credentials) {
    if (credentials?.authenticationToken === this.#token) {
    if (credentials?.token === this.#token) {
      return new Profile()
    }
  }
@@ -22,27 +22,6 @@ disableMergeWorker = false
snapshotNameLabelTpl = '[XO Backup {job.name}] {vm.name_label}'
vhdDirectoryCompression = 'brotli'

[backups.defaultSettings]
reportWhen = 'failure'

[backups.metadata.defaultSettings]
retentionPoolMetadata = 0
retentionXoMetadata = 0

[backups.vm.defaultSettings]
bypassVdiChainsCheck = false
checkpointSnapshot = false
concurrency = 2
copyRetention = 0
deleteFirst = false
exportRetention = 0
fullInterval = 0
offlineBackup = false
offlineSnapshot = false
snapshotRetention = 0
timeout = 0
vmTimeout = 0

# This is a work-around.
#
# See https://github.com/vatesfr/xen-orchestra/pull/4674
@@ -81,11 +60,6 @@ timeout = 600e3
disableFileRemotes = true

[xapiOptions]
# VDIs with the `[NOBAK]` flag can be ignored while snapshotting a halted VM.
#
# This is disabled by default for the time being but will be turned on after enough testing.
ignoreNobakVdis = false

maxUncoalescedVdis = 1
watchEvents = ['network', 'PIF', 'pool', 'SR', 'task', 'VBD', 'VDI', 'VIF', 'VM']
@@ -6,6 +6,7 @@ import getopts from 'getopts'
import pRetry from 'promise-toolbox/retry'
import { catchGlobalErrors } from '@xen-orchestra/log/configure.js'
import { create as createServer } from 'http-server-plus'
import { createCachedLookup } from '@vates/cached-dns.lookup'
import { createLogger } from '@xen-orchestra/log'
import { createSecureServer } from 'http2'
import { genSelfSignedCert } from '@xen-orchestra/self-signed'
@@ -15,6 +16,8 @@ import { load as loadConfig } from 'app-conf'

catchGlobalErrors(createLogger('xo:proxy'))

createCachedLookup().patchGlobal()

const { fatal, info, warn } = createLogger('xo:proxy:bootstrap')

const APP_DIR = new URL('.', import.meta.url).pathname
@@ -53,11 +56,32 @@ ${APP_NAME} v${APP_VERSION}
    createSecureServer: opts => createSecureServer({ ...opts, allowHTTP1: true }),
  })

  forOwn(config.http.listen, async ({ autoCert, cert, key, ...opts }) => {
  forOwn(config.http.listen, async ({ autoCert, cert, key, ...opts }, configKey) => {
    const useAcme = autoCert && opts.acmeDomain !== undefined

    // don't pass these entries to httpServer.listen(opts)
    for (const key of Object.keys(opts).filter(_ => _.startsWith('acme'))) {
      delete opts[key]
    }

    try {
      const niceAddress = await pRetry(
        async () => {
          if (cert !== undefined && key !== undefined) {
      let niceAddress
      if (cert !== undefined && key !== undefined) {
        if (useAcme) {
          opts.SNICallback = async (serverName, callback) => {
            try {
              // injected by mixins/SslCertificate
              const secureContext = await httpServer.getSecureContext(serverName, configKey, opts.cert, opts.key)
              callback(null, secureContext)
            } catch (error) {
              warn(error)
              callback(error, null)
            }
          }
        }

        niceAddress = await pRetry(
          async () => {
            try {
              opts.cert = fse.readFileSync(cert)
              opts.key = fse.readFileSync(key)
@@ -73,20 +97,22 @@ ${APP_NAME} v${APP_VERSION}
              opts.cert = pems.cert
              opts.key = pems.key
            }
          }

          return httpServer.listen(opts)
        },
        {
          tries: 2,
          when: e => autoCert && e.code === 'ERR_SSL_EE_KEY_TOO_SMALL',
          onRetry: () => {
            warn('deleting invalid certificate')
            fse.unlinkSync(cert)
            fse.unlinkSync(key)
            return httpServer.listen(opts)
          },
        }
      )
          {
            tries: 2,
            when: e => autoCert && e.code === 'ERR_SSL_EE_KEY_TOO_SMALL',
            onRetry: () => {
              warn('deleting invalid certificate')
              fse.unlinkSync(cert)
              fse.unlinkSync(key)
            },
          }
        )
      } else {
        niceAddress = await httpServer.listen(opts)
      }

      info(`Web server listening on ${niceAddress}`)
    } catch (error) {
@@ -143,6 +169,7 @@ ${APP_NAME} v${APP_VERSION}
    process.on(signal, () => {
      if (alreadyCalled) {
        warn('forced exit')
        // eslint-disable-next-line n/no-process-exit
        process.exit(1)
      }
      alreadyCalled = true
@@ -161,6 +188,7 @@ main(process.argv.slice(2)).then(
  error => {
    fatal(error)

    // eslint-disable-next-line n/no-process-exit
    process.exit(1)
  }
)
@@ -1,7 +1,7 @@
{
  "private": true,
  "name": "@xen-orchestra/proxy",
  "version": "0.20.1",
  "version": "0.26.0",
  "license": "AGPL-3.0-or-later",
  "description": "XO Proxy used to remotely execute backup jobs",
  "keywords": [
@@ -26,26 +26,27 @@
  },
  "dependencies": {
    "@iarna/toml": "^2.2.0",
    "@koa/router": "^10.0.0",
    "@koa/router": "^12.0.0",
    "@vates/cached-dns.lookup": "^1.0.0",
    "@vates/compose": "^2.1.0",
    "@vates/decorate-with": "^2.0.0",
    "@vates/disposable": "^0.1.1",
    "@xen-orchestra/async-map": "^0.1.2",
    "@xen-orchestra/backups": "^0.21.0",
    "@xen-orchestra/fs": "^1.0.0",
    "@xen-orchestra/backups": "^0.27.4",
    "@xen-orchestra/fs": "^3.0.0",
    "@xen-orchestra/log": "^0.3.0",
    "@xen-orchestra/mixin": "^0.1.0",
    "@xen-orchestra/mixins": "^0.2.0",
    "@xen-orchestra/self-signed": "^0.1.0",
    "@xen-orchestra/xapi": "^0.10.0",
    "@xen-orchestra/mixins": "^0.7.1",
    "@xen-orchestra/self-signed": "^0.1.3",
    "@xen-orchestra/xapi": "^1.4.2",
    "ajv": "^8.0.3",
    "app-conf": "^2.0.0",
    "app-conf": "^2.1.0",
    "async-iterator-to-stream": "^1.1.0",
    "fs-extra": "^10.0.0",
    "get-stream": "^6.0.0",
    "getopts": "^2.2.3",
    "golike-defer": "^0.5.1",
    "http-server-plus": "^0.11.0",
    "http-server-plus": "^0.11.1",
    "http2-proxy": "^5.0.53",
    "json-rpc-protocol": "^0.13.1",
    "jsonrpc-websocket-client": "^0.7.2",
@@ -59,7 +60,7 @@
    "source-map-support": "^0.5.16",
    "stoppable": "^1.0.6",
    "xdg-basedir": "^5.1.0",
    "xen-api": "^1.1.0",
    "xen-api": "^1.2.2",
    "xo-common": "^0.8.0"
  },
  "devDependencies": {
@@ -2,22 +2,23 @@

const { execFile } = require('child_process')

const openssl = (cmd, args, { input, ...opts } = {}) =>
const RE =
  /^(-----BEGIN PRIVATE KEY-----.+-----END PRIVATE KEY-----\n)(-----BEGIN CERTIFICATE-----.+-----END CERTIFICATE-----\n)$/s
exports.genSelfSignedCert = async ({ days = 360 } = {}) =>
  new Promise((resolve, reject) => {
    const child = execFile('openssl', [cmd, ...args], opts, (error, stdout) =>
      error != null ? reject(error) : resolve(stdout)
    execFile(
      'openssl',
      ['req', '-batch', '-new', '-x509', '-days', String(days), '-nodes', '-newkey', 'rsa:2048', '-keyout', '-'],
      (error, stdout) => {
        if (error != null) {
          return reject(error)
        }
        const matches = RE.exec(stdout)
        if (matches === null) {
          return reject(new Error('stdout does not match regular expression'))
        }
        const [, key, cert] = matches
        resolve({ cert, key })
      }
    )
    if (input !== undefined) {
      child.stdin.end(input)
    }
  })

exports.genSelfSignedCert = async ({ days = 360 } = {}) => {
  const key = await openssl('genrsa', ['2048'])
  return {
    cert: await openssl('req', ['-batch', '-new', '-key', '-', '-x509', '-days', String(days), '-nodes'], {
      input: key,
    }),
    key,
  }
}
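The rewrite above generates the key and the certificate in a single `openssl req -newkey` invocation and splits the two PEM blocks out of stdout with the `RE` regular expression. Calling it is unchanged; a minimal usage sketch:

```js
const { genSelfSignedCert } = require('@xen-orchestra/self-signed')

// both properties are PEM strings, ready to be passed to an HTTPS/HTTP2 server
genSelfSignedCert({ days: 30 }).then(({ cert, key }) => {
  console.log(cert.startsWith('-----BEGIN CERTIFICATE-----')) // true
  console.log(key.startsWith('-----BEGIN PRIVATE KEY-----')) // true
})
```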
@@ -9,7 +9,7 @@
    "type": "git",
    "url": "https://github.com/vatesfr/xen-orchestra.git"
  },
  "version": "0.1.0",
  "version": "0.1.3",
  "engines": {
    "node": ">=8.10"
  },
@@ -1,3 +0,0 @@
'use strict'

module.exports = require('../../@xen-orchestra/babel-config')(require('./package.json'))
@@ -1 +0,0 @@
../../scripts/babel-eslintrc.js
@@ -1,8 +1,10 @@
import escapeRegExp from 'lodash/escapeRegExp'
'use strict'

const escapeRegExp = require('lodash/escapeRegExp')

const compareLengthDesc = (a, b) => b.length - a.length

export function compileTemplate(pattern, rules) {
exports.compileTemplate = function compileTemplate(pattern, rules) {
  const matches = Object.keys(rules).sort(compareLengthDesc).map(escapeRegExp).join('|')
  const regExp = new RegExp(`\\\\(?:\\\\|${matches})|${matches}`, 'g')
  return (...params) =>
@@ -1,5 +1,8 @@
/* eslint-env jest */
import { compileTemplate } from '.'

'use strict'

const { compileTemplate } = require('.')

it("correctly replaces the template's variables", () => {
  const replacer = compileTemplate('{property}_\\{property}_\\\\{property}_{constant}_%_FOO', {
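For context, `compileTemplate` builds one regular expression out of the rule keys and returns a replacer. A small usage sketch, assuming this is the `@xen-orchestra/template` package and that a rule value may be either a constant or a function of the replacer's arguments (both are assumptions inferred from the signature and test above; the rules themselves are invented):

```js
const { compileTemplate } = require('@xen-orchestra/template')

// `\\{name}` in a pattern would escape a rule key instead of replacing it
const replacer = compileTemplate('{name}_{id}_%', {
  '{name}': vm => vm.name_label,
  '{id}': vm => vm.id,
  '%': (vm, n) => n,
})

console.log(replacer({ name_label: 'web', id: 42 }, 7)) // 'web_42_7'
```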
@@ -14,31 +14,13 @@
    "name": "Vates SAS",
    "url": "https://vates.fr"
  },
  "preferGlobal": false,
  "main": "dist/",
  "browserslist": [
    ">2%"
  ],
  "engines": {
    "node": ">=6"
  },
  "devDependencies": {
    "@babel/cli": "^7.0.0",
    "@babel/core": "^7.0.0",
    "@babel/preset-env": "^7.0.0",
    "cross-env": "^7.0.2",
    "rimraf": "^3.0.0"
  },
  "scripts": {
    "build": "cross-env NODE_ENV=production babel --source-maps --out-dir=dist/ src/",
    "clean": "rimraf dist/",
    "dev": "cross-env NODE_ENV=development babel --watch --source-maps --out-dir=dist/ src/",
    "prebuild": "yarn run clean",
    "predev": "yarn run prebuild",
    "prepublishOnly": "yarn run build",
    "postversion": "npm publish --access public"
  },
  "dependencies": {
    "lodash": "^4.17.15"
  },
  "scripts": {
    "postversion": "npm publish --access public"
  }
}
@@ -1,6 +1,6 @@
{
  "name": "@xen-orchestra/upload-ova",
  "version": "0.1.4",
  "version": "0.1.5",
  "license": "AGPL-3.0-or-later",
  "description": "Basic CLI to upload ova files to Xen-Orchestra",
  "keywords": [
@@ -43,7 +43,7 @@
    "pw": "^0.0.4",
    "xdg-basedir": "^4.0.0",
    "xo-lib": "^0.11.1",
    "xo-vmdk-to-vhd": "^2.2.0"
    "xo-vmdk-to-vhd": "^2.4.3"
  },
  "devDependencies": {
    "@babel/cli": "^7.0.0",
1 @xen-orchestra/xapi-typegen/.npmignore (symbolic link)
@@ -0,0 +1 @@
../../scripts/npmignore

90 @xen-orchestra/xapi-typegen/_genTs.mjs (new file)
@@ -0,0 +1,90 @@
let indentLevel = 0

function indent() {
  return ' '.repeat(indentLevel)
}

function quoteId(name) {
  return /^[a-z0-9_]+$/i.test(name) ? name : JSON.stringify(name)
}

function genType(type, schema) {
  if (type === 'array' && schema.items !== undefined) {
    const { items } = schema
    if (Array.isArray(items)) {
      if (items.length !== 0) {
        return ['[' + items.map(genTs).join(', ') + ']']
      }
    } else {
      const { type } = items
      if (type !== undefined && type.length !== 0) {
        return genTs(items, true) + '[]'
      } else {
        return 'unknown[]'
      }
    }
  }

  if (type !== 'object') {
    return type
  }

  const code = []

  const { title } = schema
  const isInterface = title !== undefined
  if (isInterface) {
    code.push('interface ', title, ' ')
  }
  const fieldDelimiter = (isInterface ? ';' : ',') + '\n'

  const { additionalProperties, properties } = schema
  const hasAdditionalProperties = additionalProperties?.type !== undefined
  const propertiesKeys = Object.keys(properties ?? {})

  if (!hasAdditionalProperties && propertiesKeys.length === 0) {
    code.push('{}')
    return code.join('')
  }

  code.push('{\n')
  ++indentLevel

  for (const name of propertiesKeys.sort()) {
    const schema = properties[name]
    code.push(indent(), quoteId(name))
    if (schema.optional) {
      code.push('?')
    }
    code.push(': ')

    code.push(genTs(schema))

    code.push(fieldDelimiter)
  }

  if (hasAdditionalProperties) {
    code.push(indent(), '[key: string]: ', genTs(additionalProperties), fieldDelimiter)
  }

  --indentLevel
  code.push(indent(), '}')

  return code.join('')
}

export function genTs(schema, groupMultiple = false) {
  let { type } = schema
  if (Array.isArray(type)) {
    if (type.length !== 1) {
      const code = type
        .sort()
        .map(type => genType(type, schema))
        .join(' | ')

      return groupMultiple ? '(' + code + ')' : code
    }
    type = type[0]
  }
  return genType(type, schema)
}
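Tracing `genTs` by hand on a small schema shows the intended output shape; the input below is made up, and the exact indentation of the emitted fields depends on the `indent()` helper:

```js
import { genTs } from './_genTs.mjs'

console.log(
  genTs({
    title: 'Vm',
    type: 'object',
    properties: {
      name_label: { type: 'string' },
      tags: { type: 'array', items: { type: 'string' } },
      VCPUs_max: { type: 'number', optional: true },
    },
  })
)
// interface Vm {
//   VCPUs_max?: number;
//   name_label: string;
//   tags: string[];
// }
```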
110 @xen-orchestra/xapi-typegen/_updateSchema.mjs (new file)
@@ -0,0 +1,110 @@
const JSON_TYPES = {
  __proto__: null,

  array: true,
  boolean: true,
  null: true,
  number: true,
  object: true,
  string: true,
}

function addType(schema, type) {
  const previous = schema.type
  if (previous === undefined) {
    schema.type = type
  } else if (Array.isArray(previous)) {
    if (previous.indexOf(type) === -1) {
      previous.push(type)
    }
  } else if (previous !== type) {
    schema.type = [previous, type]
  }
}

function getType(value) {
  let type = typeof value
  if (type === 'object') {
    if (value === null) {
      type = 'null'
    } else if (Array.isArray(value)) {
      type = 'array'
    }
  }

  if (type in JSON_TYPES) {
    return type
  }
  throw new TypeError('unsupported type: ' + type)
}

// like Math.max but v1 can be undefined
const max = (v1, v2) => (v1 > v2 ? v1 : v2)

// like Math.min but v1 can be undefined
const min = (v1, v2) => (v1 < v2 ? v1 : v2)

function updateSchema_(path, value, schema = { __proto__: null }, getOption) {
  if (value === undefined) {
    schema.optional = true
  } else {
    const type = getType(value)
    addType(schema, type)

    if (type === 'array') {
      const items = schema.items ?? (schema.items = { __proto__: null })
      const pathLength = path.length
      if (Array.isArray(items)) {
        for (let i = 0, n = value.length; i < n; ++i) {
          path[pathLength] = i
          items[i] = updateSchema_(path, value[i], items[i], getOption)
        }
      } else {
        for (let i = 0, n = value.length; i < n; ++i) {
          path[pathLength] = i
          updateSchema_(path, value[i], items, getOption)
        }
      }
      path.length = pathLength
    } else if (type === 'number') {
      if (getOption('computeMinimum', path)) {
        schema.minimum = min(schema.minimum, value)
      }
      if (getOption('computeMaximum', path)) {
        schema.maximum = max(schema.maximum, value)
      }
    } else if (type === 'object') {
      const pathLength = path.length
      const { additionalProperties } = schema
      if (typeof additionalProperties === 'object') {
        for (const key of Object.keys(value)) {
          path[pathLength] = key
          updateSchema_(path, value[key], additionalProperties, getOption)
        }
      } else {
        const properties = schema.properties ?? (schema.properties = { __proto__: null })

        // handle missing properties
        for (const key of Object.keys(properties)) {
          if (!Object.hasOwn(value, key)) {
            properties[key].optional = true
          }
        }

        // handle existing properties
        for (const key of Object.keys(value)) {
          path[pathLength] = key
          properties[key] = updateSchema_(path, value[key], properties[key], getOption)
        }
      }
      path.length = pathLength
    }
  }

  return schema
}

export function updateSchema(value, schema, options) {
  const getOption = options == null ? Function.prototype : typeof options === 'object' ? opt => options[opt] : options
  return updateSchema_([], value, schema, getOption)
}
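`updateSchema` infers a JSON-Schema-like description by folding sample values into an accumulator; a property absent from a later sample becomes `optional`. A quick sketch with invented samples:

```js
import { updateSchema } from './_updateSchema.mjs'

let schema
schema = updateSchema({ name: 'web', cpus: 2 }, schema)
schema = updateSchema({ name: 'db' }, schema)

console.log(JSON.stringify(schema))
// {"type":"object","properties":{"name":{"type":"string"},"cpus":{"type":"number","optional":true}}}
```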
65 @xen-orchestra/xapi-typegen/cli.mjs (new file)
@@ -0,0 +1,65 @@
import { readFileSync } from 'fs'

import { genTs } from './_genTs.mjs'
import { updateSchema } from './_updateSchema.mjs'

const upperCamelCase = s =>
  s
    .split(/[^a-zA-Z]+/)
    .map(s => s[0].toUpperCase() + s.slice(1).toLocaleLowerCase())
    .join('')

const objects = JSON.parse(readFileSync('./objects.json'))
for (const type of Object.keys(objects).sort()) {
  const schema = {
    __proto__: null,

    title: upperCamelCase(type),
    type: 'object',
    properties: {
      assigned_ips: {
        additionalProperties: {},
      },
      bios_strings: {
        additionalProperties: {},
      },
      features: {
        additionalProperties: {},
      },
      license_params: {
        additionalProperties: {},
      },
      networks: {
        additionalProperties: {},
      },
      other_config: {
        additionalProperties: {},
      },
      other: {
        additionalProperties: {},
      },
      restrictions: {
        additionalProperties: {},
      },
      sm_config: {
        additionalProperties: {},
      },
      xenstore_data: {
        additionalProperties: {},
      },
      VCPUs_utilisation: {
        additionalProperties: {},
      },
    },
  }
  for (const object of Object.values(objects[type])) {
    updateSchema(object, schema)
  }
  for (const name of Object.keys(schema.properties)) {
    if (schema.properties[name].type === undefined) {
      delete schema.properties[name]
    }
  }

  console.log(genTs(schema))
}
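The CLI expects a local `objects.json` with XAPI records grouped by type; that shape is inferred from the loop above (keys of each group are ignored, only the values are sampled), and the sample below is invented:

```js
import { writeFileSync } from 'fs'

// records grouped by type, keyed by opaque ref
writeFileSync(
  './objects.json',
  JSON.stringify({
    VM: {
      'OpaqueRef:1': { name_label: 'vm1', VCPUs_max: 2 },
      'OpaqueRef:2': { name_label: 'vm2' },
    },
  })
)
// then: node cli.mjs > xapi-types.d.ts
```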
20 @xen-orchestra/xapi-typegen/package.json (new file)
@@ -0,0 +1,20 @@
{
  "private": true,
  "name": "@xen-orchestra/xapi-typegen",
  "homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/xapi-typegen",
  "bugs": "https://github.com/vatesfr/xen-orchestra/issues",
  "repository": {
    "directory": "@xen-orchestra/xapi-typegen",
    "type": "git",
    "url": "https://github.com/vatesfr/xen-orchestra.git"
  },
  "author": {
    "name": "Vates SAS",
    "url": "https://vates.fr"
  },
  "license": "AGPL-3.0-or-later",
  "version": "0.0.0",
  "engines": {
    "node": ">=16.9"
  }
}
9 @xen-orchestra/xapi/_AggregateError.js (new file)
@@ -0,0 +1,9 @@
'use strict'

// TODO: remove when Node >=15.0
module.exports = class AggregateError extends Error {
  constructor(errors, message) {
    super(message)
    this.errors = errors
  }
}
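This shim mirrors the built-in `AggregateError` that ships with Node 15+; callers collect independent failures and throw them as one. A minimal sketch:

```js
const AggregateError = require('./_AggregateError.js')

const errors = []
for (const task of [() => {}, () => { throw new Error('boom') }]) {
  try {
    task()
  } catch (error) {
    errors.push(error)
  }
}
if (errors.length !== 0) {
  // callers can inspect `.errors` for the individual failures
  throw new AggregateError(errors, 'some tasks failed')
}
```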
@@ -4,5 +4,5 @@

const { Xapi } = require('./')
require('xen-api/dist/cli.js')
  .default(opts => new Xapi({ ignoreNobakVdis: true, ...opts }))
  .default(opts => new Xapi(opts))
  .catch(console.error.bind(console, 'FATAL'))
@@ -101,20 +101,16 @@ function removeWatcher(predicate, cb) {
class Xapi extends Base {
  constructor({
    callRetryWhenTooManyPendingTasks = { delay: 5e3, tries: 10 },
    ignoreNobakVdis,
    maxUncoalescedVdis,
    vdiDestroyRetryWhenInUse = { delay: 5e3, tries: 10 },
    ...opts
  }) {
    assert.notStrictEqual(ignoreNobakVdis, undefined)

    super(opts)
    this._callRetryWhenTooManyPendingTasks = {
      ...callRetryWhenTooManyPendingTasks,
      onRetry,
      when: { code: 'TOO_MANY_PENDING_TASKS' },
    }
    this._ignoreNobakVdis = ignoreNobakVdis
    this._maxUncoalescedVdis = maxUncoalescedVdis
    this._vdiDestroyRetryWhenInUse = {
      ...vdiDestroyRetryWhenInUse,
@@ -191,7 +187,30 @@ class Xapi extends Base {
    }
    return removeWatcher.bind(watchers, predicate, cb)
  }

  // wait for an object to be in a specified state

  waitObjectState(refOrUuid, predicate, { timeout } = {}) {
    return new Promise((resolve, reject) => {
      let timeoutHandle
      const stop = this.watchObject(refOrUuid, object => {
        if (predicate(object)) {
          clearTimeout(timeoutHandle)
          stop()
          resolve(object)
        }
      })

      if (timeout !== undefined) {
        timeoutHandle = setTimeout(() => {
          stop()
          reject(new Error(`waitObjectState: timeout reached before ${refOrUuid} in expected state`))
        }, timeout)
      }
    })
  }
}

function mixin(mixins) {
  const xapiProto = Xapi.prototype
  const { defineProperties, getOwnPropertyDescriptor, getOwnPropertyNames } = Object
@@ -211,8 +230,9 @@ function mixin(mixins) {
  defineProperties(xapiProto, descriptors)
}
mixin({
  task: require('./task.js'),
  host: require('./host.js'),
  SR: require('./sr.js'),
  task: require('./task.js'),
  VBD: require('./vbd.js'),
  VDI: require('./vdi.js'),
  VIF: require('./vif.js'),
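`waitObjectState` turns the watcher mechanism into a one-shot promise. A typical call, with an illustrative predicate and timeout:

```js
// resolves with the VM record once the predicate matches,
// rejects if it has not matched within 60 seconds
const vm = await xapi.waitObjectState(vmUuid, vm => vm.power_state === 'Running', { timeout: 60e3 })
console.log(vm.name_label, 'is running')
```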
@@ -1,6 +1,6 @@
{
  "name": "@xen-orchestra/xapi",
  "version": "0.10.0",
  "version": "1.4.2",
  "homepage": "https://github.com/vatesfr/xen-orchestra/tree/master/@xen-orchestra/xapi",
  "bugs": "https://github.com/vatesfr/xen-orchestra/issues",
  "repository": {
@@ -15,7 +15,7 @@
    "node": ">=14"
  },
  "peerDependencies": {
    "xen-api": "^1.1.0"
    "xen-api": "^1.2.2"
  },
  "scripts": {
    "postversion": "npm publish --access public"
@@ -26,8 +26,10 @@
    "@xen-orchestra/log": "^0.3.0",
    "d3-time-format": "^3.0.0",
    "golike-defer": "^0.5.1",
    "json-rpc-protocol": "^0.13.2",
    "lodash": "^4.17.15",
    "promise-toolbox": "^0.21.0",
    "vhd-lib": "^4.0.0",
    "xo-common": "^0.8.0"
  },
  "private": false,
179 @xen-orchestra/xapi/sr.js (new file)
@@ -0,0 +1,179 @@
'use strict'

const { asyncMap, asyncMapSettled } = require('@xen-orchestra/async-map')
const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { incorrectState } = require('xo-common/api-errors')
const { VDI_FORMAT_VHD } = require('./index.js')
const assert = require('node:assert').strict
const peekFooterFromStream = require('vhd-lib/peekFooterFromVhdStream')

const AggregateError = require('./_AggregateError.js')

const { warn } = require('@xen-orchestra/log').createLogger('xo:xapi:sr')

const OC_MAINTENANCE = 'xo:maintenanceState'

class Sr {
  async create({
    content_type = 'user', // recommended by Citrix
    device_config,
    host,
    name_description = '',
    name_label,
    physical_size = 0,
    shared,
    sm_config = {},
    type,
  }) {
    const ref = await this.call(
      'SR.create',
      host,
      device_config,
      physical_size,
      name_label,
      name_description,
      type,
      content_type,
      shared,
      sm_config
    )

    // https://developer-docs.citrix.com/projects/citrix-hypervisor-sdk/en/latest/xc-api-extensions/#sr
    this.setFieldEntry('SR', ref, 'other_config', 'auto-scan', 'true').catch(warn)

    return ref
  }

  // Switch the SR to maintenance mode:
  // - shutdown all running VMs with a VDI on this SR
  //   - their UUID is saved into SR.other_config[OC_MAINTENANCE].shutdownVms
  //   - clean shutdown is attempted, and falls back to a hard shutdown
  // - unplug all connected hosts from this SR
  async enableMaintenanceMode($defer, ref, { vmsToShutdown = [] } = {}) {
    const state = { timestamp: Date.now() }

    // will throw if already in maintenance mode
    await this.call('SR.add_to_other_config', ref, OC_MAINTENANCE, JSON.stringify(state))

    await $defer.onFailure.call(this, 'call', 'SR.remove_from_other_config', ref, OC_MAINTENANCE)

    const runningVms = new Map()
    const handleVbd = async ref => {
      const vmRef = await this.getField('VBD', ref, 'VM')
      if (!runningVms.has(vmRef)) {
        const power_state = await this.getField('VM', vmRef, 'power_state')
        const isPaused = power_state === 'Paused'
        if (isPaused || power_state === 'Running') {
          runningVms.set(vmRef, isPaused)
        }
      }
    }
    await asyncMap(await this.getField('SR', ref, 'VDIs'), async ref => {
      await asyncMap(await this.getField('VDI', ref, 'VBDs'), handleVbd)
    })

    {
      const runningVmUuids = await asyncMap(runningVms.keys(), ref => this.getField('VM', ref, 'uuid'))

      const set = new Set(vmsToShutdown)
      for (const vmUuid of runningVmUuids) {
        if (!set.has(vmUuid)) {
          throw incorrectState({
            actual: vmsToShutdown,
            expected: runningVmUuids,
            property: 'vmsToShutdown',
          })
        }
      }
    }

    state.shutdownVms = {}

    await asyncMapSettled(runningVms, async ([ref, isPaused]) => {
      state.shutdownVms[await this.getField('VM', ref, 'uuid')] = isPaused

      try {
        await this.callAsync('VM.clean_shutdown', ref)
      } catch (error) {
        warn('SR_enableMaintenanceMode, VM clean shutdown', { error })
        await this.callAsync('VM.hard_shutdown', ref)
      }

      $defer.onFailure.call(this, 'callAsync', 'VM.start', ref, isPaused, true)
    })

    state.unpluggedPbds = []
    await asyncMapSettled(await this.getField('SR', ref, 'PBDs'), async ref => {
      if (await this.getField('PBD', ref, 'currently_attached')) {
        state.unpluggedPbds.push(await this.getField('PBD', ref, 'uuid'))

        await this.callAsync('PBD.unplug', ref)

        $defer.onFailure.call(this, 'callAsync', 'PBD.plug', ref)
      }
    })

    await this.setFieldEntry('SR', ref, 'other_config', OC_MAINTENANCE, JSON.stringify(state))
  }

  // this method is best effort and will not stop on first error
  async disableMaintenanceMode(ref) {
    const state = JSON.parse((await this.getField('SR', ref, 'other_config'))[OC_MAINTENANCE])

    // will throw if not in maintenance mode
    await this.call('SR.remove_from_other_config', ref, OC_MAINTENANCE)

    const errors = []

    await asyncMap(state.unpluggedPbds, async uuid => {
      try {
        await this.callAsync('PBD.plug', await this.call('PBD.get_by_uuid', uuid))
      } catch (error) {
        errors.push(error)
      }
    })

    await asyncMap(Object.entries(state.shutdownVms), async ([uuid, isPaused]) => {
      try {
        await this.callAsync('VM.start', await this.call('VM.get_by_uuid', uuid), isPaused, true)
      } catch (error) {
        errors.push(error)
      }
    })

    if (errors.length !== 0) {
      throw new AggregateError(errors)
    }
  }

  async importVdi(
    $defer,
    ref,
    stream,
    {
      format = VDI_FORMAT_VHD,
      name_label = '[XO] Imported disk - ' + new Date().toISOString(),
      virtual_size,
      ...vdiCreateOpts
    } = {}
  ) {
    if (virtual_size === undefined) {
      if (format === VDI_FORMAT_VHD) {
        const footer = await peekFooterFromStream(stream)
        virtual_size = footer.currentSize
      } else {
        virtual_size = stream.length
        assert.notEqual(virtual_size, undefined)
      }
    }

    const vdiRef = await this.VDI_create({ ...vdiCreateOpts, name_label, SR: ref, virtual_size })
    $defer.onFailure.call(this, 'callAsync', 'VDI.destroy', vdiRef)
    await this.VDI_importContent(vdiRef, stream, { format })
    return vdiRef
  }
}
module.exports = Sr

decorateClass(Sr, { enableMaintenanceMode: defer, importVdi: defer })
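Since this mixin is merged into the `Xapi` prototype under `TYPE_method` names (the convention visible in the `VDI_create`/`VBD_destroy` calls elsewhere in this diff), usage presumably looks like the following sketch; the `defer` decorator injects `$defer`, so callers do not pass it, and the UUID is a placeholder:

```js
// put an SR into maintenance mode, acknowledging the VMs that will be shut down
await xapi.SR_enableMaintenanceMode(srRef, { vmsToShutdown: ['b55a...uuid'] })

// ...perform the maintenance work...

// best effort: replugs PBDs and restarts the VMs recorded in other_config
await xapi.SR_disableMaintenanceMode(srRef)
```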
@@ -6,6 +6,8 @@ const { Ref } = require('xen-api')

const isVmRunning = require('./_isVmRunning.js')

const { warn } = require('@xen-orchestra/log').createLogger('xo:xapi:vbd')

const noop = Function.prototype

module.exports = class Vbd {
@@ -66,8 +68,10 @@ module.exports = class Vbd {
    })

    if (isVmRunning(powerState)) {
      await this.callAsync('VBD.plug', vbdRef)
      this.callAsync('VBD.plug', vbdRef).catch(warn)
    }

    return vbdRef
  }

  async unplug(ref) {
@@ -1,5 +1,6 @@
'use strict'

const assert = require('node:assert').strict
const CancelToken = require('promise-toolbox/CancelToken')
const pCatch = require('promise-toolbox/catch')
const pRetry = require('promise-toolbox/retry')
@@ -30,8 +31,7 @@ class Vdi {
      other_config = {},
      read_only = false,
      sharable = false,
      sm_config,
      SR,
      SR = this.pool.default_SR,
      tags,
      type = 'user',
      virtual_size,
@@ -39,10 +39,10 @@ class Vdi {
    },
    {
      // blindly copying `sm_config` from another VDI can create problems,
      // therefore it is ignored by default by this method
      // therefore it should be passed explicitly
      //
      // see https://github.com/vatesfr/xen-orchestra/issues/4482
      setSmConfig = false,
      sm_config,
    } = {}
  ) {
    return this.call('VDI.create', {
@@ -51,7 +51,7 @@ class Vdi {
      other_config,
      read_only,
      sharable,
      sm_config: setSmConfig ? sm_config : undefined,
      sm_config,
      SR,
      tags,
      type,
@@ -87,6 +87,8 @@ class Vdi {
  }

  async importContent(ref, stream, { cancelToken = CancelToken.none, format }) {
    assert.notEqual(format, undefined)

    if (stream.length === undefined) {
      throw new Error('Trying to import a VDI without a length field. Please report this error to Xen Orchestra.')
    }
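Two behavioural consequences of this hunk are worth spelling out: `sm_config` moves from an option gated by `setSmConfig` to a plain first-argument field, and `SR` now defaults to the pool's default SR. A hedged usage sketch (illustrative values):

```js
// with the new default, the SR can be omitted entirely:
// the VDI lands on this.pool.default_SR
const vdiRef = await xapi.VDI_create({
  name_label: 'scratch disk',
  virtual_size: 10 * 1024 ** 3, // 10 GiB
})
```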
@@ -11,7 +11,8 @@ const { asyncMap } = require('@xen-orchestra/async-map')
const { createLogger } = require('@xen-orchestra/log')
const { decorateClass } = require('@vates/decorate-with')
const { defer } = require('golike-defer')
const { incorrectState } = require('xo-common/api-errors.js')
const { incorrectState, forbiddenOperation } = require('xo-common/api-errors.js')
const { JsonRpcError } = require('json-rpc-protocol')
const { Ref } = require('xen-api')

const extractOpaqueRef = require('./_extractOpaqueRef.js')
@@ -45,6 +46,21 @@ const cleanBiosStrings = biosStrings => {
  }
}

async function listNobakVbds(xapi, vbdRefs) {
  const vbds = []
  await asyncMap(vbdRefs, async vbdRef => {
    const vbd = await xapi.getRecord('VBD', vbdRef)
    if (
      vbd.type === 'Disk' &&
      Ref.isNotEmpty(vbd.VDI) &&
      (await xapi.getField('VDI', vbd.VDI, 'name_label')).startsWith('[NOBAK]')
    ) {
      vbds.push(vbd)
    }
  })
  return vbds
}

async function safeGetRecord(xapi, type, ref) {
  try {
    return await xapi.getRecord(type, ref)
@@ -129,9 +145,25 @@ class Vm {
    }
  }

  async checkpoint(vmRef, { cancelToken = CancelToken.none, name_label } = {}) {
  async checkpoint($defer, vmRef, { cancelToken = CancelToken.none, ignoreNobakVdis = false, name_label } = {}) {
    const vm = await this.getRecord('VM', vmRef)

    let destroyNobakVdis = false

    if (ignoreNobakVdis) {
      if (vm.power_state === 'Halted') {
        await asyncMap(await listNobakVbds(this, vm.VBDs), async vbd => {
          await this.VBD_destroy(vbd.$ref)
          $defer.call(this, 'VBD_create', vbd)
        })
      } else {
        // cannot unplug VBDs on Running, Paused and Suspended VMs
        destroyNobakVdis = true
      }
    }

    if (name_label === undefined) {
      name_label = await this.getField('VM', vmRef, 'name_label')
      name_label = vm.name_label
    }
    try {
      const ref = await this.callAsync(cancelToken, 'VM.checkpoint', vmRef, name_label).then(extractOpaqueRef)
@@ -148,6 +180,12 @@ class Vm {
        noop
      )

      if (destroyNobakVdis) {
        await asyncMap(await listNobakVbds(this, await this.getField('VM', ref, 'VBDs')), vbd =>
          this.VDI_destroy(vbd.VDI)
        )
      }

      return ref
    } catch (error) {
      if (error.code === 'VM_BAD_POWER_STATE') {
@@ -306,7 +344,13 @@ class Vm {
    const vm = await this.getRecord('VM', vmRef)

    if (!bypassBlockedOperation && 'destroy' in vm.blocked_operations) {
      throw new Error('destroy is blocked')
      throw forbiddenOperation(
        `destroy is blocked: ${
          vm.blocked_operations.destroy === 'true'
            ? 'protected from accidental deletion'
            : vm.blocked_operations.destroy
        }`
      )
    }

    if (!forceDeleteDefaultTemplate && isDefaultTemplate(vm)) {
@@ -466,6 +510,22 @@ class Vm {
      }
      return ref
    } catch (error) {
      if (
        // xxhash is the new form of consistency hashing in CH 8.1, which uses a faster,
        // more efficient hashing algorithm to generate the consistency checks
        // in order to support larger files without the consistency checking process taking an incredibly long time
        error.code === 'IMPORT_ERROR' &&
        error.params?.some(
          param =>
            param.includes('INTERNAL_ERROR') &&
            param.includes('Expected to find an inline checksum') &&
            param.includes('.xxhash')
        )
      ) {
        warn('import', { error })
        throw new JsonRpcError('Importing this VM requires XCP-ng or Citrix Hypervisor >=8.1')
      }

      // augment the error with as much relevant info as possible
      const [poolMaster, sr] = await Promise.all([
        safeGetRecord(this, 'host', this.pool.master),
@@ -477,21 +537,39 @@ class Vm {
    }
  }

  async snapshot($defer, vmRef, { cancelToken = CancelToken.none, name_label } = {}) {
  async snapshot(
    $defer,
    vmRef,
    { cancelToken = CancelToken.none, ignoreNobakVdis = false, name_label, unplugVusbs = false } = {}
  ) {
    const vm = await this.getRecord('VM', vmRef)
    // cannot unplug VBDs on Running, Paused and Suspended VMs
    if (vm.power_state === 'Halted' && this._ignoreNobakVdis) {
      await asyncMap(vm.VBDs, async vbdRef => {
        const vbd = await this.getRecord('VBD', vbdRef)
        if (
          vbd.type === 'Disk' &&
          Ref.isNotEmpty(vbd.VDI) &&
          (await this.getField('VDI', vbd.VDI, 'name_label')).startsWith('[NOBAK]')
        ) {
          await this.VBD_destroy(vbdRef)

    const isHalted = vm.power_state === 'Halted'

    // requires the VM to be halted because it's not possible to re-plug VUSB on a live VM
    if (unplugVusbs && isHalted) {
      // vm.VUSBs can be undefined (e.g. on XS 7.0.0)
      const vusbs = vm.VUSBs
      if (vusbs !== undefined) {
        await asyncMap(vusbs, async ref => {
          const vusb = await this.getRecord('VUSB', ref)
          await vusb.$call('destroy')
          $defer.call(this, 'call', 'VUSB.create', vusb.VM, vusb.USB_group, vusb.other_config)
        })
      }
    }

    let destroyNobakVdis = false
    if (ignoreNobakVdis) {
      if (isHalted) {
        await asyncMap(await listNobakVbds(this, vm.VBDs), async vbd => {
          await this.VBD_destroy(vbd.$ref)
          $defer.call(this, 'VBD_create', vbd)
        }
        })
      })
      } else {
        // cannot unplug VBDs on Running, Paused and Suspended VMs
        destroyNobakVdis = true
      }
    }

    if (name_label === undefined) {
@@ -580,12 +658,19 @@ class Vm {
      noop
    )

    if (destroyNobakVdis) {
      await asyncMap(await listNobakVbds(this, await this.getField('VM', ref, 'VBDs')), vbd =>
        this.VDI_destroy(vbd.VDI)
      )
    }

    return ref
  }
}
module.exports = Vm

decorateClass(Vm, {
  checkpoint: defer,
  create: defer,
  export: defer,
  snapshot: defer,
300 CHANGELOG.md
@@ -1,11 +1,307 @@
|
||||
# ChangeLog
|
||||
|
||||
## **5.69.2** (2022-04-13)
|
||||
## **5.74.0** (2022-08-31)
|
||||
|
||||
<img id="latest" src="https://badgen.net/badge/channel/latest/yellow" alt="Channel: latest" />
|
||||
|
||||
### Enhancements
|
||||
|
||||
> Users must be able to say: “Nice enhancement, I'm eager to test it”
|
||||
|
||||
- [Home/Storage] Show which SRs are used for HA state files [#6339](https://github.com/vatesfr/xen-orchestra/issues/6339) (PR [#6384](https://github.com/vatesfr/xen-orchestra/pull/6384))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
> Users must be able to say: “I had this issue, happy to know it's fixed”
|
||||
|
||||
- [Backup/Restore] Fix backup list not loading on page load (PR [#6364](https://github.com/vatesfr/xen-orchestra/pull/6364))
|
||||
- [Host] Fix `should not contains property ["ignoreBackup"]` on some host operations (PR [#6362](https://github.com/vatesfr/xen-orchestra/pull/6362))
|
||||
|
||||
### Packages to release
|
||||
|
||||
- @xen-orchestra/fs 3.0.0
|
||||
- vhd-lib 4.0.0
|
||||
- @xen-orchestra/backups 0.27.4
|
||||
- @xen-orchestra/backups-cli 0.7.7
|
||||
- @xen-orchestra/xapi 1.4.2
|
||||
- xen-api 1.2.2
|
||||
- @xen-orchestra/proxy 0.26.0
|
||||
- vhd-cli 0.9.1
|
||||
- xo-vmdk-to-vhd 2.4.3
|
||||
- xo-server 5.101.0
|
||||
- xo-web 5.102.0
|
||||
|
||||
## **5.73.1** (2022-08-04)
|
||||
|
||||
<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />
|
||||
|
||||
### Bug fixes
|
||||
|
||||
- [Backup] Fix `incorrect backup size in metadata` on each merged VHD (PR [#6331](https://github.com/vatesfr/xen-orchestra/pull/6331))
|
||||
- [Backup] Fix `assertionError [ERR_ASSERTION]: Expected values to be strictly equal` when resuming a merge (PR [#6349](https://github.com/vatesfr/xen-orchestra/pull/6349))
|
||||
|
||||
### Released packages
|
||||
|
||||
- @xen-orchestra/backups 0.27.3
|
||||
- @xen-orchestra/fs 2.1.0
|
||||
- @xen-orchestra/mixins 0.7.1
|
||||
- @xen-orchestra/proxy 0.25.1
|
||||
- vhd-cli 0.9.0
|
||||
- vhd-lib 3.3.5
|
||||
- xo-server 5.100.1
|
||||
- xo-server-auth-saml 0.10.0
|
||||
- xo-web 5.101.1
|
||||
|
||||
## **5.73.0** (2022-07-29)
|
||||
|
||||
### Highlights
|
||||
|
||||
- [REST API] VDI import now also supports the raw format
|
||||
- HTTPS server can acquire SSL certificate from Let's Encrypt (PR [#6320](https://github.com/vatesfr/xen-orchestra/pull/6320))
|
||||
|
||||
### Enhancements
|
||||
|
||||
- Embedded HTTP/HTTPS proxy is now enabled by default
|
||||
- [VM] Display a confirmation modal when stopping/restarting a protected VM (PR [#6295](https://github.com/vatesfr/xen-orchestra/pull/6295))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
- [Home/VM] Show error when deleting VMs failed (PR [#6323](https://github.com/vatesfr/xen-orchestra/pull/6323))
|
||||
- [REST API] Fix broken VDI after VHD import [#6327](https://github.com/vatesfr/xen-orchestra/issues/6327) (PR [#6326](https://github.com/vatesfr/xen-orchestra/pull/6326))
|
||||
- [Netbox] Fix `ipaddr: the address has neither IPv6 nor IPv4 format` error (PR [#6328](https://github.com/vatesfr/xen-orchestra/pull/6328))
|
||||
|
||||
### Released packages
|
||||
|
||||
- @vates/async-each 1.0.0
|
||||
- @xen-orchestra/fs 2.0.0
|
||||
- @xen-orchestra/backups 0.27.2
|
||||
- @xen-orchestra/backups-cli 0.7.6
|
||||
- @xen-orchestra/mixins 0.7.0
|
||||
- @xen-orchestra/xapi 1.4.1
|
||||
- @xen-orchestra/proxy 0.25.0
|
||||
- vhd-cli 0.8.1
|
||||
- vhd-lib 3.3.4
|
||||
- xo-cli 0.14.1
|
||||
- xo-server 5.100.0
|
||||
- xo-web 5.101.0
|
||||
|
||||
## **5.72.1** (2022-07-11)
|
||||
|
||||
### Enhancements
|
||||
|
||||
- [SR] When SR is in maintenance, add "Maintenance mode" badge next to its name (PR [#6313](https://github.com/vatesfr/xen-orchestra/pull/6313))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
- [Tasks] Fix tasks not displayed when running CR backup job [Forum#6038](https://xcp-ng.org/forum/topic/6038/not-seeing-tasks-any-more-as-admin) (PR [#6315](https://github.com/vatesfr/xen-orchestra/pull/6315))
|
||||
- [Backup] Fix failing merge multiple VHDs at once (PR [#6317](https://github.com/vatesfr/xen-orchestra/pull/6317))
|
||||
- [VM/Console] Fix _Connect with SSH/RDP_ when address is IPv6
|
||||
- [Audit] Ignore side-effects free API methods `xoa.check`, `xoa.clearCheckCache` and `xoa.getHVSupportedVersions`
|
||||
|
||||
### Released packages
|
||||
|
||||
- @xen-orchestra/backups 0.27.0
|
||||
- @xen-orchestra/backups-cli 0.7.5
|
||||
- @xen-orchestra/proxy 0.23.5
|
||||
- vhd-lib 3.3.2
|
||||
- xo-server 5.98.1
|
||||
- xo-server-audit 0.10.0
|
||||
- xo-web 5.100.0
|
||||
|
||||
## **5.72.0** (2022-06-30)
|
||||
|
||||
### Highlights
|
||||
|
||||
- [Backup] Merge delta backups without copying data when using VHD directories on NFS/SMB/local remote(https://github.com/vatesfr/xen-orchestra/pull/6271))
|
||||
- [Proxies] Ability to copy the proxy access URL (PR [#6287](https://github.com/vatesfr/xen-orchestra/pull/6287))
|
||||
- [SR/Advanced] Ability to enable/disable _Maintenance Mode_ [#6215](https://github.com/vatesfr/xen-orchestra/issues/6215) (PRs [#6308](https://github.com/vatesfr/xen-orchestra/pull/6308), [#6297](https://github.com/vatesfr/xen-orchestra/pull/6297))
|
||||
- [User] User tokens management through XO interface (PR [#6276](https://github.com/vatesfr/xen-orchestra/pull/6276))
|
||||
- [Tasks, VM/General] Self Service users: show tasks related to their pools, hosts, SRs, networks and VMs (PR [#6217](https://github.com/vatesfr/xen-orchestra/pull/6217))
|
||||
|
||||
### Enhancements
|
||||
|
||||
> Users must be able to say: “Nice enhancement, I'm eager to test it”
|
||||
|
||||
- [Backup/Restore] Clearer error message when importing a VM backup requires XCP-n/CH >= 8.1 (PR [#6304](https://github.com/vatesfr/xen-orchestra/pull/6304))
|
||||
- [Backup] Users can use VHD directory on any remote type (PR [#6273](https://github.com/vatesfr/xen-orchestra/pull/6273))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
> Users must be able to say: “I had this issue, happy to know it's fixed”
|
||||
|
||||
- [VDI Import] Fix `this._getOrWaitObject is not a function`
|
||||
- [VM] Attempting to delete a protected VM should display a modal with the error and the ability to bypass it (PR [#6290](https://github.com/vatesfr/xen-orchestra/pull/6290))
|
||||
- [OVA Import] Fix import stuck after first disk
|
||||
- [File restore] Ignore symbolic links
|
||||
|
||||
### Released packages
|
||||
|
||||
- @vates/event-listeners-manager 1.0.1
|
||||
- @vates/read-chunk 1.0.0
|
||||
- @xen-orchestra/backups 0.26.0
|
||||
- @xen-orchestra/backups-cli 0.7.4
|
||||
- xo-remote-parser 0.9.1
|
||||
- @xen-orchestra/fs 1.1.0
|
||||
- @xen-orchestra/openflow 0.1.2
|
||||
- @xen-orchestra/xapi 1.4.0
|
||||
- @xen-orchestra/proxy 0.23.4
|
||||
- @xen-orchestra/proxy-cli 0.3.1
|
||||
- vhd-lib 3.3.1
|
||||
- vhd-cli 0.8.0
|
||||
- xo-vmdk-to-vhd 2.4.2
|
||||
- xo-server 5.98.0
|
||||
- xo-web 5.99.0
|
||||
|
||||
## **5.71.1 (2022-06-13)**
|
||||
|
||||
### Enhancements
|
||||
|
||||
- Show raw errors to administrators instead of _unknown error from the peer_ (PR [#6260](https://github.com/vatesfr/xen-orchestra/pull/6260))
|
||||
|
||||
### Bug fixes
|
||||
|
||||
- [New SR] Fix `method.startsWith is not a function` when creating an _ext_ SR
|
||||
- Import VDI content now works when there is a HTTP proxy between XO and the host (PR [#6261](https://github.com/vatesfr/xen-orchestra/pull/6261))
|
||||
- [Backup] Fix `undefined is not iterable (cannot read property Symbol(Symbol.iterator))` on XS 7.0.0
|
||||
- [Backup] Ensure a warning is shown if a target preparation step fails (PR [#6266](https://github.com/vatesfr/xen-orchestra/pull/6266))
|
||||
- [OVA Export] Avoid creating a zombie task (PR [#6267](https://github.com/vatesfr/xen-orchestra/pull/6267))
|
||||
- [OVA Export] Increase speed by lowering compression to acceptable level (PR [#6267](https://github.com/vatesfr/xen-orchestra/pull/6267))
|
||||
- [OVA Export] Fix broken OVAs due to special characters in VM name (PR [#6267](https://github.com/vatesfr/xen-orchestra/pull/6267))
|
||||
|
||||
### Released packages
|
||||
|
||||
- @xen-orchestra/backups 0.25.0
|
||||
- @xen-orchestra/backups-cli 0.7.3
|
||||
- xen-api 1.2.1
|
||||
- @xen-orchestra/xapi 1.2.0
|
||||
- @xen-orchestra/proxy 0.23.2
|
||||
- @xen-orchestra/proxy-cli 0.3.0
|
||||
- xo-cli 0.14.0
|
||||
- xo-vmdk-to-vhd 2.4.1
|
||||
- xo-server 5.96.0
|
||||
- xo-web 5.97.2
|
||||
|
||||
## **5.71.0 (2022-05-31)**
|
||||
|
||||
### Highlights
|
||||
|
||||
- [Backup] _Restore Health Check_ can now be configured to run automatically during a backup schedule (PRs [#6227](https://github.com/vatesfr/xen-orchestra/pull/6227), [#6228](https://github.com/vatesfr/xen-orchestra/pull/6228), [#6238](https://github.com/vatesfr/xen-orchestra/pull/6238) & [#6242](https://github.com/vatesfr/xen-orchestra/pull/6242))
- [Backup] VMs with USB pass-through devices are now supported! The advanced _Offline Snapshot Mode_ setting must be enabled. For Full Backup or Disaster Recovery jobs, Rolling Snapshot needs to be enabled as well. (PR [#6239](https://github.com/vatesfr/xen-orchestra/pull/6239))
- [Backup] Implement a file cache for listing the backups of a VM (PR [#6220](https://github.com/vatesfr/xen-orchestra/pull/6220))
- [RPU/Host] If some backup jobs are running on the pool, ask for confirmation before starting an RPU, shutting down/rebooting a host or restarting a host's toolstack (PR [#6232](https://github.com/vatesfr/xen-orchestra/pull/6232))
- [XO Web] Add ability to configure a default filter for Storage [#6236](https://github.com/vatesfr/xen-orchestra/issues/6236) (PR [#6237](https://github.com/vatesfr/xen-orchestra/pull/6237))
- [REST API] Support VDI creation via VHD import (see the sketch below)
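
A minimal sketch of what the VHD import can look like from a client's point of view (Node 18+, using the built-in `fetch`). The `/rest/v0/srs/<sr-uuid>/vdis` path, the `authenticationToken` cookie, the `name_label` query parameter and all identifiers here are assumptions for illustration; refer to the REST API documentation of your XO version for the authoritative endpoint.

```ts
// Sketch: create a VDI on an SR by uploading a VHD through the REST API.
// Endpoint path, cookie name and identifiers are assumptions, not confirmed
// specifics of this release.
import { readFile } from 'node:fs/promises'

const xoUrl = 'https://xo.example.org' // hypothetical XO address
const srUuid = '355ee47d-ff4c-4924-17bf-fd86ae629676' // hypothetical SR UUID
const token = process.env.XO_TOKEN ?? '' // an XO authentication token

const vhd = await readFile('disk.vhd')
const response = await fetch(`${xoUrl}/rest/v0/srs/${srUuid}/vdis?name_label=imported-disk`, {
  method: 'POST',
  headers: { cookie: `authenticationToken=${token}` },
  body: vhd,
})
console.log('new VDI:', await response.text())
```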

### Enhancements

- [Backup] Merge multiple VHDs at once, which speeds up the merging phase after reducing the retention of a backup job (PR [#6184](https://github.com/vatesfr/xen-orchestra/pull/6184))
- [Backup] Add setting `backups.metadata.defaultSettings.unconditionalSnapshot` in `xo-server`'s configuration file to force a snapshot even when the backup does not require one; this is useful to avoid keeping a halted VM locked during the backup (see the sketch after this list) (PR [#6221](https://github.com/vatesfr/xen-orchestra/pull/6221))
- [VM migration] Ensure the VM can be migrated before performing the migration to avoid issues [#5301](https://github.com/vatesfr/xen-orchestra/issues/5301) (PR [#6245](https://github.com/vatesfr/xen-orchestra/pull/6245))
- [Backup] Show any detected errors on existing backups instead of fixing them silently (PR [#6225](https://github.com/vatesfr/xen-orchestra/pull/6225))
- Created SRs will now have auto-scan enabled, similarly to what XenCenter does (PR [#6246](https://github.com/vatesfr/xen-orchestra/pull/6246))
- [RPU] Disable scheduled backup jobs during RPU (PR [#6244](https://github.com/vatesfr/xen-orchestra/pull/6244))
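
As a rough sketch of where the `unconditionalSnapshot` setting lives: only the key path below comes from the entry above; the file location and surrounding layout are deployment-specific assumptions.

```toml
# Sketch of an xo-server configuration excerpt; only the key path below comes
# from the changelog entry, the file location is deployment-specific.
[backups.metadata.defaultSettings]
# Always snapshot the VM first, even when the backup mode would not require
# it, which avoids keeping a halted VM locked for the whole backup.
unconditionalSnapshot = true
```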

### Bug fixes

- [S3] Fix S3 remote with empty directory not showing anything to restore (PR [#6218](https://github.com/vatesfr/xen-orchestra/pull/6218))
- [S3] Fix remote form not saving the `https` and `allow unauthorized` settings during remote creation (PR [#6219](https://github.com/vatesfr/xen-orchestra/pull/6219))
- [VM/advanced] Fix various errors when adding ACLs [#6213](https://github.com/vatesfr/xen-orchestra/issues/6213) (PR [#6230](https://github.com/vatesfr/xen-orchestra/pull/6230))
- [Home/Self] Don't make a VM's resource set name clickable for non-admin users, as they aren't allowed to view the Self Service page (PR [#6252](https://github.com/vatesfr/xen-orchestra/pull/6252))
- [load-balancer] Fix density mode failing to shut down hosts (PR [#6253](https://github.com/vatesfr/xen-orchestra/pull/6253))
- [Health] Make "Too many snapshots" table sortable by number of snapshots (PR [#6255](https://github.com/vatesfr/xen-orchestra/pull/6255))
- [Remote] Show complete errors instead of only a potentially missing message (PR [#6216](https://github.com/vatesfr/xen-orchestra/pull/6216))

### Released packages

- @xen-orchestra/self-signed 0.1.3
- vhd-lib 3.2.0
- @xen-orchestra/fs 1.0.3
- vhd-cli 0.7.2
- xo-vmdk-to-vhd 2.4.0
- @xen-orchestra/upload-ova 0.1.5
- @xen-orchestra/xapi 1.1.0
- @xen-orchestra/backups 0.24.0
- @xen-orchestra/backups-cli 0.7.2
- @xen-orchestra/emit-async 1.0.0
- @xen-orchestra/mixins 0.5.0
- @xen-orchestra/proxy 0.23.1
- xo-server 5.95.0
- xo-web 5.97.1
- xo-server-backup-reports 0.17.0

## 5.70.2 (2022-05-16)

### Bug fixes

- [Pool/Patches] Fix failure to install patches on Citrix Hypervisor (PR [#6231](https://github.com/vatesfr/xen-orchestra/pull/6231))

### Released packages

- @xen-orchestra/xapi 1.0.0
- @xen-orchestra/backups 0.23.0
- @xen-orchestra/mixins 0.4.0
- @xen-orchestra/proxy 0.22.1
- xo-server 5.93.1

## 5.70.1 (2022-05-04)

### Enhancements

- [Backup] Support `[NOBAK]` VDI prefix for all backup modes [#2560](https://github.com/vatesfr/xen-orchestra/issues/2560) (PR [#6207](https://github.com/vatesfr/xen-orchestra/pull/6207)) (see the sketch after this list)
- [VM/Host Console] Fix fallback for older versions of XCP-ng/XS (PR [#6203](https://github.com/vatesfr/xen-orchestra/pull/6203))
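
As a sketch of how a disk can be excluded via the `[NOBAK]` prefix: renaming the VDI in the XO web UI is enough, but the same rename can be scripted with the project's `xen-api` client. The host address, credentials and VDI reference below are placeholders.

```ts
// Sketch: exclude a VDI from backups by prefixing its name label with [NOBAK].
// Host address, credentials and the VDI reference are placeholders.
import { createClient } from 'xen-api'

const xapi = createClient({
  url: 'https://xcp-host.example.org', // hypothetical pool master
  allowUnauthorized: true, // only if the host uses a self-signed certificate
  auth: { user: 'root', password: 'secret' },
})
await xapi.connect()

const vdiRef = 'OpaqueRef:00000000-0000-0000-0000-000000000000' // placeholder
await xapi.call('VDI.set_name_label', vdiRef, '[NOBAK] scratch-disk')
await xapi.disconnect()
```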

### Bug fixes

- [Backup Health Check] Fix guest tools detection (PR [#6214](https://github.com/vatesfr/xen-orchestra/pull/6214))

### Released packages

- @xen-orchestra/mixins 0.3.1
- @xen-orchestra/xapi 0.11.0
- @xen-orchestra/backups 0.22.0
- @xen-orchestra/proxy 0.22.0
- xo-server 5.93.0

## 5.70.0 (2022-04-29)

### Highlights

- [VM export] Add support for exporting to the `ova` format (PR [#6006](https://github.com/vatesfr/xen-orchestra/pull/6006))
- [Backup] Add _Restore Health Check_: ensure a backup is viable by doing an automatic test restore (requires guest tools in the VM) (PR [#6148](https://github.com/vatesfr/xen-orchestra/pull/6148))
- [Import] Add support for importing `iso` disks (PR [#6180](https://github.com/vatesfr/xen-orchestra/pull/6180))
- New HTTP/HTTPS proxy implemented in xo-proxy and xo-server, [see the documentation](https://github.com/vatesfr/xen-orchestra/blob/master/@xen-orchestra/mixins/docs/HttpProxy.md) (PR [#6201](https://github.com/vatesfr/xen-orchestra/pull/6201)); a configuration sketch follows this list
- [Backup job] Cache DNS queries (PR [#6196](https://github.com/vatesfr/xen-orchestra/pull/6196))
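
The linked HttpProxy.md is the authoritative reference; as a sketch, pointing xo-server or xo-proxy at an outgoing proxy is a single key in their TOML configuration. The proxy URL below is illustrative, not a confirmed default.

```toml
# Sketch of an xo-server / xo-proxy configuration excerpt: route outgoing
# HTTP/HTTPS requests through a proxy. See HttpProxy.md for the exact syntax;
# the proxy URL below is illustrative.
httpProxy = 'http://jsmith:qwerty@proxy.lan:3128'
```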

### Enhancements

- [VM migrate] Allow choosing a private network for VIF networks (PR [#6200](https://github.com/vatesfr/xen-orchestra/pull/6200))
- [Proxy] Disable "Deploy proxy" button for source users (PR [#6199](https://github.com/vatesfr/xen-orchestra/pull/6199))

### Bug fixes

- [VM/Host Console] Fix support of older versions of XCP-ng/XS; please note that HTTP proxies are not supported in that case (PR [#6191](https://github.com/vatesfr/xen-orchestra/pull/6191))
- Fix HTTP proxy support to connect to pools (introduced in XO 5.69.0) (PR [#6204](https://github.com/vatesfr/xen-orchestra/pull/6204))
- [Backup] Fix failure when sending a backup (Full/Delta/Metadata) to S3 with Object Lock enabled (PR [#6190](https://github.com/vatesfr/xen-orchestra/pull/6190))

### Released packages

- @vates/cached-dns.lookup 1.0.0
- @vates/event-listeners-manager 1.0.0
- xen-api 1.2.0
- @xen-orchestra/mixins 0.3.0
- xo-vmdk-to-vhd 2.3.0
- @xen-orchestra/fs 1.0.1
- @xen-orchestra/backups 0.21.1
- @xen-orchestra/proxy 0.21.0
- xo-server 5.92.0
- xo-web 5.96.0
- vhd-cli 0.7.1
- @xen-orchestra/backups-cli 0.7.1

## **5.69.2** (2022-04-13)

### Enhancements

- [Rolling Pool Update] New algorithm for XCP-ng updates (PR [#6188](https://github.com/vatesfr/xen-orchestra/pull/6188))

### Bug fixes

…

## **5.68.0** (2022-02-28)

<img id="stable" src="https://badgen.net/badge/channel/stable/green" alt="Channel: stable" />

### Highlights

- [New SR] Add confirmation message before creating local SR (PR [#6121](https://github.com/vatesfr/xen-orchestra/pull/6121))