See https://xcp-ng.org/forum/topic/2752/jobs-not-running-error-no-value-for-user_ip/5
Context: The audit plugin needs to log the user IP to identify which user executed an action. This IP is taken from the client request; unfortunately, there is no client request for scheduled jobs and this case is not handled by the audit yet. It will be done in #4844.
This is not very clean but I don't want to add a `level` dependency to `xo-server` which might have a different version from the one provided by `level-party`.
Merging multiple VHDs is currently a slow process which could be optimized by doing a single merge of a synthetic delta.
Until this is implemented, the number of garbage-collected deltas should be limited to avoid taking too much time (and possibly interrupting jobs) in case of a large retention change.
The new version breaks some features like VDI import.
This will be fixed directly in `http-request-plus`, but in the meantime 0.8 will be used.
Fixes #4640
Fixes #4124
Related to xoa-support#1921
Only configurable globally (not per backup job) for the moment with the `xapiOptions.maxUncoalescedVdis` setting.
See #4539
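For reference, a hedged sketch of how this global setting could be declared in xo-server's TOML configuration file (the section layout and example value are assumptions; only the dotted setting name comes from the text above):

```toml
# hypothetical excerpt of xo-server's config file (e.g. config.toml)
[xapiOptions]
maxUncoalescedVdis = 1
```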
This PR makes it possible to clear the backup list cache using the API, and it makes sure that the cache is cleared after a backup or its deletion on a remote.
See #4615
Introduced by fd06374365
- Everywhere we checked whether the Cloud plugin was installed, we now check whether the XOA plugin is installed
- Update the warning messages mentioning the Cloud plugin
See a6d182e
The issue:
xo-server-test executes multiple backups sequentially and validates their execution using logs. The issue is that the logs are not up to date in the second fetch because the method returns the previous cached response.
The solution:
Add a `_forceRefresh` param to `backupNg.getLogs` to be able to clear the method cache. It's just a work-around which will be removed once the consolidated logs are stored in the DB.
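A minimal sketch of what such a call could look like from the test suite, assuming an xo-lib style client named `xo` (setup omitted):

```js
// hypothetical usage: bypass the cached response to get up-to-date logs
const logs = await xo.call('backupNg.getLogs', { _forceRefresh: true })
```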
Similar to #4509
Fixes xoa-support#1676
For now, the delay is set to 10s, which is the duration used by xo-web's subscription; this is enough to make it independent of the number of clients.
In the future, this could be configurable, but we may simply do the consolidation only once during the backup execution.
Fixes xoa-support#1676
For now, the delay is set to 10s, which is the duration used by xo-web's subscription; this is enough to make it independent of the number of clients.
In the future, this should probably be longer and configurable, but this requires invalidating the cache in case of changes (creation/deletion of backups) and explicit requests from the users, which will be implemented in another PR.
Before:
```
✖ invalid parameters
JsonRpcError: invalid parameters
at Peer._callee$ (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/dist/index.js:137:44)
at tryCatch (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/regenerator-runtime/runtime.js:62:40)
at Generator.invoke [as _invoke] (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/regenerator-runtime/runtime.js:288:22)
at Generator.prototype.(anonymous function) [as next] (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/regenerator-runtime/runtime.js:114:21)
at asyncGeneratorStep (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
at _next (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/@babel/runtime/helpers/asyncToGenerator.js:25:9)
at /usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/@babel/runtime/helpers/asyncToGenerator.js:32:7
at new Promise (<anonymous>)
at Peer.<anonymous> (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/node_modules/@babel/runtime/helpers/asyncToGenerator.js:21:12)
at Peer.exec (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/dist/index.js:180:20)
at Peer.write (/usr/local/lib/node_modules/xen-api/node_modules/json-rpc-peer/dist/index.js:284:10)
at Xo.<anonymous> (/usr/local/lib/node_modules/xen-api/node_modules/jsonrpc-websocket-client/dist/index.js:77:12)
at emitOne (events.js:116:13)
at Xo.emit (events.js:211:7)
at WebSocket.<anonymous> (/usr/local/lib/node_modules/xen-api/node_modules/jsonrpc-websocket-client/dist/websocket-client.js:206:18)
at WebSocket.onMessage (/usr/local/lib/node_modules/xen-api/node_modules/ws/lib/event-target.js:120:16)
```
After:
```
✖ invalid parameters
property @: should not contains property ["foo"]
property @.id: is missing and not optional
```
So far, in order to know whether we needed to use the "new" patching system, we were checking that the pool master's `software_version.platform_version` satisfied `^2.1.1` (instead of `>=2.1.1` like [XenCenter does](f3a64fc54b/XenModel/Utils/Helpers.cs (L420))). This broke the installation and the display of missing patches for CH 8.0, whose `software_version.platform_version` is `3.0.0`.
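To illustrate the difference between the two ranges, a quick check with the `semver` package (the version numbers are the ones mentioned above):

```js
const semver = require('semver')

// CH 8.0 reports platform_version 3.0.0
semver.satisfies('3.0.0', '^2.1.1') // false: ^2.1.1 only matches 2.x versions
semver.satisfies('3.0.0', '>=2.1.1') // true: any version from 2.1.1 onwards
```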
- Always use .xo-status-* classes
- Show host busy state in the host view as well (previously only in home/hosts view)
- Differentiate "disabled" state from "busy" state
- Host view state icon tooltip
- Homogenize state display between hosts and VMs
Fixes #3928
- VM operators were only able to revert the snapshots they created. They can now revert any snapshot
- VM operators can still only delete the snapshots they created
- VM admins still have full control over the snapshots
Introduced by 59e68682bd
`@routes` must always be on top because it adds a `routes` property to the component, which will be used by the parent.
Fixes #4226
`.callAsync` is more robust to disconnections than `.call` and should be used for all non-instantaneous calls.
Unfortunately, the result can be embedded into XML: you should either not use the result or add `.then(extractOpaqueRef)` if you are expecting an opaque ref.
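A hedged sketch of the intended usage (the XAPI method and variable names are only illustrative):

```js
// illustrative: callAsync waits for the underlying task, whose result may be XML-wrapped
const snapshotRef = await xapi
  .callAsync('VM.snapshot', vmRef, 'before-upgrade')
  .then(extractOpaqueRef)
```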
XenStore can be used to pass xo-server the list of XenServers to connect to:
`vm-data/xen-servers` entry:
```json
[
  {
    "allowUnauthorized": true,
    "host": "xs1.company.tld",
    "label": "My XenServer",
    "password": "super%secret+password",
    "readOnly": false,
    "username": "root"
  }
]
```
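One possible way to provision this entry from dom0, assuming the VM's `xenstore-data` parameter is used to populate `vm-data` at boot (the UUID is a placeholder and the JSON is the example above, single-quoted for the shell):

```
xe vm-param-set uuid=<xo-server VM UUID> \
  xenstore-data:vm-data/xen-servers='[{"allowUnauthorized": true, "host": "xs1.company.tld", "label": "My XenServer", "password": "super%secret+password", "readOnly": false, "username": "root"}]'
```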
XenStore can be used to pass the credentials of the default admin account to xo-server:
`vm-data/admin-account` entry:
```json
{ "email": "admin@admin.net", "password": "admin" }
```
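Similarly, a hedged example of seeding this entry the same way (again, the UUID is a placeholder):

```
xe vm-param-set uuid=<xo-server VM UUID> \
  xenstore-data:vm-data/admin-account='{"email": "admin@admin.net", "password": "admin"}'
```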
See #3911
Show a warning message if the VM has a VDI sitting on a local SR and the user selects a VDI sitting on a local SR on a different host, since the VM won't be able to start.
See #3911
- New disk: warning if the selected SR is local to a different host than another VDI's SR
- Migrate VDI (row action only): warning if the selected SR is local to a different host than another VDI's SR
- fix incomplete log if one of the backups fails
- fix the case of a backup with both xo mode and pool mode: if one failed, the other was not executed
- log xo/pool modes per remote
- log a warning task if a pool or a remote is missing
- log a warning task if a backup is not properly deleted
See #2565
See #3655
Fixes #2188
Fixes #3777
Fixes #3783
Fixes #3934
Fixes support#1228
Fixes support#1338
Fixes support#1362
- mergeInto: fix auto-patching on XS < 7.2
- mergeInto: homogenize both the host and pool's patches
- correctly install specific patches
- XCP-ng: fix "xcp-ng-updater not installed" bug
See #3911
Show a warning message when at least 2 VDIs attached to the VM are on local SRs on 2 different hosts, because the VM won't be able to start (`NO_HOSTS_AVAILABLE`)
When our tasks cache is desynchronized, we re-fetch all tasks.
We must wait for the tasks to be fetched before fetching the next events, otherwise we risk a race condition.
The previous implementation relied on events but had issues: it did not correctly detect and remove broken quiesced snapshots.
The new implementation is less magical and does not rely on events at all.
- fetch each type independently: no more huge requests
- only fall back to the legacy implementation if `event.inject` is not available
- can watch only certain types
- `Xapi#objectsFetched` is a promise which resolves when objects have been fetched
The XenApi event system returns lowercased types, which makes things difficult: for instance, `Record#set_name_label` methods did not work for some VMs because the lib called `vm.set_name_label` instead of `VM.set_name_label`.
To work around this problem, a map from lowercased types to their proper casing is constructed at connection time.
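A minimal sketch of such a map, assuming the properly cased type names are known once the connection is established (names are illustrative):

```js
// illustrative: map the lowercased type reported by events back to its real casing
const createTypesMap = types => {
  const map = { __proto__: null }
  for (const type of types) {
    map[type.toLowerCase()] = type
  }
  return map
}

const types = createTypesMap(['VM', 'VBD', 'SR', 'pool'])
types.vm // 'VM' → the lib can now call VM.set_name_label instead of vm.set_name_label
```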
Fixes #3962
- Parses the OVF XML without taking into account any namespace.
- Empty the import screen when we drop a new file on the drop zone to avoid displaying stale information during long parsing.
> This file contains all changes that have not been released yet.
>
> Keep in mind the changelog is addressed to **users** and should be
> understandable by them.
### Enhancements
- [Home] Set description on bulk snapshot [#3925](https://github.com/vatesfr/xen-orchestra/issues/3925) (PR [#3933](https://github.com/vatesfr/xen-orchestra/pull/3933))
- Work-around the XenServer issue when `VBD#VDI` is an empty string instead of an opaque reference (PR [#3950](https://github.com/vatesfr/xen-orchestra/pull/3950))
- [VDI migration] Retry when XenServer fails with `TOO_MANY_STORAGE_MIGRATES` (PR [#3940](https://github.com/vatesfr/xen-orchestra/pull/3940))
- [VM]
- [General] The creation date of the VM is now visible [#3932](https://github.com/vatesfr/xen-orchestra/issues/3932) (PR [#3947](https://github.com/vatesfr/xen-orchestra/pull/3947))
- [Disks] Display device name [#3902](https://github.com/vatesfr/xen-orchestra/issues/3902) (PR [#3946](https://github.com/vatesfr/xen-orchestra/pull/3946))
- [VM Snapshotting]
- Detect and destroy broken quiesced snapshot left by XenServer [#3936](https://github.com/vatesfr/xen-orchestra/issues/3936) (PR [#3937](https://github.com/vatesfr/xen-orchestra/pull/3937))
- Retry twice after a 1 minute delay if quiesce failed [#3938](https://github.com/vatesfr/xen-orchestra/issues/3938) (PR [#3952](https://github.com/vatesfr/xen-orchestra/pull/3952))
> Users must be able to say: “Nice enhancement, I'm eager to test it”
- [Backup] **BETA** Ability to backup running VMs with their memory [#645](https://github.com/vatesfr/xen-orchestra/issues/645) (PR [#4252](https://github.com/vatesfr/xen-orchestra/pull/4252))
### Bug fixes
- [Import] Fix import of big OVA files
- [Host] Show the host's memory usage instead of the sum of the VMs' memory usage (PR [#3924](https://github.com/vatesfr/xen-orchestra/pull/3924))
- [SAML] Make `AssertionConsumerServiceURL` match the callback URL
- [ ] PR references the relevant issue (e.g. `Fixes #007` or `See xoa-support#42`)
- [ ] if UI changes, a screenshot has been added to the PR
- [ ] documentation updated
- `CHANGELOG.unreleased.md`:
- [ ] enhancement/bug fix entry added
- [ ] list of packages to release updated (`${name} v${new version}`)
- **I have tested added/updated features** (and impacted code)
- [ ] unit tests (e.g. [`cron/parse.spec.js`](https://github.com/vatesfr/xen-orchestra/blob/b24400b21de1ebafa1099c56bac1de5c988d9202/%40xen-orchestra/cron/src/parse.spec.js))
- [ ] if `xo-server` API changes, the corresponding test has been added to/updated on [`xo-server-test`](https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-server-test)
- [ ] at least manual testing
### Process
1. create a PR as soon as possible
1. mark it as `WiP:` (Work in Progress) if not ready to be merged
1. when you want a review, add a reviewer (and only one)
1. if necessary, update your PR, and re-add a reviewer
From [_the Four Agreements_](https://en.wikipedia.org/wiki/Don_Miguel_Ruiz#The_Four_Agreements):
@@ -9,23 +8,23 @@ XO is a web interface to visualize and administer your XenServer (or XAPI enable
It aims to be easy to use on any device supporting modern web technologies (HTML 5, CSS 3, JavaScript), such as your desktop computer or your smartphone.
Xen Orchestra (XO) is software built with a server and clients, such as the web client `xo-web`, but also a CLI capable client, called `xo-cli`.
@@ -7,7 +6,7 @@ Xen Orchestra (XO) is software built with a server and clients, such as the web
## XOA
_Xen Orchestra Virtual Appliance_ (XOA) is a virtual machine with Xen Orchestra already installed, thus working out-of-the-box.
This is the easiest way to try Xen Orchestra quickly.
@@ -20,9 +19,9 @@ Your XOA is connected to all your hosts, or the pool master only if you are usin
Xen Orchestra itself is built as a modular solution. Each part has its role:
- The core is "[xo-server](https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-server/)" - a daemon dealing directly with XenServer or XAPI capable hosts. This is where users are stored, and it's the center point for talking to your whole Xen infrastructure.
- The web interface is "[xo-web](https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-web)" - it runs directly from your browser. The connection with `xo-server` is done via _WebSockets_.
- "[xo-cli](https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-cli)" is a module allowing you to send commands directly from the command line.
We have other modules as well (like the LDAP plugin for example). It allows us to use this modular architecture to add further parts as we need them. It's completely flexible, allowing us to adapt Xen Orchestra to every existing workflow.
Xen Orchestra supports various types of user authentication, internal or even external thanks to the usage of the [Passport library](http://passportjs.org/).
There are 2 types of XO users:
- admins, with all rights on all connected resources
- users, with no rights by default
All users will land on the "flat" view, which displays no hierarchy, only all their visible objects (or no object if they are not configured).
ACLs will thus apply only to "users".
> Any account created by an external authentication process (LDAP, SSO...) will be a **user** without any permission.
@@ -6,15 +6,15 @@ At the end of a backup job, you can configure Xen Orchestra to send backup repor
### Step-by-step
1. On the "settings/plugins" view you have to activate and configure the "Backup-reports" plugin.
2. Still in the plugins view, you also have to configure the "transport-email" plugin.
3. Once it's done, you can now create your backup job. In the "report" selection you can choose the situation in which you want to receive an email (always, never or failure).

> Note: You can also modify existing backup jobs and change the behaviour of the report system.
## XMPP notifications
@@ -67,9 +67,9 @@ Like all other xo-server plugins, it can be configured directly via the web inte
@@ -12,33 +12,37 @@ Another good way to check if there is activity is the XOA VM stats view (on the
### VDI chain protection
Backup jobs regularly delete snapshots. When a snapshot is deleted, either manually or via a backup job, it triggers the need for XenServer to coalesce the VDI chain - to merge the remaining VDIs and base copies in the chain. This means generally we cannot take too many new snapshots on said VM until XenServer has finished running a coalesce job on the VDI chain.
This mechanism and scheduling is handled by XenServer itself, not Xen Orchestra. But we can check your existing VDI chain and avoid creating more snapshots than your storage can merge. If we don't, this will lead to catastrophic consequences. Xen Orchestra is the **only** XenServer/XCP backup product that takes this into account and offers protection.
Without this detection, you could have 2 potential issues:
- `The Snapshot Chain is too Long`
- `SR_BACKEND_FAILURE_44 (insufficient space)`
The first issue is a chain that contains more than 30 elements (fixed XenServer limit), and the other one means it's full because the "coalesce" process couldn't keep up the pace and the storage filled up.
In the end, this message is a **protection mechanism preventing damage to your SR**. The backup job will fail, but XenServer itself should eventually automatically coalesce the snapshot chain, and the next time the backup job runs it should complete.
Just remember this: **a coalesce should happen every time a snapshot is removed**.
> You can read more on this on our dedicated blog post regarding [XenServer coalesce detection](https://xen-orchestra.com/blog/xenserver-coalesce-detection-in-xen-orchestra/).
### Troubleshooting a constant VDI Chain Protection message (XenServer failure to coalesce)
As previously mentioned, this message can be normal and it just means XenServer needs to perform a coalesce to merge old snapshots. However, if you repeatedly get this message and it seems XenServer is not coalescing, you can take a few steps to determine why.
First check SMlog on the XenServer host for messages relating to VDI corruption or coalesce job failure. For example, by running `cat /var/log/SMlog | grep -i exception` or `cat /var/log/SMlog | grep -i error` on the XenServer host with the affected storage.
Coalesce jobs can also fail to run if the SR does not have enough free space. Check the problematic SR and make sure it has enough free space, generally 30% or more free is recommended depending on VM size. You can check if this is the issue by searching `SMlog` with `grep -i coales /var/log/SMlog` (you may have to look at previous logs such as `SMlog.1`).
You can check if a coalesce job is currently active by running `ps axf | grep vhd` on the XenServer host and looking for a VHD process in the results (one of the resulting processes will be the grep command you just ran, ignore that one).
If you don't see any running coalesce jobs, and can't find any other reason that XenServer has not started one, you can attempt to make it start a coalesce job by rescanning the SR. This is harmless to try, but will not always result in a coalesce. Visit the problematic SR in the XOA UI, then click the "Rescan All Disks" button towards the top right: it looks like a refresh circle icon. This should begin the coalesce process - if you click the Advanced tab in the SR view, the "disks needing to be coalesced" list should become smaller and smaller.
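If you prefer the command line, the same rescan can be triggered from the host with `xe` (the UUID is a placeholder):

```
xe sr-scan uuid=<SR UUID>
```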
As a last resort, migrating the VM (more specifically, its disks) to a new storage repository will also force a coalesce and solve this issue. That means migrating a VM to another host (with its own storage) and back will force the VDI chain for that VM to be coalesced, and get rid of the `VDI Chain Protection` message.
### Parse Error
This is most likely due to running a backup job that uses Delta functionality (eg: delta backups, or continuous replication) on a version of XenServer older than 6.5. To use delta functionality you must run [XenServer 6.5 or later](https://xen-orchestra.com/docs/supported-version.html).
@@ -51,10 +55,10 @@ The Storage Repository (where your VM disks are currently stored) is full. Note
Workarounds:
- use a thin provisioned SR (local ext, NFS, XOSAN)
- wait for Citrix to release thin provisioning on LVM
- wait for Citrix to allow another mechanism besides snapshot to be able to export disks
- use less than 50% of SR space or don't backup all VMs
### Could not find the base VM
@@ -68,12 +72,12 @@ To solve it, you have to change a parameter in your VM. `xe vm-param-set has-ven
### ENOSPC: no space left on device
This message appears when you do not have enough free space on the target remote when running a backup to it.
To check your free space, enter your XOA and run `xoa check` to check free system space and `df -h` to check free space on your chosen remote storage.
### Error: no VMs match this pattern
This happens when you have a _smart backup job_ that doesn't match any VMs. For example: you created a job to back up all running VMs. If no VMs are running at the time of the backup schedule, you'll get this message. This could also happen if you lost connection with your pool master (the VMs aren't visible anymore from Xen Orchestra).
Edit your job and try to see matching VMs or check if your pool is connected to XOA.
> Don't forget to take a look at the [backup troubleshooting](backup_troubleshooting.md) section. You can also take a look at the [backup reports](backup_reports.md) section for configuring notifications.
@@ -41,7 +42,7 @@ Each backups' job execution is identified by a `runId`. You can find this `runId
All backup types rely on snapshots. But what about data consistency? By default, Xen Orchestra will try to take a **quiesced snapshot** every time a snapshot is done (and fall back to normal snapshots if it's not possible).
Snapshots of Windows VMs can be quiesced (especially MS SQL or Exchange services) after you have installed Xen Tools in your VMs. However, [there is an extra step to install the VSS provider on windows](https://xen-orchestra.com/blog/xenserver-quiesce-snapshots/). A quiesced snapshot means the operating system will be notified and the cache will be flushed to disks. This way, your backups will always be consistent.
To see if you have quiesced snapshots for a VM, just go into its snapshot tab, then the "info" icon means it is a quiesced snapshot:
@@ -53,16 +54,15 @@ The tooltip confirms this:
## Remotes
> Remotes are places where your _backup_ and _delta backup_ files will be stored.
To add a _remote_, go to the **Settings/Remotes** menu.
Supported remote types:
- Local (any folder in XOA filesystem)
- NFS
- SMB (CIFS)
> **WARNING**: the initial "/" or "\\" is automatically added.
@@ -72,15 +72,15 @@ On your NFS server, authorize XOA's IP address and permissions for subfolders. T
### SMB
We support SMB storage on _Windows Server 2012 R2_.
> WARNING: For continuous delta backup, SMB is **NOT** recommended (or only for small VMs, eg < 50GB)
Also, read the UI twice when you add an SMB store. If you have:
- `192.168.1.99` as SMB host
- `Backups` as folder
- no subfolder
You'll have to fill it like this:
@@ -111,13 +111,13 @@ All your scheduled backups are acccessible in the "Restore" view in the backup s
By default, _Backups_ are compressed (using GZIP, done on XenServer side). There is no absolute rule but in general uncompressed backups are faster (but sometimes much larger).
XenServer uses Gzip compression, which is:
- slow (single threaded)
- space efficient
- consumes less bandwidth (helpful if your NFS share is far away)
If you have compression on your NFS share (or destination filesystem like ZFS), you can disable compression in Xen Orchestra.
@@ -151,4 +151,4 @@ Replicated VMs HA are taken into account by XS/XCP-ng. To avoid the resultant tr
> The tag won't be automatically removed by XO on the replicated VMs, even if HA is re-enabled.
By default, backups are compressed (using GZIP, done on the XenServer side). There is no absolute rule about using compression or not, but there are a few guidelines.
Gzip compression is:
- slow
- space efficient
- consumes less bandwidth (if your NFS share is far away)
If you have compression on your NFS share (or destination file-system like ZFS), you can disable compression in Xen Orchestra.
@@ -45,4 +44,4 @@ This feature is close to Backups, but it creates a snapshot when planned to do s
**Warning**: snapshots are not backups. They help you roll back to a previous state, but all snapshots are on the same storage as their original disk. If you lose the original VDI (or the SR), you'll **lose all your snapshots**.
[Read more about it](https://xen-orchestra.com/blog/xen-orchestra-4-2/#schedulerollingsnapshots).
Cloud-init is a program "that handles the early initialization of a cloud instance"[^n]. In other words, you can, on a "cloud-init"-ready template VM, pass a lot of data at first boot:
- set the hostname
- add SSH keys
- automatically grow the file system
- create users
- and a lot more!
This tool is pretty standard and used everywhere. A lot of existing cloud templates are using it.
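As a hedged illustration of the capabilities listed above, a minimal cloud-config could look like this (all values are placeholders):

```yaml
#cloud-config
hostname: tmp-app1
ssh_authorized_keys:
  - ssh-rsa AAAA....kuGgQ me@mypc
growpart:
  mode: auto
resize_rootfs: true
users:
  - default
```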
@@ -40,19 +40,19 @@ Finally, create the VM:
Now start the VM and SSH to its IP:
- **the system has the right VM hostname** (from VM name)
- you don't need to use a password to access it (thanks to your SSH key):
```
$ ssh centos@192.168.100.226
[centos@tmp-app1 ~]$
```
The default `cloud-init` configuration can allow you to become a sudoer directly:
```
[centos@tmp-app1 ~]$ sudo -s
[root@tmp-app1 centos]#
```
Check the root file system size: indeed, **it was automatically increased** to what you need:
Xen Orchestra 5.20 introduces new tools to manage backup concurrency. Below is an overview of the backup process and ways you can control concurrency in your own environment.
## Backup process
@@ -15,14 +14,14 @@ Xen Orchestra will fetch the content of the snapshot made in step 1. This operat
### 3. Snapshot removal
When it's done exporting, we'll remove the snapshot. Note: this operation will trigger a coalesce on your storage in the near future (a coalesce is required every time a snapshot is removed).
## Concurrency
Let's say you want to backup 50 VMs (each with 1x disk) at 3:00 AM. There are **2 different strategies**:
1. backup VM #1 (snapshot, export, delete snapshots) **then** backup VM #2 -> _fully sequential strategy_
2. snapshot all VMs, **then** export all snapshots, **then** delete all snapshots for finished exports -> _fully parallel strategy_
The first purely sequential strategy will lead to a big problem: **you can't predict when a snapshot of your data will occur**. Because you can't predict the first VM export time (let's say 3 hours), then your second VM will have its snapshot taken 3 hours later, at 6 AM. We assume that's not what you meant when you specified "backup everything at 3 AM". You would end up with data from 6 AM (and later) for other VMs.
@@ -32,19 +31,19 @@ So what's the best choice? Continue below to learn how to best configure concurr
### Best choice
By default the _parallel strategy_ is, on paper, the most logical one. But we need to give it some limits on concurrency.
> Note: Xen Orchestra can be connected to multiple pools at once. So the concurrency number applies **per pool**.
Each step has its own concurrency to fit its requirements:
- **snapshot process** needs to be performed with the lowest concurrency possible. 2 is a good compromise: one snapshot is fast, but a stuck snapshot won't block the whole job. That's why a concurrency of 2 is not too bad on your storage. Basically, at 3 AM, we'll do all the VM snapshots needed, 2 at a time.
- **disk export process** is bottlenecked by XCP-ng/XenServer - so to get the most of it, you can use up to 12 in parallel. As soon as a snapshot is done, the export process will start, until reaching 12 at once. Then as soon as one of those 12 is finished, another one will start until there is nothing more to export.
- **VM export process:** the 12 disk export limit mentioned above applies to VDI exports, which happen during delta exports. For full VM exports (for example, for full backup job types), there is a built in limit of 2. This means if you have a full backup job of 6 VMs, only 2 will be exported at once.
- **snapshot deletion** can't happen all at once because the previous step durations are random - no need to implement concurrency on this one.
This is how it currently works in Xen Orchestra. But sometimes, you also want to have _sequential_ backups combined with the _parallel strategy_. That's why we introduced a sequential option in the advanced section of backup-ng:
> Note: 0 means it will be fully **parallel** for all VMs.
If your job contains 50 VMs for example, you could specify a sequential backup with a limit of "25 at once" (enter 25 in the concurrency field). This means at 3 AM, we'll do 25 snapshots (2 at a time), then exports. As soon as the first VM backup is completely finished (snapshot removed), we'll start the 26th and so on, always keeping a max of 25 VM backups going in parallel.
**Warning!** A non-privileged user requires the use of `sudo` to mount NFS shares. See [installation from the sources](from_the_sources.md).
### HTTP listen address and port
By default, XO-server listens on all addresses (0.0.0.0) and runs on port 80. If you need to, you can change this in the `# Basic HTTP` section:
```toml
hostname = '0.0.0.0'
port = 80
```
@@ -31,7 +31,7 @@ port = 80
XO-server can also run in HTTPS (you can run HTTP and HTTPS at the same time) - just modify what's needed in the `# Basic HTTPS` section, this time with the certificates/keys you need and their path:
```toml
hostname = '0.0.0.0'
port = 443
certificate = './certificate.pem'
key = './key.pem'
@@ -43,10 +43,10 @@ key = './key.pem'
If you want to redirect everything to HTTPS, you can modify the configuration like this:
```toml
# If set to true, all HTTP traffic will be redirected to the first HTTPs configuration.
redirectToHttps = true
```
This should be written just before the `mount` option, inside the `http:` block.
@@ -77,6 +77,8 @@ Don't forget to reload `systemd` conf and restart `xo-server`:
# systemctl restart xo-server.service
```
**Note:** The `--use-openssl-ca` option is ignored by Node if Xen Orchestra is run with Linux capabilities. Capabilities are commonly used to bind applications to privileged ports (< 1024), i.e. `CAP_NET_BIND_SERVICE`. Local NAT rules (`iptables`) or a reverse proxy would be required to use privileged ports and a custom certificate authority.
> WARNING: it works only with XenServer 6.5 or later
This feature is a continuous replication system for your XenServer VMs **without any storage vendor lock-in**. You can replicate a VM every _X_ minutes/hours to any storage repository. It could be to a distant XenServer host or just another local storage target.
This feature covers multiple objectives:
- no storage vendor lock-in
- no configuration (agent-less)
- low Recovery Point Objective, from 10 minutes to 24 hours (or more)
- flexibility
- no intermediate storage needed
- atomic replication
- efficient DR (disaster recovery) process
If you lose your main pool, you can start the copy on the other side, with very recent data.
@@ -36,18 +36,25 @@ To protect the replication, we removed the possibility to boot your copied VM di
## Manual initial seed
**If you can't transfer the first backup through your network because it's too large**, you can make a seed locally. In order to do this, follow this procedure (until we make it accessible directly in XO).
> This is **only** if you need to make the initial copy without making the whole transfer through your network. Otherwise, **you don't need this**. These instructions are for Backup-NG jobs, and will not work to seed a legacy backup job. Please migrate any legacy jobs to Backup-NG!
### Job creation
Create the Continuous Replication backup job, and leave it disabled for now. On the main Backup-NG page, copy the job's `backupJobId` by hovering to the left of the shortened ID and clicking the copy to clipboard button:
Copy it somewhere temporarily. Now we need to also copy the ID of the job schedule, `backupScheduleId`. Do this by hovering over the schedule name in the same panel as before, and clicking the copy to clipboard button. Keep it with the `backupJobId` you copied previously as we will need them all later:
### Seed creation
Manually create a snapshot on the VM being backed up, then copy this snapshot UUID, `snapshotUuid`, from the snapshot panel of the VM:
> DO NOT ever delete or alter this snapshot, feel free to rename it to make that clear.
@@ -55,7 +62,9 @@ Manually create a snapshot on the VM to backup, and note its UUID as `snapshotUu
Export this snapshot to a file, then import it on the target SR.
We need to copy the UUID of this newly created VM as well, `targetVmUuid`:
> Do NOT start this VM or it will break the Continuous Replication job! You can rename this VM to make this easier to remember.
@@ -66,7 +75,7 @@ The XOA backup system requires metadata to correctly associate the source snapsh
First install the tool (all the following is done from the XOA VM CLI):
```
sudo npm i -g --unsafe-perm @xen-orchestra/cr-seed-cli
```
Here is an example of how the utility expects the UUIDs and info passed to it:
Your backup job should now be working correctly! Manually run the job the first time to check if everything is OK. Then, enable the job. **Now, only the deltas are sent, your initial seed saved you a LOT of time if you have a slow network.**
### Failover process
In the situation where you need to failover to your destination host, you simply need to start all your VMs on the destination host.
> Note: If you want to start a VM on your destination host without breaking the CR jobs on the other side, you will need to make a copy of the VM and start the copy. Otherwise, you will be asked if you would like to force start the VMs.
> WARNING: Delta backups are only available on XenServer 6.5 or later
You can export only the delta (difference) between your current VM disks and a previous snapshot (called here the _reference_). They are called _continuous_ because you'll **never export a full backup** after the first one.
## Introduction
@@ -10,16 +10,16 @@ Full backups can be represented like this:
You can imagine making your first initial full backup during a weekend, and then only delta backups every night. It combines the flexibility of snapshots and the power of full backups, because:
- deltas are stored somewhere other than the current VM storage
- they are small
- quick to create
- easy to restore
So, if you want to rollback your VM to a previous state, the cost is only one snapshot on your SR (far less than the [rolling snapshot](rolling_snapshot.md) mechanism).
@@ -41,6 +41,10 @@ This way we can go "forward" and remove this oldest VHD after the merge:
Just go into your "Backup" view, and select Delta Backup. Then, it's the same as a normal backup.
## Snapshots
Unlike other types of backup jobs, which delete the associated snapshot once the job is done and it has been exported, delta backups always keep a snapshot of every VM in the backup job and use it for the delta. Do not delete these snapshots!
## Exclude disks
During a delta backup job, you can avoid saving all disks of the VM. Doing so is trivial: just edit the VM disk name and add `[NOBAK]` before the current name, e.g. `data-disk` will become `[NOBAK] data-disk` (with a space or not, it doesn't matter).
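As an example, renaming the disk from the host CLI could look like this (the UUID is a placeholder; the same rename can be done from the VM's disk tab in XO):

```
xe vdi-param-set uuid=<VDI UUID> name-label='[NOBAK] data-disk'
```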
> You need to be logged in to make a purchase. If you don't have an account, please [register here](https://xen-orchestra.com/#!/signup).
@@ -18,7 +17,7 @@ From your account page, click on the purchase menu, then select the edition you
## Purchase options
The second step is to select your purchase option:
- Subscription: only available with a credit card payment. Choose this option for a monthly payment or a yearly payment **renewed automatically** each year.
@@ -32,16 +31,17 @@ Then you need to fill in your information and select **"Buy for my own use"** (d
## Billing information
You need to complete all the required information on this page in order to move forward.
> Note: If you are part of the Eurozone, you will need to provide a valid EU VAT number in order to proceed to payment. Transactions between companies inside the Eurozone are VAT free.
> Transactions outside the Eurozone are VAT free.
## Select your payment mode
Credit Card, Wire transfer or Bank check are the three payment methods available on our store. Some methods can be unavailable depending on the purchase option you selected during step one.
> Wire transfer is not available for monthly and yearly subscription - Credit Card is not available for paid period.
@@ -8,10 +8,10 @@ This category is dedicated to creating a VM with Docker support.
## Prerequisites
- XenServer 6.5 or higher
- Plugin installation (see below)
- CoreOS ISO ([download it here](http://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso)) for CoreOS installations
- Xen Orchestra 4.10 or newer
## Docker plugin installation
@@ -49,8 +49,8 @@ That's it! You can now enjoy Docker support!
There are two ways to use the newly exposed Docker features:
- Install a CoreOS VM
- Transform an existing VM into a supported Docker VM
### CoreOS
@@ -80,7 +80,6 @@ And replace it with your actual SSH public key:
`- ssh-rsa AAAA....kuGgQ me@mypc`
The rest of the configuration is identical to any other VM. Just click on "Create VM" and you are done. After a few seconds, your VM will be ready. Nothing else to do!
You can see it thanks to the docker logo in the main view:
@@ -165,10 +164,10 @@ During the VM creation, the XSContainer plugin will create an extra disk: "Autom
Basically, it reads configuration during the boot, allowing:
- SSH key management for newly created VM/instances
In our case, it's used by the XSContainer plugin to allow host communication to the Docker daemon running in the VM, thus exposing Docker commands outside of the VM.
@@ -178,9 +177,9 @@ You can also use the XSContainer plugin to "transform" an existing VM into a "Do
You need to have the following installed inside the VM:
- Docker
- openssh-server
- ncat
For Debian/Ubuntu-like distros: `apt-get install docker.io openssh-server nmap`. For RHEL and derivatives (CentOS...): `yum install docker openssh-server nmap-ncat`.