Mirror of https://github.com/libvirt/libvirt.git
qemu: Allow all query commands to be run during long jobs
Query commands are safe to call during long running jobs (such as migration). This patch makes them all work without the need to special-case every single one of them.

The patch introduces a new job.asyncCond condition and an associated job.asyncJob field, which are dedicated to asynchronous (from the qemu monitor point of view) jobs that can take an arbitrarily long time to finish while the qemu monitor is still usable for other commands. The existing job.active (and its job.cond condition) is used for all other synchronous jobs, including the commands run during an async job.

The locking scheme is changed to use these two conditions. While an asyncJob is active, only an allowed set of synchronous jobs may run (the set can differ per asyncJob), so any method that talks to the qemu monitor needs to check whether it is allowed to execute during the current asyncJob (if any). Once the check passes, the method acquires job.cond as usual to ensure no other command is running. Since the domain object lock is released during that time, an asyncJob could have been started in the meantime, so the method needs to recheck the first condition. Then, normal jobs set job.active while asynchronous jobs set job.asyncJob and optionally change the list of allowed job groups.

Since asynchronous jobs only set job.asyncJob, other allowed commands can still run while the domain object is unlocked (when communicating with a remote libvirtd or sleeping). To protect its own internal synchronous commands, an asynchronous job needs to start a special nested job before entering the qemu monitor. The nested job doesn't check asyncJob; it only acquires job.cond and sets job.active to block other jobs.
parent 24f717ac22
commit 361842881e
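The acquisition logic the message describes boils down to two condition variables guarding two fields. Below is a minimal, self-contained sketch of the normal-job path, using plain pthreads instead of libvirt's virCond wrappers and simplified names (the real implementation is qemuDomainObjBeginJobInternal in the diff that follows); it is an illustration, not the actual libvirt code:

    #include <pthread.h>
    #include <stdbool.h>

    enum job { JOB_NONE = 0, JOB_QUERY, JOB_DESTROY, JOB_SUSPEND, JOB_MODIFY };
    #define MASK(j) (1 << ((j) - 1))

    struct jobs {
        pthread_mutex_t lock;     /* models the virDomainObjPtr lock */
        pthread_cond_t cond;      /* signalled when a normal job ends */
        pthread_cond_t asyncCond; /* broadcast when an async job ends */
        enum job active;          /* currently running normal job */
        int asyncJob;             /* nonzero while an async job runs */
        unsigned long mask;       /* normal jobs the async job allows */
    };

    static bool allowed(struct jobs *j, enum job job)
    {
        return !j->asyncJob || (j->mask & MASK(job)) != 0;
    }

    /* Acquire a normal job; called with j->lock held. */
    static void begin_job(struct jobs *j, enum job job)
    {
    retry:
        while (!allowed(j, job))          /* incompatible with running async job */
            pthread_cond_wait(&j->asyncCond, &j->lock);
        while (j->active != JOB_NONE)     /* wait for the current normal job */
            pthread_cond_wait(&j->cond, &j->lock);
        if (!allowed(j, job))             /* an async job slipped in; recheck */
            goto retry;
        j->active = job;
    }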
src/qemu/THREADS.txt:

@@ -49,17 +49,39 @@ There are a number of locks on various objects

- * qemuMonitorPrivatePtr: Job condition
+ * qemuMonitorPrivatePtr: Job conditions

    Since virDomainObjPtr lock must not be held during sleeps, the job
-   condition provides additional protection for code making updates.
+   conditions provide additional protection for code making updates.
+
+   Qemu driver uses two kinds of job conditions: asynchronous and
+   normal.
+
+   Asynchronous job condition is used for long running jobs (such as
+   migration) that consist of several monitor commands and it is
+   desirable to allow calling a limited set of other monitor commands
+   while such job is running. This allows clients to, e.g., query
+   statistical data, cancel the job, or change parameters of the job.
+
+   Normal job condition is used by all other jobs to get exclusive
+   access to the monitor and also by every monitor command issued by an
+   asynchronous job. When acquiring normal job condition, the job must
+   specify what kind of action it is about to take and this is checked
+   against the allowed set of jobs in case an asynchronous job is
+   running. If the job is incompatible with current asynchronous job,
+   it needs to wait until the asynchronous job ends and try to acquire
+   the job again.

    Immediately after acquiring the virDomainObjPtr lock, any method
-   which intends to update state must acquire the job condition. The
-   virDomainObjPtr lock is released while blocking on this condition
-   variable. Once the job condition is acquired, a method can safely
-   release the virDomainObjPtr lock whenever it hits a piece of code
-   which may sleep/wait, and re-acquire it after the sleep/wait.
+   which intends to update state must acquire either asynchronous or
+   normal job condition. The virDomainObjPtr lock is released while
+   blocking on these condition variables. Once the job condition is
+   acquired, a method can safely release the virDomainObjPtr lock
+   whenever it hits a piece of code which may sleep/wait, and
+   re-acquire it after the sleep/wait. Whenever an asynchronous job
+   wants to talk to the monitor, it needs to acquire nested job (a
+   special kind of normal job) to obtain exclusive access to the
+   monitor.

    Since the virDomainObjPtr lock was dropped while waiting for the
    job condition, it is possible that the domain is no longer active
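The nested job mentioned at the end of this hunk works as sketched below; this condenses the qemuDomainObjEnterMonitorInternal change further down in this commit (error handling abbreviated):

    /* Called with the domain object locked.  Inside an async job, grab a
     * nested normal job first so monitor access stays exclusive. */
    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
        if (qemuDomainObjBeginNestedJob(obj) < 0)
            return -1;                    /* could not acquire job.cond */
        if (!virDomainObjIsActive(obj))
            return -1;                    /* domain died while waiting */
    }
    qemuMonitorLock(priv->mon);
    /* ... issue monitor commands; ExitMonitor ends the nested job ... */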
@@ -111,31 +133,74 @@ To lock the virDomainObjPtr


-To acquire the job mutex
+To acquire the normal job condition

   qemuDomainObjBeginJob()           (if driver is unlocked)
     - Increments ref count on virDomainObjPtr
-    - Wait qemuDomainObjPrivate condition 'jobActive != 0' using
-      virDomainObjPtr mutex
-    - Sets jobActive to 1
+    - Waits until the job is compatible with current async job or no
+      async job is running
+    - Waits job.cond condition 'job.active != 0' using virDomainObjPtr
+      mutex
+    - Rechecks if the job is still compatible and repeats waiting if it
+      isn't
+    - Sets job.active to the job type

   qemuDomainObjBeginJobWithDriver() (if driver needs to be locked)
-    - Unlocks driver
     - Increments ref count on virDomainObjPtr
-    - Wait qemuDomainObjPrivate condition 'jobActive != 0' using
-      virDomainObjPtr mutex
-    - Sets jobActive to 1
+    - Unlocks driver
+    - Waits until the job is compatible with current async job or no
+      async job is running
+    - Waits job.cond condition 'job.active != 0' using virDomainObjPtr
+      mutex
+    - Rechecks if the job is still compatible and repeats waiting if it
+      isn't
+    - Sets job.active to the job type
     - Unlocks virDomainObjPtr
     - Locks driver
     - Locks virDomainObjPtr

-  NB: this variant is required in order to comply with lock ordering rules
-  for virDomainObjPtr vs driver
+  NB: this variant is required in order to comply with lock ordering
+  rules for virDomainObjPtr vs driver


   qemuDomainObjEndJob()
-    - Set jobActive to 0
-    - Signal on qemuDomainObjPrivate condition
+    - Sets job.active to 0
+    - Signals on job.cond condition
+    - Decrements ref count on virDomainObjPtr
+
+
+To acquire the asynchronous job condition
+
+  qemuDomainObjBeginAsyncJob()            (if driver is unlocked)
+    - Increments ref count on virDomainObjPtr
+    - Waits until no async job is running
+    - Waits job.cond condition 'job.active != 0' using virDomainObjPtr
+      mutex
+    - Rechecks if any async job was started while waiting on job.cond
+      and repeats waiting in that case
+    - Sets job.asyncJob to the asynchronous job type
+
+  qemuDomainObjBeginAsyncJobWithDriver()  (if driver needs to be locked)
+    - Increments ref count on virDomainObjPtr
+    - Unlocks driver
+    - Waits until no async job is running
+    - Waits job.cond condition 'job.active != 0' using virDomainObjPtr
+      mutex
+    - Rechecks if any async job was started while waiting on job.cond
+      and repeats waiting in that case
+    - Sets job.asyncJob to the asynchronous job type
+    - Unlocks virDomainObjPtr
+    - Locks driver
+    - Locks virDomainObjPtr
+
+  NB: this variant is required in order to comply with lock ordering
+  rules for virDomainObjPtr vs driver
+
+
+  qemuDomainObjEndAsyncJob()
+    - Sets job.asyncJob to 0
+    - Broadcasts on job.asyncCond condition
     - Decrements ref count on virDomainObjPtr
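Continuing the pthreads sketch from the top of this page (same simplified types; again illustrative rather than the real code), the asynchronous variant follows the steps just listed: wait until no async job runs, wait for the normal-job slot, recheck, then claim job.asyncJob:

    /* Acquire the async job slot; called with j->lock held. */
    static void begin_async_job(struct jobs *j, int asyncJob,
                                unsigned long allowed_mask)
    {
    retry:
        while (j->asyncJob)               /* one async job at a time */
            pthread_cond_wait(&j->asyncCond, &j->lock);
        while (j->active != JOB_NONE)     /* let the running job finish */
            pthread_cond_wait(&j->cond, &j->lock);
        if (j->asyncJob)                  /* lost the race; wait again */
            goto retry;
        j->asyncJob = asyncJob;
        j->mask = allowed_mask;           /* e.g. MASK(JOB_QUERY) | MASK(JOB_DESTROY) */
    }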
@@ -152,6 +217,11 @@ To acquire the QEMU monitor lock

   NB: caller must take care to drop the driver lock if necessary

+  These functions automatically begin/end nested job if called inside an
+  asynchronous job. The caller must then check the return value of
+  qemuDomainObjEnterMonitor to detect if domain died while waiting on
+  the nested job.
+

 To acquire the QEMU monitor lock with the driver lock held
@@ -167,6 +237,11 @@ To acquire the QEMU monitor lock with the driver lock held

   NB: caller must take care to drop the driver lock if necessary

+  These functions automatically begin/end nested job if called inside an
+  asynchronous job. The caller must then check the return value of
+  qemuDomainObjEnterMonitorWithDriver to detect if domain died while
+  waiting on the nested job.
+

 To keep a domain alive while waiting on a remote command, starting
 with the driver lock held
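Inside an async job the return value therefore matters; a caller would follow this pattern (a sketch in the style of the design patterns below, with error handling reduced to a goto):

    if (qemuDomainObjEnterMonitor(obj) < 0) {
        /* domain died while waiting on the nested job */
        goto endjob;
    }
    qemuMonitorXXXX(priv->mon);
    qemuDomainObjExitMonitor(obj);

Outside an async job the call cannot fail this way, which is why the plain synchronous callers below simply wrap it in ignore_value().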
@@ -232,7 +307,7 @@ Design patterns
      obj = virDomainFindByUUID(driver->domains, dom->uuid);
      qemuDriverUnlock(driver);

-     qemuDomainObjBeginJob(obj);
+     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

      ...do work...
@@ -253,12 +328,12 @@ Design patterns
      obj = virDomainFindByUUID(driver->domains, dom->uuid);
      qemuDriverUnlock(driver);

-     qemuDomainObjBeginJob(obj);
+     qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);

      ...do prep work...

      if (virDomainObjIsActive(vm)) {
-         qemuDomainObjEnterMonitor(obj);
+         ignore_value(qemuDomainObjEnterMonitor(obj));
          qemuMonitorXXXX(priv->mon);
          qemuDomainObjExitMonitor(obj);
      }
@@ -280,12 +355,12 @@ Design patterns
      qemuDriverLock(driver);
      obj = virDomainFindByUUID(driver->domains, dom->uuid);

-     qemuDomainObjBeginJobWithDriver(obj);
+     qemuDomainObjBeginJobWithDriver(obj, QEMU_JOB_TYPE);

      ...do prep work...

      if (virDomainObjIsActive(vm)) {
-         qemuDomainObjEnterMonitorWithDriver(driver, obj);
+         ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, obj));
          qemuMonitorXXXX(priv->mon);
          qemuDomainObjExitMonitorWithDriver(driver, obj);
      }
@@ -297,7 +372,7 @@ Design patterns
      qemuDriverUnlock(driver);


- * Coordinating with a remote server for migraion
+ * Running asynchronous job

      virDomainObjPtr obj;
      qemuDomainObjPrivatePtr priv;
@@ -305,7 +380,47 @@ Design patterns
      qemuDriverLock(driver);
      obj = virDomainFindByUUID(driver->domains, dom->uuid);

-     qemuDomainObjBeginJobWithDriver(obj);
+     qemuDomainObjBeginAsyncJobWithDriver(obj, QEMU_ASYNC_JOB_TYPE);
+     qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
+
+     ...do prep work...
+
+     if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+         /* domain died in the meantime */
+         goto error;
+     }
+     ...start qemu job...
+     qemuDomainObjExitMonitorWithDriver(driver, obj);
+
+     while (!finished) {
+         if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+             /* domain died in the meantime */
+             goto error;
+         }
+         ...monitor job progress...
+         qemuDomainObjExitMonitorWithDriver(driver, obj);
+
+         virDomainObjUnlock(obj);
+         sleep(aWhile);
+         virDomainObjLock(obj);
+     }
+
+     ...do final work...
+
+     qemuDomainObjEndAsyncJob(obj);
+     virDomainObjUnlock(obj);
+     qemuDriverUnlock(driver);
+
+
+ * Coordinating with a remote server for migration
+
+     virDomainObjPtr obj;
+     qemuDomainObjPrivatePtr priv;
+
+     qemuDriverLock(driver);
+     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+
+     qemuDomainObjBeginAsyncJobWithDriver(obj, QEMU_ASYNC_JOB_TYPE);

      ...do prep work...
@@ -322,7 +437,7 @@ Design patterns

      ...do final work...

-     qemuDomainObjEndJob(obj);
+     qemuDomainObjEndAsyncJob(obj);
      virDomainObjUnlock(obj);
      qemuDriverUnlock(driver);
|
@ -87,8 +87,14 @@ qemuDomainObjInitJob(qemuDomainObjPrivatePtr priv)
|
|||||||
if (virCondInit(&priv->job.cond) < 0)
|
if (virCondInit(&priv->job.cond) < 0)
|
||||||
return -1;
|
return -1;
|
||||||
|
|
||||||
|
if (virCondInit(&priv->job.asyncCond) < 0) {
|
||||||
|
ignore_value(virCondDestroy(&priv->job.cond));
|
||||||
|
return -1;
|
||||||
|
}
|
||||||
|
|
||||||
if (virCondInit(&priv->job.signalCond) < 0) {
|
if (virCondInit(&priv->job.signalCond) < 0) {
|
||||||
ignore_value(virCondDestroy(&priv->job.cond));
|
ignore_value(virCondDestroy(&priv->job.cond));
|
||||||
|
ignore_value(virCondDestroy(&priv->job.asyncCond));
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -101,6 +107,15 @@ qemuDomainObjResetJob(qemuDomainObjPrivatePtr priv)
     struct qemuDomainJobObj *job = &priv->job;

     job->active = QEMU_JOB_NONE;
+}
+
+static void
+qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
+{
+    struct qemuDomainJobObj *job = &priv->job;
+
+    job->asyncJob = QEMU_ASYNC_JOB_NONE;
+    job->mask = DEFAULT_JOB_MASK;
     job->start = 0;
     memset(&job->info, 0, sizeof(job->info));
     job->signals = 0;
@@ -111,6 +126,7 @@ static void
 qemuDomainObjFreeJob(qemuDomainObjPrivatePtr priv)
 {
     ignore_value(virCondDestroy(&priv->job.cond));
+    ignore_value(virCondDestroy(&priv->job.asyncCond));
     ignore_value(virCondDestroy(&priv->job.signalCond));
 }
@@ -509,12 +525,31 @@ qemuDomainObjSetJob(virDomainObjPtr obj,
 }

 void
-qemuDomainObjDiscardJob(virDomainObjPtr obj)
+qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
+                             unsigned long long allowedJobs)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;

-    qemuDomainObjResetJob(priv);
-    qemuDomainObjSetJob(obj, QEMU_JOB_NONE);
+    if (!priv->job.asyncJob)
+        return;
+
+    priv->job.mask = allowedJobs | JOB_MASK(QEMU_JOB_DESTROY);
+}
+
+void
+qemuDomainObjDiscardAsyncJob(virDomainObjPtr obj)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+
+    if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
+        qemuDomainObjResetJob(priv);
+    qemuDomainObjResetAsyncJob(priv);
+}
+
+static bool
+qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, enum qemuDomainJob job)
+{
+    return !priv->job.asyncJob || (priv->job.mask & JOB_MASK(job)) != 0;
 }

 /* Give up waiting for mutex after 30 seconds */
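As an illustration of how an async job would use the new mask (the chosen mask here is hypothetical; note that qemuDomainObjSetAsyncJobMask always adds QEMU_JOB_DESTROY back in):

    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
                                             QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
        goto cleanup;
    /* while migrating, let only query jobs (plus destroy) run concurrently */
    qemuDomainObjSetAsyncJobMask(vm, JOB_MASK(QEMU_JOB_QUERY));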
@@ -527,11 +562,14 @@ qemuDomainObjDiscardJob(virDomainObjPtr obj)
 static int
 qemuDomainObjBeginJobInternal(struct qemud_driver *driver,
                               bool driver_locked,
-                              virDomainObjPtr obj)
+                              virDomainObjPtr obj,
+                              enum qemuDomainJob job,
+                              enum qemuDomainAsyncJob asyncJob)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
     unsigned long long now;
     unsigned long long then;
+    bool nested = job == QEMU_JOB_ASYNC_NESTED;

     if (virTimeMs(&now) < 0)
         return -1;
@@ -541,27 +579,31 @@ qemuDomainObjBeginJobInternal(struct qemud_driver *driver,
     if (driver_locked)
         qemuDriverUnlock(driver);

-    while (priv->job.active) {
-        if (virCondWaitUntil(&priv->job.cond, &obj->lock, then) < 0) {
-            if (errno == ETIMEDOUT)
-                qemuReportError(VIR_ERR_OPERATION_TIMEOUT,
-                                "%s", _("cannot acquire state change lock"));
-            else
-                virReportSystemError(errno,
-                                     "%s", _("cannot acquire job mutex"));
-            if (driver_locked) {
-                virDomainObjUnlock(obj);
-                qemuDriverLock(driver);
-                virDomainObjLock(obj);
-            }
-            /* Safe to ignore value since ref count was incremented above */
-            ignore_value(virDomainObjUnref(obj));
-            return -1;
-        }
+retry:
+    while (!nested && !qemuDomainJobAllowed(priv, job)) {
+        if (virCondWaitUntil(&priv->job.asyncCond, &obj->lock, then) < 0)
+            goto error;
     }

+    while (priv->job.active) {
+        if (virCondWaitUntil(&priv->job.cond, &obj->lock, then) < 0)
+            goto error;
+    }
+
+    /* No job is active but a new async job could have been started while obj
+     * was unlocked, so we need to recheck it. */
+    if (!nested && !qemuDomainJobAllowed(priv, job))
+        goto retry;
+
     qemuDomainObjResetJob(priv);
-    qemuDomainObjSetJob(obj, QEMU_JOB_UNSPECIFIED);
-    priv->job.start = now;
+
+    if (job != QEMU_JOB_ASYNC) {
+        priv->job.active = job;
+    } else {
+        qemuDomainObjResetAsyncJob(priv);
+        priv->job.asyncJob = asyncJob;
+        priv->job.start = now;
+    }

     if (driver_locked) {
         virDomainObjUnlock(obj);
@@ -570,6 +612,22 @@ qemuDomainObjBeginJobInternal(struct qemud_driver *driver,
     }

     return 0;
+
+error:
+    if (errno == ETIMEDOUT)
+        qemuReportError(VIR_ERR_OPERATION_TIMEOUT,
+                        "%s", _("cannot acquire state change lock"));
+    else
+        virReportSystemError(errno,
+                             "%s", _("cannot acquire job mutex"));
+    if (driver_locked) {
+        virDomainObjUnlock(obj);
+        qemuDriverLock(driver);
+        virDomainObjLock(obj);
+    }
+    /* Safe to ignore value since ref count was incremented above */
+    ignore_value(virDomainObjUnref(obj));
+    return -1;
 }

 /*
@@ -581,9 +639,17 @@ qemuDomainObjBeginJobInternal(struct qemud_driver *driver,
 * Upon successful return, the object will have its ref count increased,
 * successful calls must be followed by EndJob eventually
 */
-int qemuDomainObjBeginJob(virDomainObjPtr obj)
+int qemuDomainObjBeginJob(virDomainObjPtr obj, enum qemuDomainJob job)
 {
-    return qemuDomainObjBeginJobInternal(NULL, false, obj);
+    return qemuDomainObjBeginJobInternal(NULL, false, obj, job,
+                                         QEMU_ASYNC_JOB_NONE);
+}
+
+int qemuDomainObjBeginAsyncJob(virDomainObjPtr obj,
+                               enum qemuDomainAsyncJob asyncJob)
+{
+    return qemuDomainObjBeginJobInternal(NULL, false, obj, QEMU_JOB_ASYNC,
+                                         asyncJob);
 }

 /*
@@ -597,9 +663,49 @@ int qemuDomainObjBeginJob(virDomainObjPtr obj)
 * successful calls must be followed by EndJob eventually
 */
 int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
-                                    virDomainObjPtr obj)
+                                    virDomainObjPtr obj,
+                                    enum qemuDomainJob job)
 {
-    return qemuDomainObjBeginJobInternal(driver, true, obj);
+    if (job <= QEMU_JOB_NONE || job >= QEMU_JOB_ASYNC) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                        _("Attempt to start invalid job"));
+        return -1;
+    }
+
+    return qemuDomainObjBeginJobInternal(driver, true, obj, job,
+                                         QEMU_ASYNC_JOB_NONE);
+}
+
+int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
+                                         virDomainObjPtr obj,
+                                         enum qemuDomainAsyncJob asyncJob)
+{
+    return qemuDomainObjBeginJobInternal(driver, true, obj, QEMU_JOB_ASYNC,
+                                         asyncJob);
+}
+
+/*
+ * Use this to protect monitor sections within active async job.
+ *
+ * The caller must call qemuDomainObjBeginAsyncJob{,WithDriver} before it can
+ * use this method. Never use this method if you only own non-async job, use
+ * qemuDomainObjBeginJob{,WithDriver} instead.
+ */
+int
+qemuDomainObjBeginNestedJob(virDomainObjPtr obj)
+{
+    return qemuDomainObjBeginJobInternal(NULL, false, obj,
+                                         QEMU_JOB_ASYNC_NESTED,
+                                         QEMU_ASYNC_JOB_NONE);
+}
+
+int
+qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
+                                      virDomainObjPtr obj)
+{
+    return qemuDomainObjBeginJobInternal(driver, true, obj,
+                                         QEMU_JOB_ASYNC_NESTED,
+                                         QEMU_ASYNC_JOB_NONE);
 }

 /*
@@ -616,25 +722,60 @@ int qemuDomainObjEndJob(virDomainObjPtr obj)
     qemuDomainObjPrivatePtr priv = obj->privateData;

     qemuDomainObjResetJob(priv);
-    qemuDomainObjSetJob(obj, QEMU_JOB_NONE);
     virCondSignal(&priv->job.cond);

     return virDomainObjUnref(obj);
 }

-static void
+int
+qemuDomainObjEndAsyncJob(virDomainObjPtr obj)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+
+    qemuDomainObjResetAsyncJob(priv);
+    virCondBroadcast(&priv->job.asyncCond);
+
+    return virDomainObjUnref(obj);
+}
+
+void
+qemuDomainObjEndNestedJob(virDomainObjPtr obj)
+{
+    qemuDomainObjPrivatePtr priv = obj->privateData;
+
+    qemuDomainObjResetJob(priv);
+    virCondSignal(&priv->job.cond);
+
+    /* safe to ignore since the surrounding async job increased the reference
+     * counter as well */
+    ignore_value(virDomainObjUnref(obj));
+}
+
+
+static int
 qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
                                   virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;

+    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
+        if (qemuDomainObjBeginNestedJob(obj) < 0)
+            return -1;
+        if (!virDomainObjIsActive(obj)) {
+            qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                            _("domain is no longer running"));
+            return -1;
+        }
+    }
+
     qemuMonitorLock(priv->mon);
     qemuMonitorRef(priv->mon);
     ignore_value(virTimeMs(&priv->monStart));
     virDomainObjUnlock(obj);
     if (driver)
         qemuDriverUnlock(driver);
+
+    return 0;
 }

 static void
@@ -657,20 +798,24 @@ qemuDomainObjExitMonitorInternal(struct qemud_driver *driver,
     if (refs == 0) {
         priv->mon = NULL;
     }
+
+    if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
+        qemuDomainObjEndNestedJob(obj);
 }

 /*
  * obj must be locked before calling, qemud_driver must be unlocked
  *
  * To be called immediately before any QEMU monitor API call
- * Must have already called qemuDomainObjBeginJob(), and checked
- * that the VM is still active.
+ * Must have already either called qemuDomainObjBeginJob() and checked
+ * that the VM is still active or called qemuDomainObjBeginAsyncJob, in which
+ * case this will call qemuDomainObjBeginNestedJob.
  *
  * To be followed with qemuDomainObjExitMonitor() once complete
  */
-void qemuDomainObjEnterMonitor(virDomainObjPtr obj)
+int qemuDomainObjEnterMonitor(virDomainObjPtr obj)
 {
-    qemuDomainObjEnterMonitorInternal(NULL, obj);
+    return qemuDomainObjEnterMonitorInternal(NULL, obj);
 }

 /* obj must NOT be locked before calling, qemud_driver must be unlocked
@@ -686,14 +831,16 @@ void qemuDomainObjExitMonitor(virDomainObjPtr obj)
 * obj must be locked before calling, qemud_driver must be locked
 *
 * To be called immediately before any QEMU monitor API call
- * Must have already called qemuDomainObjBeginJob().
+ * Must have already either called qemuDomainObjBeginJobWithDriver() and
+ * checked that the VM is still active or called qemuDomainObjBeginAsyncJob,
+ * in which case this will call qemuDomainObjBeginNestedJobWithDriver.
 *
 * To be followed with qemuDomainObjExitMonitorWithDriver() once complete
 */
-void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
-                                         virDomainObjPtr obj)
+int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+                                        virDomainObjPtr obj)
 {
-    qemuDomainObjEnterMonitorInternal(driver, obj);
+    return qemuDomainObjEnterMonitorInternal(driver, obj);
 }

 /* obj must NOT be locked before calling, qemud_driver must be unlocked,
src/qemu/qemu_domain.h:

@@ -36,16 +36,35 @@
      (1 << VIR_DOMAIN_VIRT_KVM) | \
      (1 << VIR_DOMAIN_VIRT_XEN))

+# define JOB_MASK(job)                  (1 << (job - 1))
+# define DEFAULT_JOB_MASK               \
+    (JOB_MASK(QEMU_JOB_QUERY) | JOB_MASK(QEMU_JOB_DESTROY))
+
 /* Only 1 job is allowed at any time
  * A job includes *all* monitor commands, even those just querying
  * information, not merely actions */
 enum qemuDomainJob {
     QEMU_JOB_NONE = 0,  /* Always set to 0 for easy if (jobActive) conditions */
-    QEMU_JOB_UNSPECIFIED,
-    QEMU_JOB_MIGRATION_OUT,
-    QEMU_JOB_MIGRATION_IN,
-    QEMU_JOB_SAVE,
-    QEMU_JOB_DUMP,
+    QEMU_JOB_QUERY,         /* Doesn't change any state */
+    QEMU_JOB_DESTROY,       /* Destroys the domain (cannot be masked out) */
+    QEMU_JOB_SUSPEND,       /* Suspends (stops vCPUs) the domain */
+    QEMU_JOB_MODIFY,        /* May change state */
+
+    /* The following two items must always be the last items */
+    QEMU_JOB_ASYNC,         /* Asynchronous job */
+    QEMU_JOB_ASYNC_NESTED,  /* Normal job within an async job */
+};
+
+/* Async job consists of a series of jobs that may change state. Independent
+ * jobs that do not change state (and possibly others if explicitly allowed by
+ * current async job) are allowed to be run even if async job is active.
+ */
+enum qemuDomainAsyncJob {
+    QEMU_ASYNC_JOB_NONE = 0,
+    QEMU_ASYNC_JOB_MIGRATION_OUT,
+    QEMU_ASYNC_JOB_MIGRATION_IN,
+    QEMU_ASYNC_JOB_SAVE,
+    QEMU_ASYNC_JOB_DUMP,
 };

 enum qemuDomainJobSignals {
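Because JOB_MASK shifts by job - 1, the enum values above map onto mask bits as follows; this standalone check (illustrative, not part of the commit) verifies the arithmetic:

    #include <assert.h>

    enum qemuDomainJob { QEMU_JOB_NONE = 0, QEMU_JOB_QUERY, QEMU_JOB_DESTROY,
                         QEMU_JOB_SUSPEND, QEMU_JOB_MODIFY };

    #define JOB_MASK(job) (1 << (job - 1))
    #define DEFAULT_JOB_MASK \
        (JOB_MASK(QEMU_JOB_QUERY) | JOB_MASK(QEMU_JOB_DESTROY))

    int main(void)
    {
        assert(JOB_MASK(QEMU_JOB_QUERY)   == 0x1);
        assert(JOB_MASK(QEMU_JOB_DESTROY) == 0x2);
        assert(JOB_MASK(QEMU_JOB_SUSPEND) == 0x4);
        assert(JOB_MASK(QEMU_JOB_MODIFY)  == 0x8);
        /* with no explicit mask, an async job admits only queries and destroy,
         * which is exactly what lets query commands run during long jobs */
        assert(DEFAULT_JOB_MASK == 0x3);
        return 0;
    }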
@@ -69,14 +88,16 @@ struct qemuDomainJobSignalsData {
 };

 struct qemuDomainJobObj {
-    virCond cond;                   /* Use in conjunction with main virDomainObjPtr lock */
+    virCond cond;                       /* Use to coordinate jobs */
+    enum qemuDomainJob active;          /* Currently running job */
+
+    virCond asyncCond;                  /* Use to coordinate with async jobs */
+    enum qemuDomainAsyncJob asyncJob;   /* Currently active async job */
+    unsigned long long mask;            /* Jobs allowed during async job */
+    unsigned long long start;           /* When the async job started */
+    virDomainJobInfo info;              /* Async job progress data */

     virCond signalCond; /* Use to coordinate the safe queries during migration */

-    enum qemuDomainJob active;   /* Currently running job */
-
-    unsigned long long start;   /* When the job started */
-    virDomainJobInfo info;      /* Progress data */
     unsigned int signals;       /* Signals for running job */
     struct qemuDomainJobSignalsData signalsData;    /* Signal specific data */
 };
@@ -124,18 +145,43 @@ void qemuDomainEventQueue(struct qemud_driver *driver,
 void qemuDomainSetPrivateDataHooks(virCapsPtr caps);
 void qemuDomainSetNamespaceHooks(virCapsPtr caps);

-int qemuDomainObjBeginJob(virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginJob(virDomainObjPtr obj,
+                          enum qemuDomainJob job)
+    ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginAsyncJob(virDomainObjPtr obj,
+                               enum qemuDomainAsyncJob asyncJob)
+    ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginNestedJob(virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
 int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
-                                    virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
-int qemuDomainObjEndJob(virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+                                    virDomainObjPtr obj,
+                                    enum qemuDomainJob job)
+    ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
+                                         virDomainObjPtr obj,
+                                         enum qemuDomainAsyncJob asyncJob)
+    ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
+                                          virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
+
+int qemuDomainObjEndJob(virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjEndAsyncJob(virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
+void qemuDomainObjEndNestedJob(virDomainObjPtr obj);

 void qemuDomainObjSetJob(virDomainObjPtr obj, enum qemuDomainJob job);
-void qemuDomainObjDiscardJob(virDomainObjPtr obj);
+void qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
+                                  unsigned long long allowedJobs);
+void qemuDomainObjDiscardAsyncJob(virDomainObjPtr obj);

-void qemuDomainObjEnterMonitor(virDomainObjPtr obj);
+int qemuDomainObjEnterMonitor(virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
 void qemuDomainObjExitMonitor(virDomainObjPtr obj);
-void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
-                                         virDomainObjPtr obj);
+int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+                                        virDomainObjPtr obj)
+    ATTRIBUTE_RETURN_CHECK;
 void qemuDomainObjExitMonitorWithDriver(struct qemud_driver *driver,
                                         virDomainObjPtr obj);
 void qemuDomainObjEnterRemoteWithDriver(struct qemud_driver *driver,
src/qemu/qemu_driver.c:

@@ -141,7 +141,8 @@ qemuAutostartDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *opaque)

     virDomainObjLock(vm);
     virResetLastError();
-    if (qemuDomainObjBeginJobWithDriver(data->driver, vm) < 0) {
+    if (qemuDomainObjBeginJobWithDriver(data->driver, vm,
+                                        QEMU_JOB_MODIFY) < 0) {
         err = virGetLastError();
         VIR_ERROR(_("Failed to start job on VM '%s': %s"),
                   vm->def->name,

@@ -1279,7 +1280,7 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,

     def = NULL;

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup; /* XXXX free the 'vm' we created ? */

     if (qemuProcessStart(conn, driver, vm, NULL,

@@ -1348,7 +1349,7 @@ static int qemudDomainSuspend(virDomainPtr dom) {

     priv = vm->privateData;

-    if (priv->job.active == QEMU_JOB_MIGRATION_OUT) {
+    if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) {
         if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PAUSED) {
             VIR_DEBUG("Requesting domain pause on %s",
                       vm->def->name);

@@ -1357,7 +1358,7 @@ static int qemudDomainSuspend(virDomainPtr dom) {
         ret = 0;
         goto cleanup;
     } else {
-        if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+        if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_SUSPEND) < 0)
             goto cleanup;

         if (!virDomainObjIsActive(vm)) {

@@ -1410,7 +1411,7 @@ static int qemudDomainResume(virDomainPtr dom) {
         goto cleanup;
     }

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {
@@ -1466,7 +1467,7 @@ static int qemuDomainShutdown(virDomainPtr dom) {
         goto cleanup;
     }

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {

@@ -1476,7 +1477,7 @@ static int qemuDomainShutdown(virDomainPtr dom) {
     }

     priv = vm->privateData;
-    qemuDomainObjEnterMonitor(vm);
+    ignore_value(qemuDomainObjEnterMonitor(vm));
     ret = qemuMonitorSystemPowerdown(priv->mon);
     qemuDomainObjExitMonitor(vm);

@@ -1516,7 +1517,7 @@ static int qemuDomainReboot(virDomainPtr dom, unsigned int flags) {

 #if HAVE_YAJL
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MONITOR_JSON)) {
-        if (qemuDomainObjBeginJob(vm) < 0)
+        if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
             goto cleanup;

         if (!virDomainObjIsActive(vm)) {

@@ -1525,7 +1526,7 @@ static int qemuDomainReboot(virDomainPtr dom, unsigned int flags) {
             goto endjob;
         }

-        qemuDomainObjEnterMonitor(vm);
+        ignore_value(qemuDomainObjEnterMonitor(vm));
         ret = qemuMonitorSystemPowerdown(priv->mon);
         qemuDomainObjExitMonitor(vm);

@@ -1576,7 +1577,7 @@ static int qemudDomainDestroy(virDomainPtr dom) {
     */
     qemuProcessKill(vm);

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_DESTROY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {
@@ -1689,7 +1690,7 @@ static int qemudDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
         goto cleanup;
     }

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     isActive = virDomainObjIsActive(vm);

@@ -1754,7 +1755,7 @@ static int qemudDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,

     if (flags & VIR_DOMAIN_AFFECT_LIVE) {
         priv = vm->privateData;
-        qemuDomainObjEnterMonitor(vm);
+        ignore_value(qemuDomainObjEnterMonitor(vm));
         r = qemuMonitorSetBalloon(priv->mon, newmem);
         qemuDomainObjExitMonitor(vm);
         virDomainAuditMemory(vm, vm->def->mem.cur_balloon, newmem, "update",

@@ -1826,9 +1827,9 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags)

     priv = vm->privateData;

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorInjectNMI(priv->mon);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
     if (qemuDomainObjEndJob(vm) == 0) {

@@ -1884,12 +1885,12 @@ static int qemudDomainGetInfo(virDomainPtr dom,
         (vm->def->memballoon->model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
         info->memory = vm->def->mem.max_balloon;
     } else if (!priv->job.active) {
-        if (qemuDomainObjBeginJob(vm) < 0)
+        if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
             goto cleanup;
         if (!virDomainObjIsActive(vm))
             err = 0;
         else {
-            qemuDomainObjEnterMonitor(vm);
+            ignore_value(qemuDomainObjEnterMonitor(vm));
             err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
             qemuDomainObjExitMonitor(vm);
         }
@@ -2127,11 +2128,10 @@ static int qemudDomainSaveFlag(struct qemud_driver *driver, virDomainPtr dom,

     priv = vm->privateData;

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+                                             QEMU_ASYNC_JOB_SAVE) < 0)
         goto cleanup;

-    qemuDomainObjSetJob(vm, QEMU_JOB_SAVE);
-
     memset(&priv->job.info, 0, sizeof(priv->job.info));
     priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;

@@ -2298,7 +2298,7 @@ static int qemudDomainSaveFlag(struct qemud_driver *driver, virDomainPtr dom,
                                          VIR_DOMAIN_EVENT_STOPPED,
                                          VIR_DOMAIN_EVENT_STOPPED_SAVED);
         if (!vm->persistent) {
-            if (qemuDomainObjEndJob(vm) > 0)
+            if (qemuDomainObjEndAsyncJob(vm) > 0)
                 virDomainRemoveInactive(&driver->domains,
                                         vm);
             vm = NULL;

@@ -2314,7 +2314,7 @@ endjob:
                 VIR_WARN("Unable to resume guest CPUs after save failure");
             }
         }
-        if (qemuDomainObjEndJob(vm) == 0)
+        if (qemuDomainObjEndAsyncJob(vm) == 0)
             vm = NULL;
     }

@@ -2614,7 +2614,8 @@ static int qemudDomainCoreDump(virDomainPtr dom,
     }
     priv = vm->privateData;

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+                                             QEMU_ASYNC_JOB_DUMP) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {

@@ -2623,8 +2624,6 @@ static int qemudDomainCoreDump(virDomainPtr dom,
         goto endjob;
     }

-    qemuDomainObjSetJob(vm, QEMU_JOB_DUMP);
-
     /* Migrate will always stop the VM, so the resume condition is
        independent of whether the stop command is issued. */
     resume = virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING;

@@ -2670,7 +2669,7 @@ endjob:
         }
     }

-    if (qemuDomainObjEndJob(vm) == 0)
+    if (qemuDomainObjEndAsyncJob(vm) == 0)
         vm = NULL;
     else if ((ret == 0) && (flags & VIR_DUMP_CRASH) && !vm->persistent) {
         virDomainRemoveInactive(&driver->domains,
@@ -2714,7 +2713,7 @@ qemuDomainScreenshot(virDomainPtr dom,

     priv = vm->privateData;

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {

@@ -2744,7 +2743,7 @@ qemuDomainScreenshot(virDomainPtr dom,

     virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);

-    qemuDomainObjEnterMonitor(vm);
+    ignore_value(qemuDomainObjEnterMonitor(vm));
     if (qemuMonitorScreendump(priv->mon, tmp) < 0) {
         qemuDomainObjExitMonitor(vm);
         goto endjob;

@@ -2799,7 +2798,8 @@ static void processWatchdogEvent(void *data, void *opaque)
             goto unlock;
         }

-        if (qemuDomainObjBeginJobWithDriver(driver, wdEvent->vm) < 0) {
+        if (qemuDomainObjBeginAsyncJobWithDriver(driver, wdEvent->vm,
+                                                 QEMU_ASYNC_JOB_DUMP) < 0) {
             VIR_FREE(dumpfile);
             goto unlock;
         }

@@ -2837,7 +2837,7 @@ endjob:
     /* Safe to ignore value since ref count was incremented in
      * qemuProcessHandleWatchdog().
      */
-    ignore_value(qemuDomainObjEndJob(wdEvent->vm));
+    ignore_value(qemuDomainObjEndAsyncJob(wdEvent->vm));

 unlock:
     if (virDomainObjUnref(wdEvent->vm) > 0)

@@ -2854,7 +2854,7 @@ static int qemudDomainHotplugVcpus(virDomainObjPtr vm, unsigned int nvcpus)
     int oldvcpus = vm->def->vcpus;
     int vcpus = oldvcpus;

-    qemuDomainObjEnterMonitor(vm);
+    ignore_value(qemuDomainObjEnterMonitor(vm));

     /* We need different branches here, because we want to offline
      * in reverse order to onlining, so any partial fail leaves us in a

@@ -2940,7 +2940,7 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
         goto cleanup;
     }

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_AFFECT_LIVE)) {
@@ -3762,7 +3762,7 @@ qemuDomainRestore(virConnectPtr conn,
     }
     def = NULL;

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     ret = qemuDomainSaveImageStartVM(conn, driver, vm, &fd, &header, path);

@@ -3856,10 +3856,10 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom,
         /* Don't delay if someone's using the monitor, just use
          * existing most recent data instead */
         if (!priv->job.active) {
-            if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+            if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_QUERY) < 0)
                 goto cleanup;

-            qemuDomainObjEnterMonitorWithDriver(driver, vm);
+            ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
             err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
             qemuDomainObjExitMonitorWithDriver(driver, vm);
             if (qemuDomainObjEndJob(vm) == 0) {

@@ -4094,7 +4094,7 @@ qemudDomainStartWithFlags(virDomainPtr dom, unsigned int flags)
         goto cleanup;
     }

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjIsActive(vm)) {

@@ -4840,7 +4840,7 @@ qemuDomainModifyDeviceFlags(virDomainPtr dom, const char *xml,
         goto cleanup;
     }

-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;

     if (virDomainObjIsActive(vm)) {

@@ -6023,8 +6023,8 @@ qemudDomainBlockStats (virDomainPtr dom,
     }

     priv = vm->privateData;
-    if ((priv->job.active == QEMU_JOB_MIGRATION_OUT)
-        || (priv->job.active == QEMU_JOB_SAVE)) {
+    if ((priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT)
+        || (priv->job.asyncJob == QEMU_ASYNC_JOB_SAVE)) {
         virDomainObjRef(vm);
         while (priv->job.signals & QEMU_JOB_SIGNAL_BLKSTAT)
             ignore_value(virCondWait(&priv->job.signalCond, &vm->lock));

@@ -6040,7 +6040,7 @@ qemudDomainBlockStats (virDomainPtr dom,
         if (virDomainObjUnref(vm) == 0)
             vm = NULL;
     } else {
-        if (qemuDomainObjBeginJob(vm) < 0)
+        if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
             goto cleanup;

         if (!virDomainObjIsActive(vm)) {

@@ -6049,7 +6049,7 @@ qemudDomainBlockStats (virDomainPtr dom,
             goto endjob;
         }

-        qemuDomainObjEnterMonitor(vm);
+        ignore_value(qemuDomainObjEnterMonitor(vm));
         ret = qemuMonitorGetBlockStatsInfo(priv->mon,
                                            disk->info.alias,
                                            &stats->rd_req,
@@ -6152,12 +6152,12 @@ qemudDomainMemoryStats (virDomainPtr dom,
         goto cleanup;
     }

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
         goto cleanup;

     if (virDomainObjIsActive(vm)) {
         qemuDomainObjPrivatePtr priv = vm->privateData;
-        qemuDomainObjEnterMonitor(vm);
+        ignore_value(qemuDomainObjEnterMonitor(vm));
         ret = qemuMonitorGetMemoryStats(priv->mon, stats, nr_stats);
         qemuDomainObjExitMonitor(vm);
     } else {

@@ -6276,7 +6276,7 @@ qemudDomainMemoryPeek (virDomainPtr dom,
         goto cleanup;
     }

-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
         goto cleanup;

     if (!virDomainObjIsActive(vm)) {

@@ -6300,7 +6300,7 @@ qemudDomainMemoryPeek (virDomainPtr dom,
     virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);

     priv = vm->privateData;
-    qemuDomainObjEnterMonitor(vm);
+    ignore_value(qemuDomainObjEnterMonitor(vm));
     if (flags == VIR_MEMORY_VIRTUAL) {
         if (qemuMonitorSaveVirtualMemory(priv->mon, offset, size, tmp) < 0) {
             qemuDomainObjExitMonitor(vm);

@@ -6470,8 +6470,8 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
         virDomainObjIsActive(vm)) {
         qemuDomainObjPrivatePtr priv = vm->privateData;

-        if ((priv->job.active == QEMU_JOB_MIGRATION_OUT)
-            || (priv->job.active == QEMU_JOB_SAVE)) {
+        if ((priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT)
+            || (priv->job.asyncJob == QEMU_ASYNC_JOB_SAVE)) {
            virDomainObjRef(vm);
            while (priv->job.signals & QEMU_JOB_SIGNAL_BLKINFO)
                ignore_value(virCondWait(&priv->job.signalCond, &vm->lock));

@@ -6487,11 +6487,11 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
            if (virDomainObjUnref(vm) == 0)
                vm = NULL;
         } else {
-            if (qemuDomainObjBeginJob(vm) < 0)
+            if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
                 goto cleanup;

             if (virDomainObjIsActive(vm)) {
-                qemuDomainObjEnterMonitor(vm);
+                ignore_value(qemuDomainObjEnterMonitor(vm));
                 ret = qemuMonitorGetBlockExtent(priv->mon,
                                                 disk->info.alias,
                                                 &info->allocation);
@ -7100,7 +7100,7 @@ qemuDomainMigrateConfirm3(virDomainPtr domain,
|
|||||||
goto cleanup;
|
goto cleanup;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
|
if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
|
||||||
goto cleanup;
|
goto cleanup;
|
||||||
|
|
||||||
ret = qemuMigrationConfirm(driver, domain->conn, vm,
|
ret = qemuMigrationConfirm(driver, domain->conn, vm,
|
||||||
@ -7310,7 +7310,7 @@ static int qemuDomainGetJobInfo(virDomainPtr dom,
|
|||||||
priv = vm->privateData;
|
priv = vm->privateData;
|
||||||
|
|
||||||
if (virDomainObjIsActive(vm)) {
|
if (virDomainObjIsActive(vm)) {
|
||||||
if (priv->job.active) {
|
if (priv->job.asyncJob) {
|
||||||
memcpy(info, &priv->job.info, sizeof(*info));
|
memcpy(info, &priv->job.info, sizeof(*info));
|
||||||
|
|
||||||
/* Refresh elapsed time again just to ensure it
|
/* Refresh elapsed time again just to ensure it
|
||||||
@ -7360,7 +7360,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) {
|
|||||||
priv = vm->privateData;
|
priv = vm->privateData;
|
||||||
|
|
||||||
if (virDomainObjIsActive(vm)) {
|
if (virDomainObjIsActive(vm)) {
|
||||||
if (priv->job.active) {
|
if (priv->job.asyncJob) {
|
||||||
VIR_DEBUG("Requesting cancellation of job on vm %s", vm->def->name);
|
VIR_DEBUG("Requesting cancellation of job on vm %s", vm->def->name);
|
||||||
priv->job.signals |= QEMU_JOB_SIGNAL_CANCEL;
|
priv->job.signals |= QEMU_JOB_SIGNAL_CANCEL;
|
||||||
} else {
|
} else {
|
||||||
@ -7414,7 +7414,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
|
|||||||
|
|
||||||
priv = vm->privateData;
|
priv = vm->privateData;
|
||||||
|
|
||||||
if (priv->job.active != QEMU_JOB_MIGRATION_OUT) {
|
if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) {
|
||||||
qemuReportError(VIR_ERR_OPERATION_INVALID,
|
qemuReportError(VIR_ERR_OPERATION_INVALID,
|
||||||
"%s", _("domain is not being migrated"));
|
"%s", _("domain is not being migrated"));
|
||||||
goto cleanup;
|
goto cleanup;
|
||||||
@ -7463,7 +7463,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom,
|
|||||||
|
|
||||||
priv = vm->privateData;
|
priv = vm->privateData;
|
||||||
|
|
||||||
if (priv->job.active != QEMU_JOB_MIGRATION_OUT) {
|
if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) {
|
||||||
qemuReportError(VIR_ERR_OPERATION_INVALID,
|
qemuReportError(VIR_ERR_OPERATION_INVALID,
|
||||||
"%s", _("domain is not being migrated"));
|
"%s", _("domain is not being migrated"));
|
||||||
goto cleanup;
|
goto cleanup;
|
||||||
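
The hunks above fall into two groups: pure query APIs, which now declare themselves as QEMU_JOB_QUERY so they may run alongside an asynchronous job, and migration-tuning APIs, which inspect the new priv->job.asyncJob field instead of the old priv->job.active. A minimal sketch of the query calling convention follows; the wrapper function and the particular monitor call are invented for illustration, only the job and monitor helpers are the real ones touched by this commit, and the snippet assumes libvirt's internal headers:

    /* Hypothetical query API following the pattern in the hunks above. */
    static int
    exampleDomainGetSomeInfo(virDomainObjPtr vm, unsigned long *info)
    {
        int ret = -1;

        /* Name the job type so it can be checked against the set of jobs
         * the current asynchronous job (if any) allows. */
        if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
            goto cleanup;

        if (virDomainObjIsActive(vm)) {
            qemuDomainObjPrivatePtr priv = vm->privateData;

            /* Inside a normal job, entering the monitor cannot fail, so
             * the return value may be ignored. */
            ignore_value(qemuDomainObjEnterMonitor(vm));
            ret = qemuMonitorGetBalloonInfo(priv->mon, info);
            qemuDomainObjExitMonitor(vm);
        }

        if (qemuDomainObjEndJob(vm) == 0)
            vm = NULL;  /* domain object was released */

    cleanup:
        return ret;
    }

The snapshot and monitor-command hunks that follow take the same path, only with QEMU_JOB_MODIFY instead of QEMU_JOB_QUERY.
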
@@ -7656,7 +7656,7 @@ qemuDomainSnapshotCreateActive(virConnectPtr conn,
     bool resume = false;
     int ret = -1;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         return -1;
 
     if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
@@ -7675,7 +7675,7 @@ qemuDomainSnapshotCreateActive(virConnectPtr conn,
         }
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
 
@@ -8001,7 +8001,7 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
 
     vm->current_snapshot = snap;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (snap->def->state == VIR_DOMAIN_RUNNING
@@ -8009,7 +8009,7 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
 
         if (virDomainObjIsActive(vm)) {
             priv = vm->privateData;
-            qemuDomainObjEnterMonitorWithDriver(driver, vm);
+            ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
             rc = qemuMonitorLoadSnapshot(priv->mon, snap->def->name);
             qemuDomainObjExitMonitorWithDriver(driver, vm);
             if (rc < 0)
@@ -8133,7 +8133,7 @@ static int qemuDomainSnapshotDiscard(struct qemud_driver *driver,
         }
         else {
             priv = vm->privateData;
-            qemuDomainObjEnterMonitorWithDriver(driver, vm);
+            ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
             /* we continue on even in the face of error */
             qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
             qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -8272,7 +8272,7 @@ static int qemuDomainSnapshotDelete(virDomainSnapshotPtr snapshot,
         goto cleanup;
     }
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (flags & VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN) {
@@ -8341,9 +8341,9 @@ static int qemuDomainMonitorCommand(virDomainPtr domain, const char *cmd,
 
     hmp = !!(flags & VIR_DOMAIN_QEMU_MONITOR_COMMAND_HMP);
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorArbitraryCommand(priv->mon, cmd, result, hmp);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
     if (qemuDomainObjEndJob(vm) == 0) {
@@ -8414,7 +8414,7 @@ static virDomainPtr qemuDomainAttach(virConnectPtr conn,
 
     def = NULL;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (qemuProcessAttach(conn, driver, vm, pid,
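
The snapshot hunks above and the hotplug hunks below all enter the monitor from inside an already-acquired normal job, where qemuDomainObjEnterMonitorWithDriver() cannot fail; its newly added return value is therefore discarded explicitly. The idiom, in a condensed sketch (ignore_value() comes from gnulib via libvirt's internal headers; the device-add call is just one example taken from these hunks):

    /* ignore_value() documents that the result is discarded on purpose;
     * without it, warn_unused_result builds would complain. */
    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
    ret = qemuMonitorAddDevice(priv->mon, devstr);
    qemuDomainObjExitMonitorWithDriver(driver, vm);
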
@@ -96,7 +96,7 @@ int qemuDomainChangeEjectableMedia(struct qemud_driver *driver,
     if (!(driveAlias = qemuDeviceDriveHostAlias(origdisk, priv->qemuCaps)))
         goto error;
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (disk->src) {
         const char *format = NULL;
         if (disk->type != VIR_DOMAIN_DISK_TYPE_DIR) {
@@ -198,7 +198,7 @@ int qemuDomainAttachPciDiskDevice(struct qemud_driver *driver,
         goto error;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         ret = qemuMonitorAddDrive(priv->mon, drivestr);
         if (ret == 0) {
@@ -295,7 +295,7 @@ int qemuDomainAttachPciControllerDevice(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         ret = qemuMonitorAddDevice(priv->mon, devstr);
     } else {
@@ -440,7 +440,7 @@ int qemuDomainAttachSCSIDisk(struct qemud_driver *driver,
         goto error;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         ret = qemuMonitorAddDrive(priv->mon, drivestr);
         if (ret == 0) {
@@ -542,7 +542,7 @@ int qemuDomainAttachUsbMassstorageDevice(struct qemud_driver *driver,
         goto error;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         ret = qemuMonitorAddDrive(priv->mon, drivestr);
         if (ret == 0) {
@@ -675,7 +675,7 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_NETDEV) &&
         qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         if (qemuMonitorAddNetdev(priv->mon, netstr, tapfd, tapfd_name,
@@ -711,7 +711,7 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
         goto try_remove;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         if (qemuMonitorAddDevice(priv->mon, nicstr) < 0) {
             qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -767,7 +767,7 @@ try_remove:
         char *netdev_name;
         if (virAsprintf(&netdev_name, "host%s", net->info.alias) < 0)
             goto no_memory;
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
         if (qemuMonitorRemoveNetdev(priv->mon, netdev_name) < 0)
             VIR_WARN("Failed to remove network backend for netdev %s",
                      netdev_name);
@@ -780,7 +780,7 @@ try_remove:
         char *hostnet_name;
         if (virAsprintf(&hostnet_name, "host%s", net->info.alias) < 0)
             goto no_memory;
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
         if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
             VIR_WARN("Failed to remove network backend for vlan %d, net %s",
                      vlan, hostnet_name);
@@ -841,14 +841,14 @@ int qemuDomainAttachHostPciDevice(struct qemud_driver *driver,
                                           priv->qemuCaps)))
             goto error;
 
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
         ret = qemuMonitorAddDeviceWithFd(priv->mon, devstr,
                                          configfd, configfd_name);
         qemuDomainObjExitMonitorWithDriver(driver, vm);
     } else {
         virDomainDevicePCIAddress guestAddr;
 
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
         ret = qemuMonitorAddPCIHostDevice(priv->mon,
                                           &hostdev->source.subsys.u.pci,
                                           &guestAddr);
@@ -929,7 +929,7 @@ int qemuDomainAttachHostUsbDevice(struct qemud_driver *driver,
         goto error;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE))
         ret = qemuMonitorAddDevice(priv->mon, devstr);
     else
@@ -1242,7 +1242,7 @@ int qemuDomainDetachPciDiskDevice(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
             qemuDomainObjExitMonitor(vm);
@@ -1338,7 +1338,7 @@ int qemuDomainDetachDiskDevice(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
         qemuDomainObjExitMonitor(vm);
         virDomainAuditDisk(vm, detach, NULL, "detach", false);
@@ -1476,7 +1476,7 @@ int qemuDomainDetachPciControllerDevice(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         if (qemuMonitorDelDevice(priv->mon, detach->info.alias)) {
             qemuDomainObjExitMonitor(vm);
@@ -1571,7 +1571,7 @@ int qemuDomainDetachNetDevice(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
             qemuDomainObjExitMonitor(vm);
@@ -1706,7 +1706,7 @@ int qemuDomainDetachHostPciDevice(struct qemud_driver *driver,
         return -1;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
         ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
     } else {
@@ -1809,7 +1809,7 @@ int qemuDomainDetachHostUsbDevice(struct qemud_driver *driver,
         return -1;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
     virDomainAuditHostdev(vm, detach, "detach", ret == 0);
@@ -1888,7 +1888,7 @@ qemuDomainChangeGraphicsPasswords(struct qemud_driver *driver,
     if (auth->connected)
         connected = virDomainGraphicsAuthConnectedTypeToString(auth->connected);
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorSetPassword(priv->mon,
                                  type,
                                  auth->passwd ? auth->passwd : defaultPasswd,
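
The migration hunks that follow are the consumer side of the job-signal mechanism used by qemuDomainMigrateSetMaxDowntime() and friends above: the API thread only sets a flag plus its payload, and the migration loop applies it the next time it polls. A sketch of both halves (variable names such as downtime are illustrative; the signal flags and signalsData fields appear in the hunks themselves):

    /* API thread: hand the request to the migration loop, which owns
     * the monitor for the duration of the asynchronous job. */
    priv->job.signals |= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
    priv->job.signalsData.migrateDowntime = downtime;

    /* Migration loop: entering the monitor now starts a nested job and
     * can fail, so the result is checked before issuing the command. */
    ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
    if (ret == 0) {
        ret = qemuMonitorSetMigrationDowntime(priv->mon, ms);
        qemuDomainObjExitMonitorWithDriver(driver, vm);
    }
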
@@ -749,9 +749,11 @@ qemuMigrationProcessJobSignals(struct qemud_driver *driver,
     if (priv->job.signals & QEMU_JOB_SIGNAL_CANCEL) {
         priv->job.signals ^= QEMU_JOB_SIGNAL_CANCEL;
         VIR_DEBUG("Cancelling job at client request");
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
-        ret = qemuMonitorMigrateCancel(priv->mon);
-        qemuDomainObjExitMonitorWithDriver(driver, vm);
+        ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (ret == 0) {
+            ret = qemuMonitorMigrateCancel(priv->mon);
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
         if (ret < 0) {
             VIR_WARN("Unable to cancel job");
         }
@@ -766,9 +768,11 @@ qemuMigrationProcessJobSignals(struct qemud_driver *driver,
         priv->job.signals ^= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
         priv->job.signalsData.migrateDowntime = 0;
         VIR_DEBUG("Setting migration downtime to %llums", ms);
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
-        ret = qemuMonitorSetMigrationDowntime(priv->mon, ms);
-        qemuDomainObjExitMonitorWithDriver(driver, vm);
+        ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (ret == 0) {
+            ret = qemuMonitorSetMigrationDowntime(priv->mon, ms);
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
         if (ret < 0)
             VIR_WARN("Unable to set migration downtime");
     } else if (priv->job.signals & QEMU_JOB_SIGNAL_MIGRATE_SPEED) {
@@ -777,21 +781,25 @@ qemuMigrationProcessJobSignals(struct qemud_driver *driver,
         priv->job.signals ^= QEMU_JOB_SIGNAL_MIGRATE_SPEED;
         priv->job.signalsData.migrateBandwidth = 0;
         VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth);
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
-        ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth);
-        qemuDomainObjExitMonitorWithDriver(driver, vm);
+        ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (ret == 0) {
+            ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth);
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
         if (ret < 0)
             VIR_WARN("Unable to set migration speed");
     } else if (priv->job.signals & QEMU_JOB_SIGNAL_BLKSTAT) {
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
-        ret = qemuMonitorGetBlockStatsInfo(priv->mon,
-                                           priv->job.signalsData.statDevName,
-                                           &priv->job.signalsData.blockStat->rd_req,
-                                           &priv->job.signalsData.blockStat->rd_bytes,
-                                           &priv->job.signalsData.blockStat->wr_req,
-                                           &priv->job.signalsData.blockStat->wr_bytes,
-                                           &priv->job.signalsData.blockStat->errs);
-        qemuDomainObjExitMonitorWithDriver(driver, vm);
+        ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (ret == 0) {
+            ret = qemuMonitorGetBlockStatsInfo(priv->mon,
+                                               priv->job.signalsData.statDevName,
+                                               &priv->job.signalsData.blockStat->rd_req,
+                                               &priv->job.signalsData.blockStat->rd_bytes,
+                                               &priv->job.signalsData.blockStat->wr_req,
+                                               &priv->job.signalsData.blockStat->wr_bytes,
+                                               &priv->job.signalsData.blockStat->errs);
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
 
         *priv->job.signalsData.statRetCode = ret;
         priv->job.signals ^= QEMU_JOB_SIGNAL_BLKSTAT;
@@ -799,11 +807,13 @@ qemuMigrationProcessJobSignals(struct qemud_driver *driver,
         if (ret < 0)
             VIR_WARN("Unable to get block statistics");
     } else if (priv->job.signals & QEMU_JOB_SIGNAL_BLKINFO) {
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
-        ret = qemuMonitorGetBlockExtent(priv->mon,
-                                        priv->job.signalsData.infoDevName,
-                                        &priv->job.signalsData.blockInfo->allocation);
-        qemuDomainObjExitMonitorWithDriver(driver, vm);
+        ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (ret == 0) {
+            ret = qemuMonitorGetBlockExtent(priv->mon,
+                                            priv->job.signalsData.infoDevName,
+                                            &priv->job.signalsData.blockInfo->allocation);
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
 
         *priv->job.signalsData.infoRetCode = ret;
         priv->job.signals ^= QEMU_JOB_SIGNAL_BLKINFO;
@@ -836,13 +846,15 @@ qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
         return -1;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
-    ret = qemuMonitorGetMigrationStatus(priv->mon,
-                                        &status,
-                                        &memProcessed,
-                                        &memRemaining,
-                                        &memTotal);
-    qemuDomainObjExitMonitorWithDriver(driver, vm);
+    ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (ret == 0) {
+        ret = qemuMonitorGetMigrationStatus(priv->mon,
+                                            &status,
+                                            &memProcessed,
+                                            &memRemaining,
+                                            &memTotal);
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }
 
     if (ret < 0 || virTimeMs(&priv->job.info.timeElapsed) < 0) {
         priv->job.info.type = VIR_DOMAIN_JOB_FAILED;
@@ -897,14 +909,14 @@ qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm)
     qemuDomainObjPrivatePtr priv = vm->privateData;
     const char *job;
 
-    switch (priv->job.active) {
-    case QEMU_JOB_MIGRATION_OUT:
+    switch (priv->job.asyncJob) {
+    case QEMU_ASYNC_JOB_MIGRATION_OUT:
         job = _("migration job");
         break;
-    case QEMU_JOB_SAVE:
+    case QEMU_ASYNC_JOB_SAVE:
         job = _("domain save job");
         break;
-    case QEMU_JOB_DUMP:
+    case QEMU_ASYNC_JOB_DUMP:
         job = _("domain core dump job");
         break;
     default:
@@ -969,14 +981,16 @@ qemuDomainMigrateGraphicsRelocate(struct qemud_driver *driver,
     if (cookie->graphics->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
         return 0;
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
-    ret = qemuMonitorGraphicsRelocate(priv->mon,
-                                      cookie->graphics->type,
-                                      cookie->remoteHostname,
-                                      cookie->graphics->port,
-                                      cookie->graphics->tlsPort,
-                                      cookie->graphics->tlsSubject);
-    qemuDomainObjExitMonitorWithDriver(driver, vm);
+    ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (ret == 0) {
+        ret = qemuMonitorGraphicsRelocate(priv->mon,
+                                          cookie->graphics->type,
+                                          cookie->remoteHostname,
+                                          cookie->graphics->port,
+                                          cookie->graphics->tlsPort,
+                                          cookie->graphics->tlsSubject);
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }
 
     return ret;
 }
@@ -1110,9 +1124,9 @@ qemuMigrationPrepareTunnel(struct qemud_driver *driver,
                                        QEMU_MIGRATION_COOKIE_LOCKSTATE)))
         goto cleanup;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+                                             QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
         goto cleanup;
-    qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
 
     /* Domain starts inactive, even if the domain XML had an id field. */
     vm->def->id = -1;
@@ -1146,7 +1160,7 @@ qemuMigrationPrepareTunnel(struct qemud_driver *driver,
         virDomainAuditStart(vm, "migrated", false);
         qemuProcessStop(driver, vm, 0, VIR_DOMAIN_SHUTOFF_FAILED);
         if (!vm->persistent) {
-            if (qemuDomainObjEndJob(vm) > 0)
+            if (qemuDomainObjEndAsyncJob(vm) > 0)
                 virDomainRemoveInactive(&driver->domains, vm);
             vm = NULL;
         }
@@ -1175,7 +1189,7 @@ qemuMigrationPrepareTunnel(struct qemud_driver *driver,
 
 endjob:
     if (vm &&
-        qemuDomainObjEndJob(vm) == 0)
+        qemuDomainObjEndAsyncJob(vm) == 0)
         vm = NULL;
 
     /* We set a fake job active which is held across
@@ -1185,7 +1199,7 @@ endjob:
      */
     if (vm &&
         virDomainObjIsActive(vm)) {
-        qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
+        priv->job.asyncJob = QEMU_ASYNC_JOB_MIGRATION_IN;
         priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;
         priv->job.start = now;
     }
@@ -1346,9 +1360,9 @@ qemuMigrationPrepareDirect(struct qemud_driver *driver,
                                        QEMU_MIGRATION_COOKIE_LOCKSTATE)))
         goto cleanup;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+                                             QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
         goto cleanup;
-    qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
 
     /* Domain starts inactive, even if the domain XML had an id field. */
     vm->def->id = -1;
@@ -1364,7 +1378,7 @@ qemuMigrationPrepareDirect(struct qemud_driver *driver,
      * should have already done that.
     */
     if (!vm->persistent) {
-        if (qemuDomainObjEndJob(vm) > 0)
+        if (qemuDomainObjEndAsyncJob(vm) > 0)
             virDomainRemoveInactive(&driver->domains, vm);
        vm = NULL;
    }
@@ -1397,7 +1411,7 @@ qemuMigrationPrepareDirect(struct qemud_driver *driver,
 
 endjob:
     if (vm &&
-        qemuDomainObjEndJob(vm) == 0)
+        qemuDomainObjEndAsyncJob(vm) == 0)
         vm = NULL;
 
     /* We set a fake job active which is held across
@@ -1407,7 +1421,7 @@ endjob:
      */
     if (vm &&
         virDomainObjIsActive(vm)) {
-        qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
+        priv->job.asyncJob = QEMU_ASYNC_JOB_MIGRATION_IN;
         priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;
         priv->job.start = now;
     }
@@ -1491,7 +1505,9 @@ static int doNativeMigrate(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+        goto cleanup;
 
     if (resource > 0 &&
         qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1750,7 +1766,9 @@ static int doTunnelMigrate(struct qemud_driver *driver,
         goto cleanup;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+        goto cleanup;
 
     if (resource > 0 &&
         qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1791,7 +1809,8 @@ static int doTunnelMigrate(struct qemud_driver *driver,
     /* it is also possible that the migrate didn't fail initially, but
      * rather failed later on. Check the output of "info migrate"
      */
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+        goto cancel;
     if (qemuMonitorGetMigrationStatus(priv->mon,
                                       &status,
                                       &transferred,
@@ -1849,9 +1868,10 @@ cancel:
     if (ret != 0 && virDomainObjIsActive(vm)) {
         VIR_FORCE_CLOSE(client_sock);
         VIR_FORCE_CLOSE(qemu_sock);
-        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (qemuDomainObjEnterMonitorWithDriver(driver, vm) == 0) {
             qemuMonitorMigrateCancel(priv->mon);
             qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
     }
 
 cleanup:
@@ -2287,9 +2307,9 @@ int qemuMigrationPerform(struct qemud_driver *driver,
               cookieout, cookieoutlen, flags, NULLSTR(dname),
               resource, v3proto);
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+                                             QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
         goto cleanup;
-    qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_OUT);
 
     if (!virDomainObjIsActive(vm)) {
         qemuReportError(VIR_ERR_OPERATION_INVALID,
@@ -2368,7 +2388,7 @@ endjob:
                                      VIR_DOMAIN_EVENT_RESUMED_MIGRATED);
     }
     if (vm) {
-        if (qemuDomainObjEndJob(vm) == 0) {
+        if (qemuDomainObjEndAsyncJob(vm) == 0) {
             vm = NULL;
         } else if (!virDomainObjIsActive(vm) &&
                    (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) {
@@ -2453,17 +2473,17 @@ qemuMigrationFinish(struct qemud_driver *driver,
     virErrorPtr orig_err = NULL;
 
     priv = vm->privateData;
-    if (priv->job.active != QEMU_JOB_MIGRATION_IN) {
+    if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN) {
         qemuReportError(VIR_ERR_NO_DOMAIN,
                         _("domain '%s' is not processing incoming migration"), vm->def->name);
         goto cleanup;
     }
-    qemuDomainObjDiscardJob(vm);
+    qemuDomainObjDiscardAsyncJob(vm);
 
     if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, 0)))
         goto cleanup;
 
-    if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
     /* Did the migration go as planned? If yes, return the domain
@@ -2744,7 +2764,9 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
         restoreLabel = true;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+        goto cleanup;
 
     if (!compressor) {
         const char *args[] = { "cat", NULL };
 
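
Taken together, the migration hunks above replace the old single job with an asynchronous job bracket. A condensed, partly hypothetical outline of the new shape of a migration entry point (error handling and the actual migration work omitted; the helpers and the migration-status query are the ones used in the hunks above):

    if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
                                             QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
        goto cleanup;

    /* ... long-running work; while the domain object is unlocked,
     * compatible jobs such as QEMU_JOB_QUERY may run concurrently ... */

    /* Entering the monitor starts a nested normal job, which can fail,
     * so the result must be checked here. */
    if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
        goto endjob;
    ret = qemuMonitorGetMigrationStatus(priv->mon, &status, &processed,
                                        &remaining, &total);
    qemuDomainObjExitMonitorWithDriver(driver, vm);

endjob:
    if (qemuDomainObjEndAsyncJob(vm) == 0)
        vm = NULL;  /* domain object was released */
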
@@ -377,7 +377,7 @@ qemuProcessFakeReboot(void *opaque)
     VIR_DEBUG("vm=%p", vm);
     qemuDriverLock(driver);
     virDomainObjLock(vm);
-    if (qemuDomainObjBeginJob(vm) < 0)
+    if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
         goto cleanup;
 
     if (!virDomainObjIsActive(vm)) {
@@ -386,7 +386,7 @@ qemuProcessFakeReboot(void *opaque)
         goto endjob;
     }
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuMonitorSystemReset(priv->mon) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         goto endjob;
@@ -817,7 +817,7 @@ qemuConnectMonitor(struct qemud_driver *driver, virDomainObjPtr vm)
     }
 
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorSetCapabilities(priv->mon);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
 
@@ -1169,7 +1169,7 @@ qemuProcessWaitForMonitor(struct qemud_driver* driver,
        goto cleanup;
 
    priv = vm->privateData;
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
    ret = qemuMonitorGetPtyPaths(priv->mon, paths);
    qemuDomainObjExitMonitorWithDriver(driver, vm);
 
@@ -1224,7 +1224,7 @@ qemuProcessDetectVcpuPIDs(struct qemud_driver *driver,
 
     /* What follows is now all KVM specific */
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         return -1;
@@ -1518,7 +1518,7 @@ qemuProcessInitPasswords(virConnectPtr conn,
                 goto cleanup;
 
             alias = vm->def->disks[i]->info.alias;
-            qemuDomainObjEnterMonitorWithDriver(driver, vm);
+            ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
             ret = qemuMonitorSetDrivePassphrase(priv->mon, alias, secret);
             VIR_FREE(secret);
             qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1909,7 +1909,7 @@ qemuProcessInitPCIAddresses(struct qemud_driver *driver,
     int ret;
     qemuMonitorPCIAddress *addrs = NULL;
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     naddrs = qemuMonitorGetAllPCIAddresses(priv->mon,
                                            &addrs);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -2130,7 +2130,7 @@ qemuProcessStartCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
     }
     VIR_FREE(priv->lockState);
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorStartCPUs(priv->mon, conn);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
 
@@ -2158,9 +2158,11 @@ int qemuProcessStopCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
     oldState = virDomainObjGetState(vm, &oldReason);
     virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, reason);
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
-    ret = qemuMonitorStopCPUs(priv->mon);
-    qemuDomainObjExitMonitorWithDriver(driver, vm);
+    ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    if (ret == 0) {
+        ret = qemuMonitorStopCPUs(priv->mon);
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }
 
     if (ret == 0) {
         if (virDomainLockProcessPause(driver->lockManager, vm, &priv->lockState) < 0)
@@ -2206,7 +2208,7 @@ qemuProcessUpdateState(struct qemud_driver *driver, virDomainObjPtr vm)
     bool running;
     int ret;
 
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     ret = qemuMonitorGetStatus(priv->mon, &running);
     qemuDomainObjExitMonitorWithDriver(driver, vm);
 
@@ -2252,6 +2254,9 @@ qemuProcessReconnect(void *payload, const void *name ATTRIBUTE_UNUSED, void *opa
 
     priv = obj->privateData;
 
+    /* Set fake job so that EnterMonitor* doesn't want to start a new one */
+    priv->job.active = QEMU_JOB_MODIFY;
+
     /* Hold an extra reference because we can't allow 'vm' to be
      * deleted if qemuConnectMonitor() failed */
     virDomainObjRef(obj);
@@ -2290,6 +2295,8 @@ qemuProcessReconnect(void *payload, const void *name ATTRIBUTE_UNUSED, void *opa
     if (qemuProcessFiltersInstantiate(conn, obj->def))
         goto error;
 
+    priv->job.active = QEMU_JOB_NONE;
+
     /* update domain state XML with possibly updated state in virDomainObj */
     if (virDomainSaveStatus(driver->caps, driver->stateDir, obj) < 0)
         goto error;
@@ -2703,7 +2710,7 @@ int qemuProcessStart(virConnectPtr conn,
 
     VIR_DEBUG("Setting initial memory amount");
     cur_balloon = vm->def->mem.cur_balloon;
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         goto cleanup;
@@ -3117,7 +3124,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
     }
 
     VIR_DEBUG("Getting initial memory amount");
-    qemuDomainObjEnterMonitorWithDriver(driver, vm);
+    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
     if (qemuMonitorGetBalloonInfo(priv->mon, &vm->def->mem.cur_balloon) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         goto cleanup;
@@ -3205,13 +3212,14 @@ static void qemuProcessAutoDestroyDom(void *payload,
     }
 
     priv = dom->privateData;
-    if (priv->job.active == QEMU_JOB_MIGRATION_IN) {
-        VIR_DEBUG("vm=%s has incoming migration active, cancelling",
+    if (priv->job.asyncJob) {
+        VIR_DEBUG("vm=%s has long-term job active, cancelling",
                   dom->def->name);
-        qemuDomainObjDiscardJob(dom);
+        qemuDomainObjDiscardAsyncJob(dom);
     }
 
-    if (qemuDomainObjBeginJobWithDriver(data->driver, dom) < 0)
+    if (qemuDomainObjBeginJobWithDriver(data->driver, dom,
+                                        QEMU_JOB_DESTROY) < 0)
         goto cleanup;
 
     VIR_DEBUG("Killing domain");
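
One subtlety in the qemuProcessReconnect() hunks above: during daemon startup no job can be started through the usual path yet, so the code plants a fake normal job before talking to the monitor and clears it once reconnection is done. Roughly, as a sketch (the monitor call shown is the one from qemuProcessUpdateState(); the bracketing is the point):

    /* Pretend a normal job is active so qemuDomainObjEnterMonitor*()
     * does not try to begin a nested one during reconnection. */
    priv->job.active = QEMU_JOB_MODIFY;

    ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
    ret = qemuMonitorGetStatus(priv->mon, &running);
    qemuDomainObjExitMonitorWithDriver(driver, vm);

    /* Drop the fake job once the monitor work is done. */
    priv->job.active = QEMU_JOB_NONE;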