/*
 * qemu_process.h: QEMU process management
 *
 * Copyright (C) 2006-2012 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 */
#ifndef __QEMU_PROCESS_H__
# define __QEMU_PROCESS_H__

# include "qemu_conf.h"
# include "qemu_domain.h"
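
/* Fill in @monConfig with the monitor character device for the domain
 * named @vm (typically a listening UNIX socket under the driver's state
 * directory). */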
int qemuProcessPrepareMonitorChr(struct qemud_driver *driver,
                                 virDomainChrSourceDefPtr monConfig,
                                 const char *vm);
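
/* Resume the domain's vCPUs via the monitor. @reason is recorded as the
 * new running-state reason. @asyncJob identifies the async job the caller
 * holds (QEMU_ASYNC_JOB_NONE for purely synchronous callers) so that the
 * monitor can be entered with a nested job when needed. */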
int qemuProcessStartCPUs(struct qemud_driver *driver,
                         virDomainObjPtr vm,
                         virConnectPtr conn,
                         virDomainRunningReason reason,
                         enum qemuDomainAsyncJob asyncJob);
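
/* Pause the domain's vCPUs via the monitor. @reason is recorded as the
 * paused-state reason; @asyncJob has the same meaning as for
 * qemuProcessStartCPUs(). */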
int qemuProcessStopCPUs(struct qemud_driver *driver,
                        virDomainObjPtr vm,
                        virDomainPausedReason reason,
                        enum qemuDomainAsyncJob asyncJob);
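
/* Driver start-up helpers: start every domain marked for autostart, and
 * re-attach to the monitors of domains left running by a previous
 * libvirtd. */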
void qemuProcessAutostartAll(struct qemud_driver *driver);
void qemuProcessReconnectAll(virConnectPtr conn, struct qemud_driver *driver);
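
/* Assign PCI addresses to the devices of @def. */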
int qemuProcessAssignPCIAddresses(virDomainDefPtr def);
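
/* Launch a new QEMU process for @vm. @migrateFrom, when non-NULL, names
 * the source of an incoming migration; @start_paused leaves the vCPUs
 * stopped after start-up; @autodestroy ties the domain's lifetime to
 * @conn; @stdin_fd/@stdin_path pass in an already open file (typically a
 * saved image being restored); @snapshot, when non-NULL, is the snapshot
 * being reverted to; @vmop tells the port-profile code why the interfaces
 * are being set up. */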
int qemuProcessStart(virConnectPtr conn,
                     struct qemud_driver *driver,
                     virDomainObjPtr vm,
                     const char *migrateFrom,
                     bool start_paused,
                     bool autodestroy,
                     int stdin_fd,
                     const char *stdin_path,
                     virDomainSnapshotObjPtr snapshot,
                     enum virNetDevVPortProfileOp vmop);
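
/* Tear down the QEMU process and release the resources held by the active
 * domain. @migrated is non-zero when the domain has just been migrated
 * out; @reason is recorded as the shutoff reason. */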
void qemuProcessStop(struct qemud_driver *driver,
                     virDomainObjPtr vm,
                     int migrated,
                     virDomainShutoffReason reason);
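
/* Attach to a QEMU process that was started outside of libvirt: @pid and
 * @pidfile identify the existing process, @monConfig and @monJSON describe
 * its monitor socket. */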
int qemuProcessAttach(virConnectPtr conn,
                      struct qemud_driver *driver,
                      virDomainObjPtr vm,
                      int pid,
                      const char *pidfile,
                      virDomainChrSourceDefPtr monConfig,
                      bool monJSON);
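
/* Flags for qemuProcessKill(). With no flags set only SIGTERM is sent;
 * KILL_FORCE allows escalation to SIGKILL when the process does not exit
 * in time (the escalation that VIR_DOMAIN_DESTROY_GRACEFUL suppresses),
 * and KILL_NOWAIT requests a return without waiting for the process to
 * disappear. */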
typedef enum {
    VIR_QEMU_PROCESS_KILL_FORCE  = 1 << 0,
    VIR_QEMU_PROCESS_KILL_NOWAIT = 1 << 1,
} virQemuProcessKillMode;

int qemuProcessKill(virDomainObjPtr vm, unsigned int flags);
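
/* Auto-destroy bookkeeping: a domain registered with
 * qemuProcessAutoDestroyAdd() is destroyed automatically when the
 * registering connection @conn is closed. */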
int qemuProcessAutoDestroyInit(struct qemud_driver *driver);
void qemuProcessAutoDestroyRun(struct qemud_driver *driver,
                               virConnectPtr conn);
void qemuProcessAutoDestroyShutdown(struct qemud_driver *driver);
int qemuProcessAutoDestroyAdd(struct qemud_driver *driver,
                              virDomainObjPtr vm,
                              virConnectPtr conn);
int qemuProcessAutoDestroyRemove(struct qemud_driver *driver,
                                 virDomainObjPtr vm);
bool qemuProcessAutoDestroyActive(struct qemud_driver *driver,
                                  virDomainObjPtr vm);

#endif /* __QEMU_PROCESS_H__ */