nodeset should be freed in both success and failure paths.
While tmppath is freed immediately after it's consumed, moving it from
the error to the cleanup label is a bit more consistent and robust.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Generally, we use a "ret" variable for storing the value we are going to
return at the end of a function, but this is not the case in
qemuProcessStart. Let's rename "ret" to "rv".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart was passing the char * migrateFrom as the third argument to
qemuPrepareNVRAM. We should explicitly convert the pointer to bool, which
is what the function expects.
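A minimal sketch of the intended call; the surrounding variable names, the
error label and the exact qemuPrepareNVRAM signature are assumptions for
illustration only:

    /* pass an explicit bool instead of the raw char * pointer */
    if (qemuPrepareNVRAM(cfg, vm, migrateFrom != NULL) < 0)
        goto error;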
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This calls the PCI-, USB- and SCSI-specific functions just
like qemuHostdev{Prepare,ReAttach}DomainDevices() already do,
and was the missing piece for the qemuHostdev API to nicely
mirror the virHostdev API.
Update qemuProcessReconnect() to use the new function.
Adopt the same names used for virHostdevUpdateActive*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Adopt the same names used for virHostdevReAttach*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5', the check
for iothreadpids was replaced by a search through the iothreadids[]
array for the matching iothread_id. That left open the possibility that
an iothreadids[] array was defined, but its entries essentially pointed
to elements with only the 'iothread_id' defined, leaving the 'thread_id'
value at 0 and, eventually, the cpumap entry NULL.
This was because the original IOThreads commit id '72edaae7' only
added IOThread objects at startup if IOThreads were defined and the
emulator had the IOThreads capability. The "capability
failure" check was only done when a disk was assigned to an IOThread in
qemuCheckIOThreads. This was because the initial implementation had no way
to dynamically add IOThreads, but it was possible to dynamically add a
disk to the domain. So the decision was: if the emulator supported it, then
add the IOThread objects. Then, if a disk with an IOThread defined was
added, the capability could be checked and the attach rejected if the
capability was missing. This just meant the 'iothreads' value was
essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch moves the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means that when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead, the
code can use the IOThreadFind call to report that the IOThread
doesn't exist.
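A rough sketch of the kind of startup check described above; the exact error
message and its placement in the command-line builder are illustrative:

    if (def->niothreadids > 0 &&
        !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("IOThreads are not supported by this QEMU binary"));
        goto error;
    }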
Finally, because a domain started before this change could still be running
with an iothreadids[] array whose elements are mostly empty when libvirtd is
restarted, qemuProcessDetectIOThreadPIDs will check whether niothreadids is
non-zero when the QEMU_CAPS_OBJECT_IOTHREAD capability check fails and, if
so, remove the elements and free the array.
With these changes in place, it turns out the cputune-numatune test
was failing because the right bit wasn't set in the test. So I used the
opportunity to fix that and to create a test that is expected to fail
when some sort of iothreads is defined and used without the
correct capability.
Although theoretically both should be the same value, the niothreadids
should be used in favor of iothreads when performing comparisons. This
leaves the iothreads as a purely numeric value to be saved in the config
file. The one exception to the rule is virDomainIOThreadIDDefArrayInit
where the iothreadids are being generated from the iothreads count since
iothreadids were added after initial iothreads support.
Coverity notices that net->ifname is potentially referenced after a
VIR_FREE(). Since net->ifname will eventually be freed during
virDomainDefFree when calling virDomainNetDefFree, let's just let that
processing take care of the free.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since we'd otherwise disallow migration of a guest whose config may be
invalid but which is still able to work, relax the WWN check so it is
performed only on new starts of the VM.
Coverity complains that the return value of virHookCall is not checked in
one place in qemuProcessStop. Since the comment notes that we cannot
stop the operation even if it fails, just add the ignore_value.
So far we have the following pattern occurring over and over
again:
    if (!vm->persistent)
        qemuDomainRemoveInactive(driver, vm);
It's safe to put the check into the function and save some LoC.
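Roughly, the function then starts like this; this is a sketch only and the
rest of the removal logic stays unchanged:

    void
    qemuDomainRemoveInactive(virQEMUDriverPtr driver,
                             virDomainObjPtr vm)
    {
        if (vm->persistent)
            return;

        /* ... existing removal logic unchanged ... */
    }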
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
QEMU unfortunately doesn't update its internal state right after migration,
so the actual balloon size as returned by 'query-balloon' is
invalid for a while after the CPUs are started after migration. If we
refreshed our internal state at this point we would report an invalid
current memory size until the next balloon event arrived.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242940
Every single call to qemuDomainEventQueue() uses the following pattern:
    if (event)
        qemuDomainEventQueue(driver, event);
Let's move the check for valid event to qemuDomainEventQueue and
simplify all callers.
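A sketch of the resulting function, assuming it simply forwards the event to
the driver's event state:

    void
    qemuDomainEventQueue(virQEMUDriverPtr driver,
                         virObjectEventPtr event)
    {
        if (event)
            virObjectEventStateQueue(driver->domainEventState, event);
    }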
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit id 'f1f68ca33' added code to remove the directory paths for
auto-generated sockets, but that code could be called before the
paths were created, resulting in error messages from
virFileDeleteTree indicating that the file doesn't exist.
Rather than forcing all callers to perform the non-NULL and existence
checks, modify the virFileDeleteTree API to silently ignore NULL on
input and non-existent directory trees.
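A sketch of the relaxed entry check, assuming virFileExists() for the
existence test; the recursive removal itself stays as it is:

    int
    virFileDeleteTree(const char *dir)
    {
        if (!dir || !virFileExists(dir))
            return 0;

        /* ... recursive removal as before ... */
    }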
Commit f1f68ca334 did not report an error if virFileMakePath()
returned -1. Well, who would've guessed a function whose name starts
with 'vir' sets errno instead of reporting an error the libvirt way.
Anyway, let's fix it, so the output changes from:
$ virsh start arm
error: Failed to start domain arm
error: An error occurred, but the cause is unknown
to:
$ virsh start arm
error: Failed to start domain arm
error: Cannot create directory '/var/lib/libvirt/qemu/domain-arm': Not
a directory
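The fix boils down to the usual libvirt pattern of turning the errno set by
virFileMakePath() into a reported error; the path variable here is
illustrative:

    if (virFileMakePath(path) < 0) {
        virReportSystemError(errno,
                             _("Cannot create directory '%s'"), path);
        goto cleanup;
    }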
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit f1f68ca334 overused mdir_name()
even though it was not needed in the latest version, hence labelling the
directory one level up in the tree and not the one it should have.
If anyone with SELinux managed to run a domain with the guest agent set
up, it's highly possible that they will need to run 'restorecon -F
/var/lib/libvirt/qemu/channel/target' to fix what was done.
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We are automatically generating some socket paths for domains, but all
those paths end up in a directory that's the same for multiple domains.
The problem is that multiple domains can each run with different
seclabels (users, selinux contexts, etc.). The idea here is to create a
per-domain directory labelled in a way that each domain can access its
own unix sockets.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The virDomainObjListRemove() function unlocks a domain that it's given
due to legacy code. And because of that code, which should be
refactored, that last virObjectUnlock() cannot be just removed. So
instead, lock it right back for qemu for now. All calls to
qemuDomainRemoveInactive() are followed by code that unlocks the domain
again, plus the domain should be locked during qemuDomainObjEndJob(), so
the right place to lock it is right after virDomainObjListRemove().
The only place where this would cause a problem is the autodestroy
callback, so we need to get another reference there and unref+unlock it
afterwards. Luckily, returning NULL from that function doesn't mean an
error; it only means that it doesn't need to be unlocked anymore.
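In code, the qemu-side pattern then looks roughly like this (sketch only):

    /* virDomainObjListRemove() unlocks vm, so take the lock right back */
    virDomainObjListRemove(driver->domains, vm);
    virObjectLock(vm);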
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When stopping a domain on the destination host after a failed migration,
we need to avoid resetting security labels since the domain is still
running on the source host. While we were correctly doing so in some
cases, there were still some paths which did this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In addition to checking the current asynchronous job,
qemuMigrationJobIsActive reports an error if the current job does not
match the one we asked for. Let's just check the job directly since we
are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In commit 641a145d73 I've added code that
resets the balloon memory value to full size prior to resuming the vCPUs
since the size certainly was not reduced at that point.
Since qemuProcessStart is also used in code paths with already booted
up guests (migration, save/restore), the assumption is not entirely true
since the guest might have already been running before.
This patch adds a function that queries the monitor rather than using
the full size since a balloon event would not be reissued in case we are
recovering a saved migration state.
Additionally the new function is used also when reconnecting to a VM
after libvirtd restart since we might have missed a few balloon events
while libvirtd was not running.
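A sketch of such a helper; the name, the int asyncJob parameter and the exact
error handling are illustrative rather than the actual patch:

    static int
    qemuProcessRefreshBalloonState(virQEMUDriverPtr driver,
                                   virDomainObjPtr vm,
                                   int asyncJob)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        unsigned long long balloon;
        int rc;

        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
            return -1;

        rc = qemuMonitorGetBalloonInfo(priv->mon, &balloon);

        if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0)
            return -1;

        vm->def->mem.cur_balloon = balloon;
        return 0;
    }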
After Jirka's migration patches libvirt is listening for migration
events from qemu instead of actively polling on the monitor. There is,
however, a little regression (introduced in 6d2edb6a42). The
problem is, the current status of the migration job is updated in
qemuProcessHandleMigrationStatus if and only if a migration job was
started. But eventually any asynchronous job may result in
migration. Therefore, since such a job is not strictly a
migration job, the internal state was not updated and later checks failed:
virsh # save fedora22 /tmp/fedora22_ble.save
error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
error: operation failed: domain save job: is not active
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If QEMU fails during incoming migration, the domain disappears including
a possibly useful error message read from QEMU log file. Let's remember
the error in virQEMUDriver so that Finish can report more than just "no
such domain".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since we already support the MIGRATION event, we just need to make sure
the domain condition is signalled whenever a p2p connection drops or the
domain is paused due to IO error and we can avoid waking up every 50 ms
to check whether something happened.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We don't need to call query-migrate every 50ms when we get the current
migration state via MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Even if QEMU supports migration events it doesn't send them by default.
We have to enable them by calling migrate-set-capabilities. Let's enable
migration events every time we can and clear QEMU_CAPS_MIGRATION_EVENT in
case migrate-set-capabilities does not support events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some guests lock the tray and the QEMU eject command will simply fail to
eject the media. But the guest OS can handle this attempt to eject the
media and can unlock the tray and open it. In this case, we should try
again to actually eject the media.
If the first attempt fails and we don't detect a tray_open event, we fail
with the error from the monitor. If we do receive that event, we know that
the guest properly reacted to the eject request, unlocked the tray and
opened it. In this case, we need to run the command again to actually eject
the media from the device. The reason to call it again is that QEMU
doesn't wait for the guest to react and reports an error that the tray
is locked.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1147471
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
After libvirt issues the balloon resize command, the current balloon
size needs to be changed to the maximum memory size since the vCPUs were
not started and thus the balloon driver could not return the memory.
Since GetXMLDesc and other APIs return the balloon size without updating
it when they are not able to obtain the job, and the memory balloon
does not support the asynchronous event, the reported size might be incorrect.
Since the monitor code now supports ullongs when setting balloon size,
drop the legacy code with overflow checking.
Additionally the comment mentioning that the job is treated as a sync
job does not make sense any more since the monitor is entered
asynchronously.
Store the emulator pinning cpu mask as a pure virBitmap rather than a
virDomainPinDef, since only the bitmap part of it is used, and refactor
qemuDomainPinEmulator to do the same operations in a much saner way.
As a side effect virDomainEmulatorPinAdd and virDomainEmulatorPinDel can
be removed since they don't add any value.
When <vcpu ... cpuset=""> is not specified, the vcpupin array is not
guaranteed to be allocated to def->vcpus elements. This would cause a crash
for TCG since it does not report thread IDs for vCPUs.
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.
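A convenience wrapper along these lines (the name is illustrative) lets such
callers get the def directly:

    virDomainDiskDefPtr
    virDomainDiskByName(virDomainDefPtr def,
                        const char *name,
                        bool allow_ambiguous)
    {
        int idx = virDomainDiskIndexByName(def, name, allow_ambiguous);

        return idx < 0 ? NULL : def->disks[idx];
    }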
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting a domain, we fail if the domain specifies security drivers we
do not have loaded. However, we don't check for this during
reconnect, so any operation relying on security driver functionality would fail.
If someone e.g. starts a domain with the selinux driver loaded, then changes
the security driver to 'none' in the config, restarts the daemon and calls dump/save/..,
QEMU will return an error.
As we shouldn't kill the domain, we should at least log an error to let the
user know that the domain reconnect wasn't completely clean.
https://bugzilla.redhat.com/show_bug.cgi?id=1183893
So far, we are not reporting if numatune was even defined. The
value of zero is blindly returned (which maps onto
VIR_DOMAIN_NUMATUNE_MEM_STRICT). Unfortunately, we are making
decisions based on this value. Instead, we should not only return
the correct value, but also report to the caller whether the value is
valid at all.
For better viewing of this patch use '-w'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that
they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>