Add the missing jump to the error label. The error message shouldn't
ever be triggered though as it's called only on pre-selected nodes.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit b38da584 introduced two new functions to get a page size but they
won't work on Windows. We should take care of this.
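A minimal sketch of the kind of portability handling this implies (illustrative only, not the actual patch; needs <unistd.h> on POSIX and <windows.h> on Windows):

    long pagesize;
    #ifdef WIN32
        SYSTEM_INFO si;
        GetSystemInfo(&si);                 /* Win32 reports the page size here */
        pagesize = si.dwPageSize;
    #else
        pagesize = sysconf(_SC_PAGESIZE);   /* POSIX */
    #endif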
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit e562a61a introduced new functions to get/set interface state, but
ATTRIBUTE_NONNULL was misused on non-pointer arguments and we also need to
wrap those functions in #ifdef to avoid breaking the mingw build.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Some code paths have special logic depending on the page size
reported by sysconf, which in turn affects the test results.
We must mock this so tests always have a consistent page size.
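A rough sketch of such a mock as an LD_PRELOAD-style override (names and the fixed value are illustrative; the real test mock may differ):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <unistd.h>

    /* Preloaded into the test binaries: report a fixed page size,
     * defer every other sysconf() query to the real implementation. */
    long sysconf(int name)
    {
        if (name == _SC_PAGESIZE)
            return 4096;
        return ((long (*)(int))dlsym(RTLD_NEXT, "sysconf"))(name);
    }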
e562a61a07 added these two new helper functions and only used them
within virnetdev.c, but declared them in the .h file. If some
currently unsupported interface flags need to be accessed in the
future, it will make more sense to write the appropriate higher level
function rather than require us to artificially define IFF_* on some
mythical platform that doesn't have SIOC[SG]IFFLAGS (and therefore
doesn't have IFF_*) just so we can call virNetDevSetIFFFlags() to
return an error.
To keep anyone from going down the wrong road, this patch makes the
two helper functions static, hopefully making it less likely that
someone will want to use them outside of virnetdev.c.
https://bugzilla.redhat.com/show_bug.cgi?id=1176510
When storageDriverAutostart is called via the virStateReload path during a
'service libvirtd reload', the volume list in the pool isn't cleared prior
to the call, so each volume would be listed multiple times (as many times
as we reload). I believe the issue was introduced by commit id '9e093f0b',
at least for the libvirtd reload path, although I suppose the introduction
of virStateReload (commit id '70da0494') could be a different cause.
Thus, like other places prior to calling refreshPool, we need to call
virStoragePoolObjClearVols.
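Roughly, the autostart path needs the same pattern used elsewhere (a sketch, not the exact diff):

    virStoragePoolObjClearVols(pool);           /* drop volumes from a previous run */
    if (backend->refreshPool(conn, pool) < 0)
        goto error;                             /* pool refresh failed */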
The change done by commit f309db1f4d wrongly
assumes that qemu can start with a combination of NUMA nodes specified
with the "memdev" option and the appropriate backends, and the legacy
way of specifying only "mem" as a size argument. QEMU rejects such
a command line though:
$ /usr/bin/qemu-system-x86_64 -S -M pc -m 1024 -smp 2 \
-numa node,nodeid=0,cpus=0,mem=256 \
-object memory-backend-ram,id=ram-node1,size=12345 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1
qemu-system-x86_64: -numa node,nodeid=1,cpus=1,memdev=ram-node1: qemu: memdev option must be specified for either all or no nodes
To fix this issue we need to check if any of the nodes requires the new
definition with the backend and if so, then all other nodes have to use
it too.
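Conceptually the check is an all-or-nothing pass over the nodes; a small sketch (the per-node flag array is a stand-in for the real condition):

    /* Sketch: refuse configs where only some NUMA nodes use a memory backend. */
    static int
    checkMemoryBackendConsistency(const bool *nodeNeedsBackend, size_t ncells)
    {
        size_t i;
        bool needBackend = false;

        for (i = 0; i < ncells; i++)
            if (nodeNeedsBackend[i])
                needBackend = true;

        for (i = 0; needBackend && i < ncells; i++)
            if (!nodeNeedsBackend[i])
                return -1;    /* qemu rejects mixing memdev and legacy mem= */

        return 0;
    }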
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1182467
With the new JSON to argv formatter we are now able to represent the
memory backend definitions in the JSON object format that is reusable
for monitor use (hotplug) and then convert it into the shell string.
This will avoid having two separate instances of the same code that
would create the different formats.
Previous refactors now allow making this step without changes to the
test suite.
QEMU's command line visitor as well as the JSON interface take bytes by
default for memory object sizes. Convert mebibytes to bytes so that we
can later refactor the existing code for hotplug purposes.
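For example, a 256 MiB backend is now emitted as size=268435456 (256 * 1024 * 1024 bytes) rather than as a mebibyte value.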
QEMU's qapi visitor code allows yes/on/y for true and no/off/n for false
value of boolean properties. Unify the used style so that we can
generate it later and fix test cases.
Extract the memory backend device code into a separate function so that
it can be later easily refactored and reused.
Few small changes for future reusability, namely:
- new (currently unused) parameter for user specified page size
- size of the memory is specified in kibibytes, divided up in the
function
- new (currently unused) parameter for user specified source nodeset
- option to enforce capability check
Unlike -device, qemu uses a JSON object to add backend "objects" via the
monitor rather than the string that would be passed on the command line.
To be able to reuse code parts that configure backends for various
devices, this patch adds a helper that will allow generating the command
line representations from the JSON property object.
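As a rough illustration (the exact key layout is simplified here), a property object along these lines:

    { "qom-type": "memory-backend-ram", "id": "ram-node1", "size": 268435456 }

would be flattened into the familiar -object command line form:

    -object memory-backend-ram,id=ram-node1,size=268435456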
This helper eases iterating all key=value pairs stored in a JSON
object. Usually we pick only certain known keys from a JSON object, but
this will allow walking complete objects and having the callback act on
those.
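The callback shape is roughly the following (names are illustrative rather than the exact API):

    /* Invoked once per key/value pair; returning < 0 aborts the walk. */
    typedef int (*virJSONObjectIteratorFunc)(const char *key,
                                             virJSONValuePtr value,
                                             void *opaque);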
To be able to easily represent nodesets and other data stored in
virBitmaps in libvirt, this patch introduces a set of helpers that allow
converting the bitmap to and from JSON value objects.
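For example, a bitmap with bits 0, 1 and 4 set (the nodeset "0-1,4") would round-trip as the JSON array [0, 1, 4].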
Extract the logic to determine which nodeset has to be used for a domain
from the formatting step so that it can be reused separately when the
nodeset is used in a different way.
The function is called from all {Attach,Update,Detach}Device APIs to
create config strings that are later passed to the xend to perform the
desired action. The function is intended to handle all supported
devices. However, as of 5b05358aba we
are trying to get the disk driver of the device without checking whether
the device really is a disk. This leads to a segmentation fault:
#0 0x00007ffff7571815 in virDomainDiskGetDriver () from /usr/lib/libvirt.so.0
#1 0x00007fffeb9ad471 in ?? () from /usr/lib/libvirt/connection-driver/libvirt_driver_xen.so
#2 0x00007fffeb9b1062 in xenDaemonAttachDeviceFlags () from /usr/lib/libvirt/connection-driver/libvirt_driver_xen.so
#3 0x00007fffeb9a8a86 in ?? () from /usr/lib/libvirt/connection-driver/libvirt_driver_xen.so
#4 0x00007ffff7609266 in virDomainAttachDevice () from /usr/lib/libvirt.so.0
#5 0x0000555555593c9d in ?? ()
#6 0x00007ffff76743c9 in virNetServerProgramDispatch () from /usr/lib/libvirt.so.0
#7 0x00005555555a678d in ?? ()
#8 0x00007ffff755460e in ?? () from /usr/lib/libvirt.so.0
#9 0x00007ffff7553b06 in ?? () from /usr/lib/libvirt.so.0
#10 0x00007ffff4998b50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#11 0x00007ffff46e30ed in clone () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x0000000000000000 in ?? ()
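The fix therefore needs a guard of roughly this shape before touching disk-only fields (a sketch, not the literal diff):

    const char *driver = NULL;

    if (dev->type == VIR_DOMAIN_DEVICE_DISK)
        driver = virDomainDiskGetDriver(dev->data.disk);   /* only valid for disks */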
Reported-by: Xiaolin Su <linxxnil@126.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1170492
In one of our previous commits (dc8b7ce7) we've made a functional
change even though it was intended as a pure refactor. The problem is
that the following XML:
<vcpu placement='static' current='2'>6</vcpu>
<cputune>
<emulatorpin cpuset='1-3'/>
</cputune>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
gets translated into this one:
<vcpu placement='auto' current='2'>6</vcpu>
<cputune>
<emulatorpin cpuset='1-3'/>
</cputune>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
We should not change the vcpu placement mode. Moreover, we're doing
something similar in the case of emulatorpin and iothreadpin: if they
were set but vcpu placement was auto, we mistakenly removed them from
the domain XML even though they can be set independently of vcpus.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch enables synchronization of the host macvtap
device options with the guest device's in response to the
NIC_RX_FILTER_CHANGED event.
The following device options will be synchronized:
* PROMISC
* MULTICAST
* ALLMULTI
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch provides the utility functions needed to synchronize
the rxfilter changes made to a guest domain with the corresponding
macvtap devices on the host:
* Get/set PROMISC flag
* Get/set ALLMULTI, MULTICAST
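Under the hood this boils down to the standard SIOCGIFFLAGS/SIOCSIFFLAGS ioctl pattern; a minimal sketch with error handling omitted (fd is any datagram socket):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <net/if.h>

    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ioctl(fd, SIOCGIFFLAGS, &ifr);       /* read the current flags */
    ifr.ifr_flags |= IFF_PROMISC;        /* e.g. turn on promiscuous mode */
    ioctl(fd, SIOCSIFFLAGS, &ifr);       /* write them back */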
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1158034
If we're expecting to create a file somewhere and that fails for some
reason during qemuOpenFileAs, then we unlink the path we're attempting
to create leaving no way to determine what the "existing" privileges,
protections, or labels are that caused the failure (open, change owner
and group, change mode, etc.).
Furthermore, if we fall into the path where we'll be opening / creating
the file using VIR_FILE_OPEN_FORK, we need to first unlink/delete the file
we created in the first path; otherwise, the attempt by the child process
to open as some specific user:group may fail because the file was already
created using nfsnobody:nfsnobody. Again, if we didn't create the file we
don't want to blindly delete what already exists. This is a second reason
for the original check to set need_unlink to false when we find the file
already exists even though O_CREAT was requested.
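Conceptually the ownership test looks like this (sketch):

    bool need_unlink = false;

    /* Only plan to unlink if the file did not exist before we opened it. */
    if ((oflags & O_CREAT) && access(path, F_OK) < 0 && errno == ENOENT)
        need_unlink = true;

    /* ... open / chown / fallback-to-fork attempts ... */

    if (need_unlink)
        unlink(path);    /* never delete somebody else's pre-existing file */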
Signed-off-by: John Ferlan <jferlan@redhat.com>
A gnulib change (commit id 'beae0bdc') causes ENOTCONN to be returned
from recvfd, which causes us to fall into the throwaway waitpid() call
and return ENOTCONN to the caller. This then gets displayed during
a 'virsh save' when using a root-squashed NFS environment that's trying
to save the file as something other than root:root.
This patch will add the additional check for ENOTCONN to force the code
into the waitpid loop looking for the actual status from the _exit()'d
child fork.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '540c339a', which fixed issues with reference counting and
transient domains, moved the qemuDomainObjEndAsyncJob call prior to the
attempt to restart the guest CPUs, resulting in an error:
error: Failed to save domain rhel70 to /tmp/pl/rhel70.save
error: internal error: unexpected async job 3
when (ret != 0) - eg, the error path from qemuDomainSaveMemory.
This patch will adjust the logic to call the EndAsyncJob only after
we've tried to restart the guest CPUs. It also needs to adjust the
test for qemuDomainRemoveInactive to add the ret == 0 condition.
Additionally, if we get to endjob: because of some error earlier, then
we need to save that error in the event the CPU restart logic fails.
We don't want to return the error from the CPU restart failure; rather,
we want to return the error from the failed save that caused us to fall
into the retry-to-start-the-CPUs logic.
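The usual libvirt pattern for preserving the earlier error looks roughly like:

    virErrorPtr orig_err = virSaveLastError();   /* remember the save failure */

    /* ... attempt to restart the guest CPUs ... */

    if (orig_err) {
        virSetError(orig_err);                   /* re-install the original error */
        virFreeError(orig_err);
    }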
Signed-off-by: John Ferlan <jferlan@redhat.com>
Well, even though users can pass the list of UEFI:NVRAM pairs at
configure time, we may maintain the list of widely available UEFI
firmware ourselves too. And as arm64 begins to rise, OVMF was ported
there too, with a slight name change: it's called AAVMF, with AAVMF_CODE.fd
being the UEFI firmware and AAVMF_VARS.fd being the NVRAM store file.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up until now there are just two ways to specify UEFI paths to
libvirt. The first one is editing qemu.conf, the other is editing
qemu_conf.c and recompiling, which is not that fancy. So a new
configure option is introduced: --with-loader-nvram, which takes a
list of pairs of UEFI firmware and NVRAM store. This way, the
compiled-in defaults can be set at compile time without the
need to change the code itself.
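For example (paths are illustrative):

$ ./configure --with-loader-nvram="/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"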
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When compiling without WITH_MACVTAP, we can get:
'unsupported flags (0x1) in function
virNetDevMacVLanCreateWithVPortProfile'
on an attempt to start a domain.
Remove the flag check to reach the more helpful error:
Cannot create macvlan devices on this platform
https://bugzilla.redhat.com/show_bug.cgi?id=1186928
https://bugzilla.redhat.com/show_bug.cgi?id=1138516
If the provided volume name doesn't match what parted generated as the
partition name, then return a failure.
Update virsh.pod and formatstorage.html.in to describe the 'name' restriction
for disk pools as well as the usage of the <target>'s <format type='value'>.
When removing a volume that is the extended partition, all the logical
partitions that exist within the extended partition will also be
removed, so we need to refresh the pool to have an updated list.
During virStorageBackendDiskMakeDataVol processing, if we find an extended
partition, then handle it specially when updating the capacity/allocation
rather than calling virStorageBackendUpdateVolInfo.
As it turns out, once a logical partition exists, any attempt to refresh
the pool, or a libvirtd restart/reload, will result in a failure to open
the extended partition device, resulting in the inability to start the pool.
The downside to this is we will lose the <permissions> and <timestamps> for
the extended partition upon subsequent restart, refresh, reload since the
stat() in virStorageBackendUpdateVolTargetInfoFD will not be called. However,
since it's really only a container and shouldn't directly be used for
storage, that seems reasonable.
Therefore, only use the existing code (which already had a comment about
getting the allocation wrong for extended partitions) to set the
extended partition data.
While checking the existing partitions in virStorageBackendDiskPartFormat,
the code would erroneously compare the volume target format type (eg, the
virStoragePartedFsType) rather than the source partition type (eg, the
virStorageVolTypeDisk) which is set during virStorageBackendDiskReadPartitions.
During virStorageBackendDiskCreateVol, if virStorageBackendDiskReadPartitions
fails, we were leaving with an error and a partition on the disk for
which there was no corresponding volume, and the used space on the disk
could only be reclaimed through direct parted activity. On a subsequent
restart, reload, or refresh the volume may magically appear too.
Move the API to before virStorageBackendDiskCreateVol in order to be
able to call the DeleteVol API when virStorageBackendDiskReadPartitions
fails so that we don't by chance leave a partition on the disk.
https://bugzilla.redhat.com/show_bug.cgi?id=1183890
When we try to update the XML of a save image file, we clear the
graphics passwd settings: because we do not pass VIR_DOMAIN_XML_SECURE
to qemuDomainDefCopy, qemuDomainDefFormatBuf won't format the passwd.
Add the VIR_DOMAIN_XML_SECURE flag when we call qemuDomainDefCopy
in qemuDomainSaveImageUpdateDef.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
This way the device is in vmdef only if ret == 0 and the caller
(qemuDomainAttachDeviceFlags) does not free it.
Otherwise it might get double freed by qemuProcessStop
and qemuDomainAttachDeviceFlags if the domain crashed
in the monitor after we've added it to vm->def.
Do the allocation first, then add the actual device.
The second part should never fail. This is good
for live hotplug where we don't want to remove the device
on OOM after the monitor command succeeded.
The only change in behavior is that on failure, the
vmdef->consoles array is freed, not just the first console.
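The pattern is the usual "reserve first, commit later" one; a sketch using libvirt's realloc helper:

    /* Grow the array while we can still fail cleanly. */
    if (VIR_REALLOC_N(vmdef->consoles, vmdef->nconsoles + 1) < 0)
        return -1;

    /* From here on nothing can fail, so it is safe to commit the device. */
    vmdef->consoles[vmdef->nconsoles++] = chr;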
Record the index of each host-side veth device created and report
them to systemd, so they show up in machinectl status for the VM.
lxc-shell(95449419f969d649d9962566ec42af7d)
Since: Fri 2015-01-16 16:53:37 GMT; 3s ago
Leader: 28085 (sh)
Service: libvirt-lxc; class container
Iface: vnet0
Address: fe80::216:3eff:fe00:c317%124
OS: Fedora 21 (Twenty One)
Unit: machine-lxc\x2dshell.scope
└─28085 /bin/sh
Add more logging to the lxc controller and container files to
facilitate debugging startup problems. Also make it clear when
the container is going to close stdout and thus no longer do
any logging.
Don't create the cgroups ahead of launching the container since
there is no need for the limits to apply during initial bootstrap.
Create the cgroup after the container PID is known and tell
systemd the initpid is the leader, instead of the controller
pid.
Currently when launching the LXC controller we first write out
the plain, inactive XML configuration, then launch the controller,
then replace the file with the live status XML configuration.
By good fortune this hasn't caused any problems other than some
misleading error messages during failure scenarios.
This simplifies the code so it only writes out the XML once and
always writes the live status XML. To do this we need to handshake
with the child process, to make execution pause just before exec()
so we can write the XML status with the child PID present.
Currently the lxc controller process itself is responsible for
daemonizing itself into the background and writing out its pid
file. The lxc driver would fork the controller and then attempt
to connect to the lxc monitor. This connection would only
succeed after the controller has backgrounded itself, setup
cgroups and written its pid file, so startup was race free.
The problem is that we need to delay creation of the cgroups until
much later, such that we can tell systemd the container init
pid when we create the cgroups. If we delay cgroup creation, though,
the current synchronization won't work.
A second problem is that the controller needs the XML config
of the guest. Currently we write out the plain virDomainDefPtr
XML before starting the controller, and then later replace it
with the full virDomainObjPtr status XML. This is kind of gross
and also means that the controller doesn't get a record of the
live XML config right away. This means it doesn't have a record
of the veth device names either and so can't give that info
to systemd when creating the cgroups.
To address this we change the startup sequencing. The goal
is that we want to get the PID as soon as possible, before
the LXC controller even starts. So we stop letting the LXC
controller daemonize itself, and instead use virCommand's
built-in capabilities. This daemonizes and writes the PID
before LXC controller is exec'd. So the driver can read
the PID as soon as virCommandRun returns. It is no longer
safe to connect to the monitor or detect the cgroups though.
Fortunately the LXC controller already has a second point
of synchronization. Immediately before its event loop
starts running, it performs a handshake with the driver.
So we move the opening of the monitor connection and cgroup
detection after this synchronization point.
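In virCommand terms the new startup sequence is roughly (sketch):

    virCommandDaemonize(cmd);                    /* controller no longer forks itself */
    virCommandSetPidFile(cmd, pidfile);          /* PID file written before exec() */

    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    if (virPidFileReadPath(pidfile, &pid) < 0)   /* driver learns the PID right away */
        goto cleanup;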
Build the pidfile string once when starting a guest and then
use the same string thereafter. This will benefit following
patches which need the pidfile string in more situations.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In many cases where we invoke virSystemdTerminateMachine the
process(es) will have already gone away of their own accord.
In these cases we log an error message that the machine does
not exist. We should catch this particular error and simply
ignore it, so we don't pollute the logs.
When creating a RAW file, we don't take advantage of btrfs cloning.
Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
a reflink copy.
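On btrfs the clone itself is a single ioctl; roughly:

    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    /* Share the source file's extents instead of copying the bytes. */
    if (ioctl(dstfd, BTRFS_IOC_CLONE, srcfd) < 0) {
        /* fall back to a plain copy, or report the error */
    }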
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:

struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
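A driver then registers something along these lines (field values and the registration call shown here are illustrative):

    static virConnectDriver vboxConnectDriver = {
        .hypervisorDriver = &vboxCommonDriver,
        .networkDriver = &vboxNetworkDriver,
        .storageDriver = &vboxStorageDriver,
    };

    if (virRegisterConnectDriver(&vboxConnectDriver, false) < 0)
        return -1;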
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A bunch of code is wrapped in #if WITH_LIBVIRTD in order to
enable the virStateDriver to be disabled when libvirtd is not
built. Disabling this code doesn't have any real functional
benefit beyond removing 1 pointer from the virConnectPtr struct,
while having a cost of many more conditionals.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The code modifies the domain configuration but doesn't take a MODIFY
type job to do so.
This patch also fixes a few very long lines of code around the touched
parts.
Coverity reports that my commit af1c98e introduced
two memory leaks:
the cpumap if ncpus == 0 in virCgroupGetPercpuStats
and the params array in the test of the function.
The virDBusMethodCall method has a DBusError as one of its
parameters. If the caller wants to pass a non-NULL value
for this, it immediately makes the calling code require
DBus at build time. This has led to breakage of non-DBus
builds several times. It is desirable that only the virdbus.c
file should need WITH_DBUS conditionals, so we must ideally
remove the DBusError parameter from the method.
We can't simply raise a libvirt error, since the whole point
of this parameter is to give the callers a way to check if
the error is one they want to ignore, without having the logs
polluted with an error message. So, we add a virErrorPtr
parameter which the caller can then either ignore or raise
using the new virReportErrorObject method.
This new method is distinct from virSetError in that it
ensures the logging hooks are run.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
For distros that want to add versioned machine types, they will add
(downstream) machine types like "virt-foo-1.2.3". Detect these as
MMIO too.
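The detection then becomes a prefix match rather than an exact string compare, along the lines of:

    if (STREQ(def->os.machine, "virt") ||
        STRPREFIX(def->os.machine, "virt-"))    /* e.g. "virt-foo-1.2.3" */
        return true;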
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Previous patch of this series fixed the issue with adding a new PCI bridge
when all the slots were reserved by devices with user specified addresses.
In case there are still some PCI devices waiting to get a slot reserved
by qemuAssignDevicePCISlots, this means a new bus needs to be
created along with a corresponding bridge controller. By adding an
additional check, this scenario now results in a reasonable error
instead of generating a wrong qemu command line.
Commit 93c8ca tried to fix the issue with auto-adding of a PCI bridge
controller, but didn't work properly in all scenarios.
This patch provides a better fix of the issue when all slots on a PCI bus
are reserved by devices with user specified addresses and no additional
bridges need to be created.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132900
In order to be able to test for fully reserved PCI buses, assignment of
PCI slots for integrated devices needs to be moved to a separate function.
This also might be a good preparation if we decide to add support for
other chipsets as well.
Move qemuDomainAssignPCIAddresses after the definition
of the static function qemuDomainValidateDevicePCISlotsQ35.
This lets us define a new static function using
qemuDomainValidateDevicePCISlots* and use it in
qemuDomainAssignPCIAddresses without a forward declaration.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Previously the function returned either -1 in case of an error or 0 on
success. However, we should also distinguish between the case where we
successfully added a controller and the case where there was no need to
add any controller.
As it turned out, the dead-code fix in commit 419a22 changed the affected
condition from "never true" to "always true", so a better fix is to change
the return code of virDomainMaybeAddController from 0 to 1 if a new bridge
has been added, thus distinguishing the case where we didn't need to add
any controller from the case where we successfully added one.
The return code is changed in the next commit.
Clang found a possible NULL pointer dereference, and it is right.
Function 'esxVI_LookupTaskInfoByTask' should find a task info. The issue
is that we could return 0 and leave the 'taskInfo' pointer NULL because
if there is no match we simply end the search loop and set 'result' to 0.
Every caller counts on the fact that if the return value is 0 then it's
safe to dereference 'taskInfo'. We should return 0 only in case we found
something and '*taskInfo' is not NULL.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Clang found that we are passing a variable with the wrong enum type to
the 'xenapiCrashExitEnum2virDomainLifecycle' function. This is probably
a copy-paste typo as the correct variable exists in the code, but it
isn't used.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Per-cpu stats are only shown for present CPUs in the cgroups,
but we were only parsing the largest CPU number from
/sys/devices/system/cpu/present and looking for stats even for
non-present CPUs.
This resulted in:
internal error: cpuacct parse error
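For example, if /sys/devices/system/cpu/present reads "0-3,8-11", CPUs 4-7 are not present and have no cpuacct data, so blindly walking CPU numbers 0..11 trips over the gap.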
The ACL check didn't check the VIR_DOMAIN_XML_SECURE flag and the
appropriate permission for it. Found via code inspection while fixing
permissions for save images.
https://bugzilla.redhat.com/show_bug.cgi?id=1164627
When using 'virsh attach-device' to hotplug an unsupported console type
into a qemu guest, the attachment would succeed as the command line
formatter didn't report an error in such a case.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1130390
The listen address is not mandatory for <interface type='server'>
but when it's not specified, we've been formatting it as:
-netdev socket,listen=(null):5558,id=hostnet0
which failed with:
Device 'socket' could not be initialized
Omit the address completely and only format the port in the listen
attribute.
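With that change the same configuration produces something like:
-netdev socket,listen=:5558,id=hostnet0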
Also fix the schema to allow specifying a model.
This adds a new "localOnly" attribute on the domain element of the
network xml. With this set to "yes", DNS requests under that domain
will only be resolved by libvirt's dnsmasq, never forwarded upstream.
This was how it worked before commit f69a6b987d, and I found that
functionality useful. For example, I have my host's NetworkManager
dnsmasq configured to forward that domain to libvirt's dnsmasq, so I can
easily resolve guest names from outside. But if libvirt's dnsmasq
doesn't know a name and forwards it to the host, I'd get an endless
forwarding loop. Now I can set localOnly="yes" to prevent the loop.
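For example, the domain element of the network XML becomes:
<domain name='example.lan' localOnly='yes'/>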
Signed-off-by: Josh Stone <jistone@redhat.com>
Using the same driver multiple times is pointless and
it can result in confusing errors:
$ virsh start test
error: Failed to start domain test
error: internal error: security label already defined for VM
https://bugzilla.redhat.com/show_bug.cgi?id=1153891
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
If the domain crashed while we were in the monitor,
we cannot rely on the REALLOC done on the live definition,
since vm->def now points to the persistent definition.
Skip adding the attached devices to domain definition
if the domain crashed.
In AttachChrDevice, the chardev was already added to the
live definition and freed by qemuProcessStop in the case
of a crash. Skip the device removal in that case.
Also skip audit if the domain crashed in the meantime.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
In the device type-specific functions, exit early
if the domain has disappeared, because the cleanup
should have been done by qemuProcessStop.
Check the return value in processDeviceDeletedEvent
and qemuProcessUpdateDevices.
Skip audit and removing the device from live def because
it has already been cleaned up.
The path to the pty of a Xen PV console is set only in
virDomainOpenConsole. But this is done too late. A call to
virDomainGetXMLDesc done before OpenConsole will not have the path to
the pty, but a call after OpenConsole will.
Example of the current issue:
Starting a domain with '<console type="pty"/>'
Then:
virDomainGetXMLDesc():
<devices>
<console type='pty'>
<target type='xen' port='0'/>
</console>
</devices>
virDomainOpenConsole()
virDomainGetXMLDesc():
<devices>
<console type='pty' tty='/dev/pts/30'>
<source path='/dev/pts/30'/>
<target type='xen' port='0'/>
</console>
</devices>
The patch intends to make the TTY path available on the first call of
GetXMLDesc. This is done by setting up the path at domain startup
instead of in OpenConsole.
https://bugzilla.redhat.com/show_bug.cgi?id=1170743
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
It's possible to create a container with an existing
disk image as the root filesystem. You need to remove
the existing disks from the PCS VM config and then add a new
one, pointing to your image, and then call PrlVm_RegEx
with the PRNVM_PRESERVE_DISK flag.
With this patch you can create such a container with
something like this in the new domain XML config:
<filesystem type='file' accessmode='passthrough'>
<driver type='ploop' format='ploop'/>
<source file='/path-to-image'/>
<target dir='/'/>
</filesystem>
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Handle information about filesystems in the domain config
and add the corresponding devices to the container via the
Parallels SDK.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
PCS removes the disk image from the filesystem if you remove it
from the config. There is a special flag, PVCF_DETACH_HDD_BUNDLE,
which allows removing the disk only from the VM/CT config.
If you call virDomainDefine and remove some disk from the
config, the image should be preserved, so always call PrlVm_CommitEx
with the PVCF_DETACH_HDD_BUNDLE flag.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Obtain information about the container's filesystems and
store it in the virDomainDef structure.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>