Loggers must be defined in the correct hierarchical order to ensure that
child loggers inherit the level defined on the parent logger. Otherwise
it is necessary to traverse the entire tree to modify the level.
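A minimal Log4r sketch of the inheritance behaviour described above
(logger names are illustrative):

require 'log4r'

# The parent must be defined before the child; a child created first
# would not pick up the parent's level at creation time.
parent = Log4r::Logger.new('vagrant_libvirt')
parent.level = Log4r::WARN

child = Log4r::Logger.new('vagrant_libvirt::config')
child.level # inherited from 'vagrant_libvirt' (WARN)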
Libvirt 9p filesystem passthrough requires the tag to be less than
31 bytes. Using the mount point as the tag can easily exceed this.
In KVM passthrough the "target" attribute in the XML is actually the
mount tag.
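For reference, a sketch of the relevant libvirt XML (host path and tag
are illustrative):

<filesystem type='mount' accessmode='passthrough'>
  <source dir='/home/user/project'/>
  <!-- "dir" here is really the 9p mount tag: must be < 31 bytes -->
  <target dir='my_mount_tag'/>
</filesystem>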
Fixed the mounting script to use the same MD5 hash to mount the right
filesystem.
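A minimal sketch of the idea, assuming the tag is an MD5 hash of the
host path (helper name and truncation are illustrative):

require 'digest/md5'

# Hash the host directory to get a short, stable 9p mount tag that
# stays under libvirt's 31-byte limit; the guest-side mount script
# computes the same hash to pick the matching tag.
def mount_tag_for(host_path)
  Digest::MD5.hexdigest(host_path)[0, 30]
end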
Fixes #323
Vagrant only requires an IP **if** `type` is set **and** is not `dhcp`. The
only other type, currently, is `nfs`. NFS obviously requires an IP, so under
that circumstance, or if `type` is set to anything but `dhcp`, Vagrant will
error if an IP is not assigned.
If `type` is `nil`, it is deleted from the hash.
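A hedged sketch of that rule in code (hash keys and the raised error
are illustrative, not the actual implementation):

# A nil :type is removed so later checks only see deliberate values.
options.delete(:type) if options[:type].nil?

# An IP is required only when :type is set and is not 'dhcp'.
if options[:type] && options[:type] != 'dhcp' && options[:ip].nil?
  raise 'An IP address is required when type is not dhcp' # illustrative
end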
* Moved network lookup by IP to `lookup_network_by_ip`
* Lookup by name happens only if lookup by IP fails *and* forward_mode is
  `veryisolated` (see the sketch after this list). This preserves existing
  functionality.
* Added network creation ability to `handle_network_name_option`
  * Network **must** be veryisolated to be created by name
  * Network **must not** have an IP address to be used in the
    `handle_network_name_option` path.
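A minimal sketch of that lookup order, assuming the method names above
(other details are illustrative):

# Try IP-based lookup first; fall back to name-based handling only for
# veryisolated networks, so other forward modes behave as before.
network = lookup_network_by_ip(options[:ip]) if options[:ip]
if network.nil? && options[:forward_mode] == 'veryisolated'
  network = handle_network_name_option(options)
end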
Minor changes:
* corrected inverted autostart logic
* screen formatting
Fixes #402
Disable action locks on read actions so that Vagrant may return ssh
configuration information while other actions are being run on the
machine.
Vagrant locks each machine when an action is being performed on it, to
ensure that it cannot be modified by multiple actions at the same time.
However, certain read operations, such as retrieving the ssh information,
may be called either by other machines when executing in parallel, or
when the user runs 'vagrant ssh <machine>' during a provision step. When
this occurs Vagrant will throw an error telling the user that multiple
actions may not be executed in parallel, and that they must wait until
the existing action is finished or terminate vagrant/ruby.
An example of the issue: the ansible provisioner builds an inventory
file where each executing machine queries the ssh_info of all other
active machines. When run serially this works as expected; in parallel,
however, it will only succeed if all the actions associated with
ssh_info are complete before any machine begins executing ansible
itself.
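A hedged sketch of the approach (method and constant names are
hypothetical, not Vagrant's actual API):

READ_ACTIONS = [:read_ssh_info, :read_state] # illustrative list

def action(name, *args)
  # Read-only actions skip the machine lock so they can answer while
  # another (write) action holds it.
  return run_action(name, *args) if READ_ACTIONS.include?(name)

  # All other actions keep the existing exclusive-lock behaviour.
  lock_machine { run_action(name, *args) } # lock_machine: hypothetical
end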
Fixes #420
With veryisolated networks there is no IP address assigned, so matching
the network name and IP address fails. This allows the null IP address
to match a named network when that forward_mode is set.
Useful when configuring virtualized switch topologies using switch VMs
like Cumulus Linux.
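A hedged Vagrantfile sketch, assuming vagrant-libvirt's network options
(network name is illustrative):

Vagrant.configure('2') do |config|
  # A named, veryisolated network carries no IP address at all.
  config.vm.network :private_network,
    :libvirt__network_name => 'switch-net',
    :libvirt__forward_mode => 'veryisolated'
end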
Vagrant network interface auto_config is disabled in the code. This may be
re-enabled in a future update, once it is better understood how to
auto-configure these types of links. For now, all guest OS ports that are
connected to a TCP tunnel are in a link-down state.
TCP tunnels allow guest OSes to exchange STP and LLDP information as
if they were directly connected to each other.
This is not possible with the default virtual switch network mode.
Reference:
https://libvirt.org/formatdomain.html#elementsNICSTCP
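A hedged Vagrantfile sketch of a point-to-point tunnel, assuming tunnel
option names along these lines (ports and IP are illustrative):

Vagrant.configure('2') do |config|
  config.vm.define :spine do |node|
    node.vm.network :private_network,
      :libvirt__tunnel_type => 'server',   # listens for the peer
      :libvirt__tunnel_port => 10001,
      :auto_config => false                # auto_config is disabled anyway
  end
  config.vm.define :leaf do |node|
    node.vm.network :private_network,
      :libvirt__tunnel_type => 'client',   # connects to the server end
      :libvirt__tunnel_ip   => '127.0.0.1',
      :libvirt__tunnel_port => 10001,
      :auto_config => false
  end
end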
Configuring networks based solely on slot numbers doesn't work very
well, since there's no way to guarantee that the interface Vagrant
finds is the same one that vagrant-libvirt created at that index.
For example, Vagrant's Fedora configure_networks action does this:
interface_names = Array.new
machine.communicate.sudo("ls /sys/class/net | grep -v lo") do |_, result|
  interface_names = result.split("\n")
end

interface_names = networks.map do |network|
  "#{interface_names[network[:interface]]}"
end
which means that if your image has 'docker' pre-installed, then
interface_names[0] = "docker0" and hilarity ensues, with the first
non-management network being assigned to the vagrant-libvirt
management interface.
Since interface names are very unreliable (they can be renamed by
udev at will or when hardware changes) the only way to ensure that
the interface vagrant-libvirt attaches to the domain maps to the
correct one inside the VM is by MAC address. Pull the MAC address
out of the libvirt config once the interface has been attached and
pass that up to Vagrant so we have a chance of doing the right thing.
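A hedged sketch of pulling the MAC back out, assuming ruby-libvirt's
`xml_desc` and Nokogiri for parsing (selection of the interface node is
illustrative):

require 'nokogiri'

# Read the attached interface's MAC from the live domain XML; the last
# <interface> is assumed to be the one just attached.
xml   = Nokogiri::XML(domain.xml_desc) # domain is a Libvirt::Domain
iface = xml.xpath('/domain/devices/interface').last
mac   = iface.at_xpath('mac')['address']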
Allow the volume size to be increased beyond the box's default
virtual_size value, so that it is possible to use a box configured with a
minimal initial disk size and create virtual guests with larger disk
sizes.
Warn the user and ignore sizes that are less than the box size, and
inform them that a manual filesystem resize will be needed to take
advantage of the additional available disk space.
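A hedged Vagrantfile sketch, assuming a provider option along these
lines (option name and units are assumptions):

Vagrant.configure('2') do |config|
  config.vm.provider :libvirt do |libvirt|
    # Request a 40 GB volume; values below the box's virtual_size are
    # ignored with a warning, and the guest filesystem still needs a
    # manual resize to use the extra space.
    libvirt.machine_virtual_size = 40
  end
end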
fixes: #37
`virt-sysprep` will, by default, remove all `.ssh` directories from all
users' home directories. Since we need to have the default Vagrant
insecure keypair in the `authorized_keys`, this causes problems later
when trying to use the packaged box, as Vagrant is unable to log in.
`virt-sysprep` has the ability to disable operations via the `--operations`
argument; the `ssh-userdir` operation should be disabled.
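For example, keeping the default operations but skipping `ssh-userdir`
(image path is illustrative):

virt-sysprep --operations defaults,-ssh-userdir -a /path/to/box-disk.img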
Compare the storage pool of the found volumes to avoid selecting a disk
from a different storage pool that is unreachable by the domain.
Users may move between storage pools by configuring the driver, which
means it is possible to find the same image name in multiple storage
pools and incorrectly perform operations based on the one not associated
with the currently specified pool.
When using persistent images to attach as storage, this will cause domain
creation to fail: although the image was found in one of the storage
pools, it cannot be connected because it does not exist in the storage
pool requested for the VM.
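A hedged sketch of the check, assuming fog-libvirt's volume collection
(attribute and variable names are assumptions):

# Match on both name and pool so an identically named image in some
# other pool is never selected for this domain.
volume = connection.volumes.all.find do |candidate|
  candidate.name == name &&
    candidate.pool_name == config.storage_pool_name
end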
Exit the synchronize block before calling the next item in the
middleware chain; otherwise the mutex lock applies to the entire
provision sequence from that point onwards, until the entire chain has
returned to the same point.
Executing a return statement inside a block does not exit the block
first: the expression being returned is evaluated, and only once it has
been processed is the wrapping block exited.
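A hedged before/after sketch of the pattern (names are illustrative,
not the exact middleware):

# Before: the chain continues while the mutex is still held, so every
# subsequent action runs serialized behind this lock.
def call(env)
  @@lock.synchronize do
    # ... work that must not run concurrently ...
    return @app.call(env)   # evaluated *inside* the lock
  end
end

# After: leave the synchronize block, then continue the chain.
def call(env)
  @@lock.synchronize do
    # ... work that must not run concurrently ...
  end
  @app.call(env)
end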
Since the return call here is actually calling the next action in the
chain, this change ensures that the mutex is not held for subsequent
actions, and thus allows Vagrant to perform the remaining actions in
parallel.
Without this, all provisioning of machines will always occur serially
instead of in parallel when requested.