This is required because in current versions of libvirt, it is not
possible to specify a boot order when attaching a device; therefore we
can only parse the entire domain XML after all devices have been created
and then assign boot ordering according to the Vagrantfile
specification. This allows us to specify an exact boot order across hd,
cdrom, and network devices.
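For illustration, a Vagrantfile can request an explicit order like this (a sketch assuming the provider's `boot` option):
```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # devices are tried in the order listed: network first, then hd
    libvirt.boot "network"
    libvirt.boot "hd"
  end
end
```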
Vagrant already supports box-less VMs with the Docker provider, and we
leverage this in libvirt as well. The use case for this is to PXE boot a
Vagrant VM that is then installed over the network; a different use case
would be to test PXE-booted clients that do not use a hard drive
whatsoever.
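A box-less PXE client could then be sketched roughly as follows (storage size and option values are illustrative):
```ruby
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |libvirt|
      # no box: the empty disk is installed over the network via PXE
      libvirt.storage :file, :size => "20G"
      libvirt.boot "network"
      libvirt.boot "hd"
    end
  end
end
```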
```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # very useful when having mouse issues when viewing VMs via VNC
    libvirt.input :type => "tablet", :bus => "usb"
  end
end
```
Remove the ReadSSHInfo and ReadState actions, along with the
corresponding calls that dispatched Vagrant's `ssh_info` and `state`
queries on the provider to those actions. Change the corresponding
methods added to the Driver and Provider classes to avoid modifying
`machine.id` directly, and let Vagrant take care of resetting it
whenever `state` returns `:not_created`.
This ensures that both `ssh_info` and `state` may be called on machines
by other threads, such as the ansible provisioner building its inventory
file, without raising exceptions: machine locks otherwise prevent
modification (setting `machine.id` to nil), and Batch locking prevents
multiple sets of actions being executed on the same machine by
different threads/processes.
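A rough sketch of the resulting provider methods (the class layout follows the plugin API, but `driver.state`, `driver.ssh_ip`, and the `ssh_info` hash are illustrative assumptions):
```ruby
module VagrantPlugins
  module ProviderLibvirt
    class Provider < Vagrant.plugin("2", :provider)
      def initialize(machine)
        @machine = machine
      end

      def state
        # report :not_created without touching machine.id; Vagrant
        # resets the id itself when it sees this state
        state_id = @machine.id ? driver.state(@machine) : :not_created
        Vagrant::MachineState.new(state_id, state_id.to_s, state_id.to_s)
      end

      def ssh_info
        # returning nil tells Vagrant the machine is not ready for SSH
        return nil if state.id != :running

        { host: driver.ssh_ip(@machine), port: 22 } # hypothetical helper
      end
    end
  end
end
```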
Follows the design of the in-tree docker provider for vagrant.
Handle the libvirt connection through a driver located within the
provider so that it can be reached via the machine settings. Adopt the
format followed by the docker/virtualbox providers, as this is likely to
remain well supported.
This will allow queries to be made without needing to set up a specific
action, which is important when dealing with parallel machine
provisioning. Currently, calling actions from other threads to retrieve
information on the state of the other running machines will cause
Vagrant to complain about the machine being locked.
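A minimal sketch of the driver accessor on the provider, mirroring the docker provider (the `Driver` construction is illustrative):
```ruby
# inside the Provider class from the sketch above
def driver
  return @driver if @driver

  # memoize a driver that wraps the libvirt connection, so anything
  # holding the machine can query libvirt without dispatching an action
  @driver = Driver.new(@machine)
end
```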
Loggers must be defined in the correct hierarchical order to ensure that
child loggers inherit the level defined on the parent logger. Otherwise
we would need to traverse the entire tree to modify the level.
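For example, with Log4r (logger names are illustrative):
```ruby
require 'log4r'

# define the parent first so its level is in place...
parent = Log4r::Logger.new('vagrant_libvirt')
parent.level = Log4r::WARN

# ...and is inherited by children created afterwards; a child created
# before its parent would fall back to the root logger's level instead
child = Log4r::Logger.new('vagrant_libvirt::driver')
child.debug('dropped') # suppressed: effective level comes from the parent
```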
Libvirt 9p filesystem passthrough requires the mount tag to be less than
31 bytes; using the mount point as the tag can easily exceed this. In
KVM passthrough the "target" attribute in the XML is actually the
mount tag.
Fixed the mounting script to use the same MD5-derived tag so that it
mounts the right filesystem.
Fixes #323
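The tag derivation can be sketched with Ruby's Digest::MD5 (the path is a placeholder; the truncation mirrors the limit above):
```ruby
require 'digest/md5'

mount_point = '/home/user/projects/deeply/nested/synced_folder'

# hash the mount point and truncate so the tag stays within libvirt's
# limit regardless of path length; the guest mount script must derive
# the identical tag to find the filesystem
mount_tag = Digest::MD5.new.update(mount_point).to_s[0, 31]
```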
Vagrant only requires an IP **if** `type` is set **and** is not `dhcp`.
The only other type, currently, is `nfs`, which obviously requires an
IP; so in that circumstance, or if `type` is set to anything but `dhcp`,
Vagrant will error if an IP is not assigned.
If `type` is `nil`, it is deleted from the options hash.
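For illustration (the address is a placeholder):
```ruby
Vagrant.configure("2") do |config|
  # dhcp type: no static IP required
  config.vm.network :private_network, :type => "dhcp"

  # any other (or unset) type: a static address must be assigned,
  # otherwise Vagrant errors, e.g. for nfs
  config.vm.network :private_network, :ip => "10.20.30.40"
end
```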
* Moved network lookup by IP to `lookup_network_by_ip`
* Look up by name only if lookup by IP fails *and* forward_mode is
  `veryisolated`. This preserves existing functionality.
* Added network creation ability to `handle_network_name_option` (see
  the sketch after this list)
  * Network **must** be veryisolated to be created by name
  * Network **must not** have an IP address to be used in the
    `handle_network_name_option` path
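Put together, a machine could declare a named, address-less, veryisolated network like this (a sketch assuming the provider's `:libvirt__network_name` and `:libvirt__forward_mode` options; the network name is a placeholder):
```ruby
Vagrant.configure("2") do |config|
  config.vm.define :machine do |machine|
    # created by name; it must be veryisolated and must not carry an
    # IP address to take the handle_network_name_option path
    machine.vm.network :private_network,
      :libvirt__network_name => "my-isolated-net",
      :libvirt__forward_mode => "veryisolated"
  end
end
```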
Minor changes:
* corrected inverted autostart logic
* screen formatting
Fixes #402
Disable action locks on read actions so that Vagrant may return ssh
configuration information while other actions are being run on the
machine.
Vagrant locks each machine when an action is being performed on it, to
ensure that it cannot be modified by multiple actions at the same time.
However, certain read operations, such as retrieving the ssh
information, may be triggered either by other machines executing in
parallel or by the user running 'vagrant ssh <machine>' during a
provision step. When this occurs Vagrant will throw an error telling the
user that multiple actions may not be executed in parallel, and that
they must wait until the existing action finishes or check whether
vagrant/ruby must be terminated.
An example issue: the ansible provisioner builds an inventory file, and
each executing machine queries the ssh_info of all other active
machines. When run serially this works as expected; in parallel it only
succeeds if all the actions associated with ssh_info complete before any
machine begins executing ansible itself.
Fixes #420
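The unlocked read path can be sketched as follows, assuming Vagrant's `Machine#action` `:lock` option (the action name and env key are illustrative):
```ruby
def ssh_info
  # run the read action without taking the machine lock so parallel
  # machines and 'vagrant ssh' can query while another action runs
  env = @machine.action(:read_ssh_info, lock: false)
  env[:machine_ssh_info]
end
```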
With veryisolated networks there is no IP address assigned, so matching
on the network name and IP address fails. This change allows the null IP
address to match a named network when that forward_mode is set.
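Condensed, the relaxed match amounts to something like this (a hypothetical helper; all names are illustrative):
```ruby
def network_matches?(net, wanted_name, wanted_ip)
  return false unless net.name == wanted_name

  # a veryisolated network carries no address, so let a nil IP match it
  net.ip == wanted_ip ||
    (wanted_ip.nil? && net.forward_mode == 'veryisolated')
end
```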