Add support for VMs with no box

Vagrant already supports VMs without boxes with Docker.
We leverage this in libvirt as well. The use case for this is to PXE
boot a vagrant VM that is then installed over the network; a different
use case would be to test PXE booted clients that do not use a hard
drive whatsoever.
Gerben Meijer
2015-08-10 17:10:23 +02:00
parent 262e8eed59
commit bbbd804f28
6 changed files with 133 additions and 79 deletions

View File

@@ -22,6 +22,7 @@ welcome and can help a lot :-)
* Snapshots via [sahara](https://github.com/jedi4ever/sahara).
* Package caching via [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
* Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
* Support VMs with no box for PXE boot purposes
## Future work
@@ -425,6 +426,26 @@ Vagrant.configure("2") do |config|
end
```
## No box and PXE boot
There is support for PXE booting VMs with no disks as well as PXE booting VMs with blank disks. There are some limitations:
* No provisioning scripts are run
* No network configuration is applied to the VM
* No SSH connection can be made
* `vagrant halt` will only work cleanly if the VM handles ACPI shutdown signals
In short, VMs without a box can be created, halted, and destroyed, but no other functionality can be used.
An example for a PXE booted VM with no disks whatsoever:
```ruby
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |domain|
      domain.boot 'network'
    end
  end
end
```
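For the blank-disk case, a sketch of a Vagrantfile that attaches an empty volume and boots from the network first, falling back to the hard disk once an OS has been installed (the disk size and type here are illustrative, not required values):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |domain|
      # Attach a blank qcow2 disk for the network installer to target.
      domain.storage :file, :size => '100G', :type => 'qcow2'
      # Boot order: try PXE first, then the (installed) hard disk.
      domain.boot 'network'
      domain.boot 'hd'
    end
  end
end
```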
## SSH Access To VM
vagrant-libvirt supports vagrant's [standard ssh settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).

View File

@@ -23,6 +23,12 @@ module VagrantPlugins
# Create VM if not yet created.
if !env[:result]
b2.use SetNameOfDomain
if !env[:machine].box
b2.use CreateDomain
b2.use CreateNetworks
b2.use CreateNetworkInterfaces
b2.use StartDomain
else
b2.use HandleStoragePool
b2.use HandleBox
b2.use HandleBoxImage
@@ -44,6 +50,7 @@ module VagrantPlugins
b2.use ForwardPorts
b2.use SetHostname
# b2.use SyncFolders
end
else
b2.use action_start
end
@@ -68,6 +75,11 @@ module VagrantPlugins
next
end
if !env[:machine].box
# With no box, we just care about network creation and starting it
b3.use CreateNetworks
b3.use StartDomain
else
# VM is not running or suspended.
b3.use Provision
@@ -93,6 +105,7 @@ module VagrantPlugins
end
end
end
end
# This is the action that is primarily responsible for halting the
# virtual machine.
@@ -154,7 +167,9 @@ module VagrantPlugins
if !env[:result]
# Try to remove stale volumes anyway
b2.use SetNameOfDomain
if env[:machine].box
b2.use RemoveStaleVolume
end
if !env[:result]
b2.use MessageNotCreated
end

View File

@@ -72,18 +72,29 @@ module VagrantPlugins
@os_type = 'hvm'
# Get path to domain image from the storage pool selected.
# Get path to domain image from the storage pool selected if we have a box.
if env[:machine].box
actual_volumes =
env[:machine].provider.driver.connection.volumes.all.select do |x|
x.pool_name == @storage_pool_name
end
domain_volume = ProviderLibvirt::Util::Collection.find_matching(
actual_volumes, "#{@name}.img")
actual_volumes,"#{@name}.img")
raise Errors::DomainVolumeExists if domain_volume.nil?
@domain_volume_path = domain_volume.path
end
# If we have a box, take the path from the domain volume and set our storage_prefix.
# If not, we dump the storage pool xml to get its defined path.
# the default storage prefix is typically: /var/lib/libvirt/images/
if env[:machine].box
storage_prefix = File.dirname(@domain_volume_path) + '/' # steal
else
storage_pool = env[:machine].provider.driver.connection.client.lookup_storage_pool_by_name(@storage_pool_name)
raise Errors::NoStoragePool if storage_pool.nil?
xml = Nokogiri::XML(storage_pool.xml_desc)
storage_prefix = xml.xpath("/pool/target/path").inner_text.to_s + '/'
end
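With no box there is no domain volume to take a directory from, so the else branch above reads the prefix out of the pool definition instead. A minimal standalone sketch of that XPath lookup, using stdlib REXML in place of the plugin's Nokogiri call and a hypothetical `default` pool definition:

```ruby
require 'rexml/document'

# Illustrative pool XML; the real output of storage_pool.xml_desc contains
# more elements, but only /pool/target/path is read here.
pool_xml = <<-XML
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
XML

doc = REXML::Document.new(pool_xml)
storage_prefix = REXML::XPath.first(doc, '/pool/target/path').text + '/'
puts storage_prefix
```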
@disks.each do |disk|
disk[:path] ||= _disk_name(@name, disk)
@@ -125,7 +136,9 @@ module VagrantPlugins
env[:ui].info(" -- Cpus: #{@cpus}")
env[:ui].info(" -- Memory: #{@memory_size / 1024}M")
env[:ui].info(" -- Loader: #{@loader}")
if env[:machine].box
env[:ui].info(" -- Base box: #{env[:machine].box.name}")
end
env[:ui].info(" -- Storage pool: #{@storage_pool_name}")
env[:ui].info(" -- Image: #{@domain_volume_path} (#{env[:box_virtual_size]}G)")
env[:ui].info(" -- Volume Cache: #{@domain_volume_cache}")

View File

@@ -125,6 +125,8 @@ module VagrantPlugins
# Continue the middleware chain.
@app.call(env)
if env[:machine].box
# Configure interfaces that user requested. Machine should be up and
# running now.
networks_to_configure = []
@@ -163,6 +165,7 @@ module VagrantPlugins
env[:machine].guest.capability(
:configure_networks, networks_to_configure)
end
end
private

View File

@@ -23,7 +23,7 @@ module VagrantPlugins
Config
end
provider('libvirt', parallel: true) do
provider('libvirt', parallel: true, box_optional: true) do
require_relative 'provider'
Provider
end

View File

@@ -49,12 +49,14 @@
</features>
<clock offset='utc'/>
<devices>
<% if @domain_volume_path %>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='<%= @domain_volume_cache %>'/>
<source file='<%= @domain_volume_path %>'/>
<%# we need to ensure a unique target dev -%>
<target dev='vda' bus='<%= @disk_bus %>'/>
</disk>
<% end %>
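When a box is present and `@domain_volume_path` is set, the guarded block above renders a disk element roughly like the following (path, cache mode, and bus are illustrative values, not defaults guaranteed by the plugin):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='default'/>
  <source file='/var/lib/libvirt/images/pxeclient.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```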
<%# additional disks -%>
<% @disks.each do |d| -%>
<disk type='file' device='disk'>