Add support for VMs with no box

Vagrant already supports VMs without boxes for the Docker provider.
We leverage this in libvirt as well. The use case for this is to PXE
boot a Vagrant VM that is then installed over the network; a different
use case would be to test PXE-booted clients that do not use a hard
drive whatsoever.
Gerben Meijer
2015-08-10 17:10:23 +02:00
parent 262e8eed59
commit bbbd804f28
6 changed files with 133 additions and 79 deletions

README.md

@@ -22,6 +22,7 @@ welcome and can help a lot :-)
 * Snapshots via [sahara](https://github.com/jedi4ever/sahara).
 * Package caching via [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
 * Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
+* Support VMs with no box for PXE boot purposes

 ## Future work

@@ -425,6 +426,26 @@ Vagrant.configure("2") do |config|
 end
 ```

+## No box and PXE boot
+
+There is support for PXE booting VMs with no disks as well as PXE booting VMs with blank disks. There are some limitations:
+
+* No provisioning scripts are run
+* No network configuration is applied to the VM
+* No SSH connection can be made
+* ```vagrant halt``` will only work cleanly if the VM handles ACPI shutdown signals
+
+In short, VMs without a box can be created, halted and destroyed, but all other functionality cannot be used.
+
+An example of a PXE-booted VM with no disks whatsoever:
+
+```
+Vagrant.configure("2") do |config|
+  config.vm.define :pxeclient do |pxeclient|
+    pxeclient.vm.provider :libvirt do |domain|
+      domain.boot 'network'
+    end
+  end
+end
+```
+
 ## SSH Access To VM

 vagrant-libvirt supports vagrant's [standard ssh settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).
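A note on the blank-disk case the commit message mentions: the README example above boots a VM with no storage at all, while a network install needs an empty disk to target. A configuration along the following lines should work; this is a sketch rather than part of the commit, using the plugin's existing `domain.storage :file` syntax, with the `:size` value and the `:pxeclient` name purely illustrative, and assuming that repeated `domain.boot` calls append to the boot order.

```
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |domain|
      # Try PXE first, then fall back to the local disk once the
      # network installer has written something bootable to it.
      domain.boot 'network'
      domain.boot 'hd'
      # Attach an empty volume for the installer to target (size illustrative).
      domain.storage :file, :size => '10G'
    end
  end
end
```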

lib/vagrant-libvirt/action.rb

@@ -23,27 +23,34 @@ module VagrantPlugins
           # Create VM if not yet created.
           if !env[:result]
             b2.use SetNameOfDomain
-            b2.use HandleStoragePool
-            b2.use HandleBox
-            b2.use HandleBoxImage
-            b2.use CreateDomainVolume
-            b2.use CreateDomain
-
-            b2.use Provision
-            b2.use PrepareNFSValidIds
-            b2.use SyncedFolderCleanup
-            b2.use SyncedFolders
-            b2.use PrepareNFSSettings
-            b2.use ShareFolders
-            b2.use CreateNetworks
-            b2.use CreateNetworkInterfaces
-            b2.use StartDomain
-            b2.use WaitTillUp
-            b2.use ForwardPorts
-            b2.use SetHostname
-            # b2.use SyncFolders
+            if !env[:machine].box
+              b2.use CreateDomain
+              b2.use CreateNetworks
+              b2.use CreateNetworkInterfaces
+              b2.use StartDomain
+            else
+              b2.use HandleStoragePool
+              b2.use HandleBox
+              b2.use HandleBoxImage
+              b2.use CreateDomainVolume
+              b2.use CreateDomain
+
+              b2.use Provision
+              b2.use PrepareNFSValidIds
+              b2.use SyncedFolderCleanup
+              b2.use SyncedFolders
+              b2.use PrepareNFSSettings
+              b2.use ShareFolders
+              b2.use CreateNetworks
+              b2.use CreateNetworkInterfaces
+              b2.use StartDomain
+              b2.use WaitTillUp
+              b2.use ForwardPorts
+              b2.use SetHostname
+              # b2.use SyncFolders
+            end
           else
             b2.use action_start
           end
@@ -68,27 +75,33 @@ module VagrantPlugins
next next
end end
# VM is not running or suspended. if !env[:machine].box
# With no box, we just care about network creation and starting it
b3.use CreateNetworks
b3.use StartDomain
else
# VM is not running or suspended.
b3.use Provision b3.use Provision
# Ensure networks are created and active # Ensure networks are created and active
b3.use CreateNetworks b3.use CreateNetworks
b3.use PrepareNFSValidIds b3.use PrepareNFSValidIds
b3.use SyncedFolderCleanup b3.use SyncedFolderCleanup
b3.use SyncedFolders b3.use SyncedFolders
# Start it.. # Start it..
b3.use StartDomain b3.use StartDomain
# Machine should gain IP address when comming up, # Machine should gain IP address when comming up,
# so wait for dhcp lease and store IP into machines data_dir. # so wait for dhcp lease and store IP into machines data_dir.
b3.use WaitTillUp b3.use WaitTillUp
b3.use ForwardPorts b3.use ForwardPorts
b3.use PrepareNFSSettings b3.use PrepareNFSSettings
b3.use ShareFolders b3.use ShareFolders
end
end end
end end
end end
@@ -154,7 +167,9 @@ module VagrantPlugins
           if !env[:result]
             # Try to remove stale volumes anyway
             b2.use SetNameOfDomain
-            b2.use RemoveStaleVolume
+            if env[:machine].box
+              b2.use RemoveStaleVolume
+            end
             if !env[:result]
               b2.use MessageNotCreated
             end
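Everything in these chains hinges on one fact: when a Vagrantfile defines no `config.vm.box`, `env[:machine].box` is `nil`. A minimal standalone middleware in the same style shows the guard in isolation; this is an illustrative sketch, and the `WarnIfBoxless` class is hypothetical, not part of this commit.

```
module VagrantPlugins
  module ProviderLibvirt
    module Action
      class WarnIfBoxless
        def initialize(app, _env)
          @app = app
        end

        def call(env)
          # env[:machine].box is nil for boxless VMs; that is the test the
          # chains above use to skip provisioning, synced folders and SSH.
          if env[:machine].box.nil?
            env[:ui].info('No box configured; only create, halt and destroy are supported.')
          end
          @app.call(env)
        end
      end
    end
  end
end
```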

lib/vagrant-libvirt/action/create_domain.rb

@@ -72,18 +72,29 @@ module VagrantPlugins
         @os_type = 'hvm'

-        # Get path to domain image from the storage pool selected.
-        actual_volumes =
-          env[:machine].provider.driver.connection.volumes.all.select do |x|
-            x.pool_name == @storage_pool_name
-          end
-        domain_volume = ProviderLibvirt::Util::Collection.find_matching(
-          actual_volumes, "#{@name}.img")
-        raise Errors::DomainVolumeExists if domain_volume.nil?
-        @domain_volume_path = domain_volume.path
+        # Get path to domain image from the storage pool selected if we have a box.
+        if env[:machine].box
+          actual_volumes =
+            env[:machine].provider.driver.connection.volumes.all.select do |x|
+              x.pool_name == @storage_pool_name
+            end
+          domain_volume = ProviderLibvirt::Util::Collection.find_matching(
+            actual_volumes, "#{@name}.img")
+          raise Errors::DomainVolumeExists if domain_volume.nil?
+          @domain_volume_path = domain_volume.path
+        end

+        # If we have a box, take the path from the domain volume and set our storage_prefix.
+        # If not, we dump the storage pool xml to get its defined path.
         # the default storage prefix is typically: /var/lib/libvirt/images/
-        storage_prefix = File.dirname(@domain_volume_path) + '/' # steal
+        if env[:machine].box
+          storage_prefix = File.dirname(@domain_volume_path) + '/' # steal
+        else
+          storage_pool = env[:machine].provider.driver.connection.client.lookup_storage_pool_by_name(@storage_pool_name)
+          raise Errors::NoStoragePool if storage_pool.nil?
+          xml = Nokogiri::XML(storage_pool.xml_desc)
+          storage_prefix = xml.xpath("/pool/target/path").inner_text.to_s + '/'
+        end

         @disks.each do |disk|
           disk[:path] ||= _disk_name(@name, disk)
@@ -125,7 +136,9 @@ module VagrantPlugins
         env[:ui].info(" -- Cpus: #{@cpus}")
         env[:ui].info(" -- Memory: #{@memory_size / 1024}M")
         env[:ui].info(" -- Loader: #{@loader}")
-        env[:ui].info(" -- Base box: #{env[:machine].box.name}")
+        if env[:machine].box
+          env[:ui].info(" -- Base box: #{env[:machine].box.name}")
+        end
         env[:ui].info(" -- Storage pool: #{@storage_pool_name}")
         env[:ui].info(" -- Image: #{@domain_volume_path} (#{env[:box_virtual_size]}G)")
         env[:ui].info(" -- Volume Cache: #{@domain_volume_cache}")
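The `else` branch above derives the storage prefix from the pool definition instead of from a volume path. The same lookup can be reproduced standalone with the ruby-libvirt and nokogiri gems; this sketch assumes a local qemu system connection and uses 'default' as an example pool name.

```
require 'libvirt'
require 'nokogiri'

conn = Libvirt::open('qemu:///system')
pool = conn.lookup_storage_pool_by_name('default')

# Dump the pool XML and read <pool><target><path> out of it.
xml = Nokogiri::XML(pool.xml_desc)
storage_prefix = xml.xpath('/pool/target/path').inner_text + '/'
puts storage_prefix # typically /var/lib/libvirt/images/
```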

lib/vagrant-libvirt/action/create_network_interfaces.rb

@@ -125,43 +125,46 @@ module VagrantPlugins
         # Continue the middleware chain.
         @app.call(env)

-        # Configure interfaces that user requested. Machine should be up and
-        # running now.
-        networks_to_configure = []
-
-        adapters.each_with_index do |options, slot_number|
-          # Skip configuring the management network, which is on the first interface.
-          # It's used for provisioning and it has to be available during provisioning,
-          # ifdown command is not acceptable here.
-          next if slot_number == 0
-          next if options[:auto_config] === false
-          @logger.debug "Configuring interface slot_number #{slot_number} options #{options}"
-
-          network = {
-            :interface => slot_number,
-            :use_dhcp_assigned_default_route => options[:use_dhcp_assigned_default_route],
-            :mac_address => options[:mac],
-          }
-
-          if options[:ip]
-            network = {
-              :type => :static,
-              :ip => options[:ip],
-              :netmask => options[:netmask],
-            }.merge(network)
-          else
-            network[:type] = :dhcp
-          end
-
-          # do not run configure_networks for tcp tunnel interfaces
-          next if options.fetch(:tcp_tunnel_type, nil)
-
-          networks_to_configure << network
-        end
-
-        env[:ui].info I18n.t('vagrant.actions.vm.network.configuring')
-        env[:machine].guest.capability(
-          :configure_networks, networks_to_configure)
+        if env[:machine].box
+          # Configure interfaces that user requested. Machine should be up and
+          # running now.
+          networks_to_configure = []
+
+          adapters.each_with_index do |options, slot_number|
+            # Skip configuring the management network, which is on the first interface.
+            # It's used for provisioning and it has to be available during provisioning,
+            # ifdown command is not acceptable here.
+            next if slot_number == 0
+            next if options[:auto_config] === false
+            @logger.debug "Configuring interface slot_number #{slot_number} options #{options}"
+
+            network = {
+              :interface => slot_number,
+              :use_dhcp_assigned_default_route => options[:use_dhcp_assigned_default_route],
+              :mac_address => options[:mac],
+            }
+
+            if options[:ip]
+              network = {
+                :type => :static,
+                :ip => options[:ip],
+                :netmask => options[:netmask],
+              }.merge(network)
+            else
+              network[:type] = :dhcp
+            end
+
+            # do not run configure_networks for tcp tunnel interfaces
+            next if options.fetch(:tcp_tunnel_type, nil)

+            networks_to_configure << network
+          end
+
+          env[:ui].info I18n.t('vagrant.actions.vm.network.configuring')
+          env[:machine].guest.capability(
+            :configure_networks, networks_to_configure)
+        end

       private
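Without a box there is no guest Vagrant can reach over SSH, so the whole block above is skipped. For reference, each hash appended to `networks_to_configure` and handed to the `:configure_networks` guest capability looks roughly like this (all values illustrative):

```
{
  :interface                       => 1,
  :type                            => :static,
  :ip                              => '192.168.121.10',
  :netmask                         => '255.255.255.0',
  :mac_address                     => '52:54:00:a1:b2:c3',
  :use_dhcp_assigned_default_route => nil,
}
```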

lib/vagrant-libvirt/plugin.rb

@@ -23,7 +23,7 @@ module VagrantPlugins
         Config
       end

-      provider('libvirt', parallel: true) do
+      provider('libvirt', parallel: true, box_optional: true) do
         require_relative 'provider'
         Provider
       end
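`box_optional: true` is what makes the rest of this commit reachable: Vagrant core normally refuses to bring up a machine whose Vagrantfile sets no `config.vm.box`, and this flag (the same one the Docker provider uses) tells it that this provider can handle boxless machines, so the action chains above actually get to run.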

lib/vagrant-libvirt/templates/domain.xml.erb

@@ -49,12 +49,14 @@
   </features>
   <clock offset='utc'/>
   <devices>
+    <% if @domain_volume_path %>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='<%= @domain_volume_cache %>'/>
       <source file='<%= @domain_volume_path %>'/>
       <%# we need to ensure a unique target dev -%>
       <target dev='vda' bus='<%= @disk_bus %>'/>
     </disk>
+    <% end %>
     <%# additional disks -%>
     <% @disks.each do |d| -%>
     <disk type='file' device='disk'>
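A quick way to see the effect of the new guard is to render a trimmed copy of the template by hand; the template string below is an abbreviated stand-in for the file above, not the full file.

```
require 'erb'

TEMPLATE = <<-'XML'
<devices>
<% if @domain_volume_path %>
  <disk type='file' device='disk'>
    <source file='<%= @domain_volume_path %>'/>
  </disk>
<% end %>
</devices>
XML

@domain_volume_path = nil
puts ERB.new(TEMPLATE).result(binding) # boxless: no <disk> element rendered

@domain_volume_path = '/var/lib/libvirt/images/box.img'
puts ERB.new(TEMPLATE).result(binding) # with a box: <disk> points at the volume
```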