Merge pull request #442 from infernix/box_optional

Make boxes optional and move boot ordering into a separate class
This commit is contained in:
Dmitry Vasilets 2015-08-15 01:01:04 +02:00
commit 454bc78688
8 changed files with 219 additions and 92 deletions


@ -22,6 +22,7 @@ welcome and can help a lot :-)
* Snapshots via [sahara](https://github.com/jedi4ever/sahara).
* Package caching via [vagrant-cachier](http://fgrehm.viewdocs.io/vagrant-cachier/).
* Use boxes from other Vagrant providers via [vagrant-mutate](https://github.com/sciurus/vagrant-mutate).
* Support VMs with no box for PXE boot purposes (Vagrant 1.6 and up)
## Future work
@ -172,7 +173,7 @@ end
* `machine` - Sets machine type. Equivalent to qemu `-machine`. Use `qemu-system-x86_64 -machine help` to get a list of supported machines.
* `machine_arch` - Sets machine architecture. This helps libvirt to determine the correct emulator type. Possible values depend on your version of qemu. For possible values, see which emulator executable `qemu-system-*` your system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`, `i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`, `mips64el`, `mipsel`, `moxie`, `or32`, `ppc`, `ppc64`, `ppcemb`, `s390x`, `sh4`, `sh4eb`, `sparc`, `sparc64`, `tricore`, `unicore32`, `x86_64`, `xtensa`, `xtensaeb`.
* `machine_virtual_size` - Sets the disk size in GB for the machine, overriding the default specified in the box. Allows boxes to be defined with a minimal-size disk by default and grown to a larger size at creation time. Sizes smaller than the size specified by the box metadata are ignored. Note that there is currently no support for automatically resizing the filesystem to take advantage of the larger disk.
* `boot` - Change the boot order and enables the boot menu. Possible options are "hd" or "network". Defaults to "hd" with boot menu disabled. When "network" is set first, *all* NICs will be tried before the first disk is tried.
* `boot` - Changes the boot order and enables the boot menu. Possible options are "hd", "network" and "cdrom". Defaults to "hd" with the boot menu disabled. When "network" is set without "hd", only the NICs will be tried; see below for more detail.
* `nic_adapter_count` - Defaults to '8'. Only use case for increasing this count is for VMs that virtualize switches such as Cumulus Linux. Max value for Cumulus Linux VMs is 33.
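As a sketch of how the `boot` options above combine (hypothetical machine name `installer`; assumes a cdrom device is attached to the domain), a Vagrantfile could prefer CD-ROM boot with a hard-disk fallback:

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :installer do |installer|
    installer.vm.provider :libvirt do |domain|
      # Try any attached cdrom first, then fall back to the first hard disk.
      domain.boot 'cdrom'
      domain.boot 'hd'
    end
  end
end
```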
@ -197,7 +198,8 @@ Vagrant.configure("2") do |config|
The following example shows part of a Vagrantfile that enables the VM to
boot from a network interface first and a hard disk second. This could be
used to run VMs that are meant to be a PXE booted machines.
used to run VMs that are meant to be PXE booted machines. Be aware that
if `hd` is not specified as a boot option, it will never be tried.
```ruby
Vagrant.configure("2") do |config|
@ -425,6 +427,38 @@ Vagrant.configure("2") do |config|
end
```
## No box and PXE boot
There is support for PXE booting VMs with no disks as well as PXE booting VMs with blank disks. There are some limitations:
* Requires Vagrant 1.6.0 or newer
* No provisioning scripts are run
* No network configuration is applied to the VM
* No SSH connection can be made
* `vagrant halt` will only work cleanly if the VM handles ACPI shutdown signals
In short, VMs without a box can be created, halted, and destroyed, but no other functionality can be used.
An example of a PXE booted VM with no disks whatsoever:

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |domain|
      domain.boot 'network'
    end
  end
end
```
And an example of a PXE booted VM with no box but a blank disk, which will boot from this HD if the NICs fail to PXE boot:

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :pxeclient do |pxeclient|
    pxeclient.vm.provider :libvirt do |domain|
      domain.storage :file, :size => '100G', :type => 'qcow2'
      domain.boot 'network'
      domain.boot 'hd'
    end
  end
end
```
## SSH Access To VM
vagrant-libvirt supports vagrant's [standard ssh settings](https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html).


@ -23,27 +23,36 @@ module VagrantPlugins
# Create VM if not yet created.
if !env[:result]
b2.use SetNameOfDomain
b2.use HandleStoragePool
b2.use HandleBox
b2.use HandleBoxImage
b2.use CreateDomainVolume
b2.use CreateDomain
if !env[:machine].box
b2.use CreateDomain
b2.use CreateNetworks
b2.use CreateNetworkInterfaces
b2.use SetBootOrder
b2.use StartDomain
else
b2.use HandleStoragePool
b2.use HandleBox
b2.use HandleBoxImage
b2.use CreateDomainVolume
b2.use CreateDomain
b2.use Provision
b2.use PrepareNFSValidIds
b2.use SyncedFolderCleanup
b2.use SyncedFolders
b2.use PrepareNFSSettings
b2.use ShareFolders
b2.use CreateNetworks
b2.use CreateNetworkInterfaces
b2.use Provision
b2.use PrepareNFSValidIds
b2.use SyncedFolderCleanup
b2.use SyncedFolders
b2.use PrepareNFSSettings
b2.use ShareFolders
b2.use CreateNetworks
b2.use CreateNetworkInterfaces
b2.use SetBootOrder
b2.use StartDomain
b2.use WaitTillUp
b2.use StartDomain
b2.use WaitTillUp
b2.use ForwardPorts
b2.use SetHostname
# b2.use SyncFolders
b2.use ForwardPorts
b2.use SetHostname
# b2.use SyncFolders
end
else
b2.use action_start
end
@ -68,27 +77,35 @@ module VagrantPlugins
next
end
# VM is not running or suspended.
if !env[:machine].box
# With no box, we just care about network creation and starting it
b3.use CreateNetworks
b3.use SetBootOrder
b3.use StartDomain
else
# VM is not running or suspended.
b3.use Provision
b3.use Provision
# Ensure networks are created and active
b3.use CreateNetworks
# Ensure networks are created and active
b3.use CreateNetworks
b3.use SetBootOrder
b3.use PrepareNFSValidIds
b3.use SyncedFolderCleanup
b3.use SyncedFolders
b3.use PrepareNFSValidIds
b3.use SyncedFolderCleanup
b3.use SyncedFolders
# Start it..
b3.use StartDomain
# Start it..
b3.use StartDomain
# Machine should gain IP address when coming up,
# so wait for dhcp lease and store IP into machine's data_dir.
b3.use WaitTillUp
# Machine should gain IP address when coming up,
# so wait for dhcp lease and store IP into machine's data_dir.
b3.use WaitTillUp
b3.use ForwardPorts
b3.use PrepareNFSSettings
b3.use ShareFolders
b3.use ForwardPorts
b3.use PrepareNFSSettings
b3.use ShareFolders
end
end
end
end
@ -154,7 +171,9 @@ module VagrantPlugins
if !env[:result]
# Try to remove stale volumes anyway
b2.use SetNameOfDomain
b2.use RemoveStaleVolume
if env[:machine].box
b2.use RemoveStaleVolume
end
if !env[:result]
b2.use MessageNotCreated
end
@ -321,6 +340,7 @@ module VagrantPlugins
autoload :ReadMacAddresses, action_root.join('read_mac_addresses')
autoload :ResumeDomain, action_root.join('resume_domain')
autoload :SetNameOfDomain, action_root.join('set_name_of_domain')
autoload :SetBootOrder, action_root.join('set_boot_order')
# I don't think we need it anymore
autoload :ShareFolders, action_root.join('share_folders')


@ -72,18 +72,29 @@ module VagrantPlugins
@os_type = 'hvm'
# Get path to domain image from the storage pool selected.
actual_volumes =
env[:machine].provider.driver.connection.volumes.all.select do |x|
x.pool_name == @storage_pool_name
end
domain_volume = ProviderLibvirt::Util::Collection.find_matching(
actual_volumes, "#{@name}.img")
raise Errors::DomainVolumeExists if domain_volume.nil?
@domain_volume_path = domain_volume.path
# Get path to domain image from the storage pool selected if we have a box.
if env[:machine].box
actual_volumes =
env[:machine].provider.driver.connection.volumes.all.select do |x|
x.pool_name == @storage_pool_name
end
domain_volume = ProviderLibvirt::Util::Collection.find_matching(
actual_volumes,"#{@name}.img")
raise Errors::DomainVolumeExists if domain_volume.nil?
@domain_volume_path = domain_volume.path
end
# If we have a box, take the path from the domain volume and set our storage_prefix.
# If not, we dump the storage pool xml to get its defined path.
# the default storage prefix is typically: /var/lib/libvirt/images/
storage_prefix = File.dirname(@domain_volume_path) + '/' # steal
if env[:machine].box
storage_prefix = File.dirname(@domain_volume_path) + '/' # steal
else
storage_pool = env[:machine].provider.driver.connection.client.lookup_storage_pool_by_name(@storage_pool_name)
raise Errors::NoStoragePool if storage_pool.nil?
xml = Nokogiri::XML(storage_pool.xml_desc)
storage_prefix = xml.xpath("/pool/target/path").inner_text.to_s + '/'
end
@disks.each do |disk|
disk[:path] ||= _disk_name(@name, disk)
@ -125,7 +136,9 @@ module VagrantPlugins
env[:ui].info(" -- Cpus: #{@cpus}")
env[:ui].info(" -- Memory: #{@memory_size / 1024}M")
env[:ui].info(" -- Loader: #{@loader}")
env[:ui].info(" -- Base box: #{env[:machine].box.name}")
if env[:machine].box
env[:ui].info(" -- Base box: #{env[:machine].box.name}")
end
env[:ui].info(" -- Storage pool: #{@storage_pool_name}")
env[:ui].info(" -- Image: #{@domain_volume_path} (#{env[:box_virtual_size]}G)")
env[:ui].info(" -- Volume Cache: #{@domain_volume_cache}")


@ -20,7 +20,6 @@ module VagrantPlugins
config = env[:machine].provider_config
@nic_model_type = config.nic_model_type
@nic_adapter_count = config.nic_adapter_count
@boot_order = config.boot_order
@app = app
end
@ -126,43 +125,46 @@ module VagrantPlugins
# Continue the middleware chain.
@app.call(env)
# Configure interfaces that user requested. Machine should be up and
# running now.
networks_to_configure = []
adapters.each_with_index do |options, slot_number|
# Skip configuring the management network, which is on the first interface.
# It's used for provisioning and it has to be available during provisioning,
# ifdown command is not acceptable here.
next if slot_number == 0
next if options[:auto_config] === false
@logger.debug "Configuring interface slot_number #{slot_number} options #{options}"
network = {
:interface => slot_number,
:use_dhcp_assigned_default_route => options[:use_dhcp_assigned_default_route],
:mac_address => options[:mac],
}
if options[:ip]
if env[:machine].box
# Configure interfaces that user requested. Machine should be up and
# running now.
networks_to_configure = []
adapters.each_with_index do |options, slot_number|
# Skip configuring the management network, which is on the first interface.
# It's used for provisioning and it has to be available during provisioning,
# ifdown command is not acceptable here.
next if slot_number == 0
next if options[:auto_config] === false
@logger.debug "Configuring interface slot_number #{slot_number} options #{options}"
network = {
:type => :static,
:ip => options[:ip],
:netmask => options[:netmask],
}.merge(network)
else
network[:type] = :dhcp
:interface => slot_number,
:use_dhcp_assigned_default_route => options[:use_dhcp_assigned_default_route],
:mac_address => options[:mac],
}
if options[:ip]
network = {
:type => :static,
:ip => options[:ip],
:netmask => options[:netmask],
}.merge(network)
else
network[:type] = :dhcp
end
# do not run configure_networks for tcp tunnel interfaces
next if options.fetch(:tcp_tunnel_type, nil)
networks_to_configure << network
end
# do not run configure_networks for tcp tunnel interfaces
next if options.fetch(:tcp_tunnel_type, nil)
networks_to_configure << network
env[:ui].info I18n.t('vagrant.actions.vm.network.configuring')
env[:machine].guest.capability(
:configure_networks, networks_to_configure)
end
env[:ui].info I18n.t('vagrant.actions.vm.network.configuring')
env[:machine].guest.capability(
:configure_networks, networks_to_configure)
end
private


@ -0,0 +1,66 @@
require "log4r"
require 'nokogiri'
module VagrantPlugins
module ProviderLibvirt
module Action
class SetBootOrder
def initialize(app, env)
@app = app
@logger = Log4r::Logger.new("vagrant_libvirt::action::set_boot_order")
config = env[:machine].provider_config
@boot_order = config.boot_order
end
def call(env)
# Get domain first
begin
domain = env[:machine].provider.driver.connection.client.lookup_domain_by_uuid(
env[:machine].id.to_s)
rescue => e
raise Errors::NoDomainError,
:error_message => e.message
end
# Only execute specific boot ordering if it is defined in the Vagrantfile
if @boot_order.count >= 1
# If a domain is initially defined with no box or disk or with an explicit boot order, libvirt adds <boot dev="foo">
# This conflicts with an explicit boot_order configuration, so we need to remove it from the domain xml and feed it back.
# Also see https://bugzilla.redhat.com/show_bug.cgi?id=1248514 as to why we have to do this after all devices have been defined.
xml = Nokogiri::XML(domain.xml_desc)
xml.search("/domain/os/boot").each do |node|
node.remove
end
# Parse the XML and find each defined drive and network interface
hd = xml.search("/domain/devices/disk[@device='disk']")
cdrom = xml.search("/domain/devices/disk[@device='cdrom']")
network = xml.search("/domain/devices/interface[@type='network']")
# Generate an array per device group and a flattened array from all of those
devices = {"hd" => hd, "cdrom" => cdrom, "network" => network}
final_boot_order = @boot_order.flat_map {|category| devices[category] }
# Loop over the entire defined boot order array and create boot order entries in the domain XML
final_boot_order.each_with_index do |node, index|
boot = "<boot order='#{index+1}'/>"
node.add_child(boot)
if node.name == 'disk'
@logger.debug "Setting #{node['device']} to boot index #{index+1}"
elsif node.name == 'interface'
@logger.debug "Setting #{node.name} to boot index #{index+1}"
end
end
# Finally redefine the domain XML through libvirt to apply the boot ordering
env[:machine].provider.driver.connection.client.define_domain_xml(xml.to_s)
end
@app.call(env)
end
end
end
end
end


@ -23,7 +23,7 @@ module VagrantPlugins
Config
end
provider('libvirt', parallel: true) do
provider('libvirt', parallel: true, box_optional: true) do
require_relative 'provider'
Provider
end


@ -44,17 +44,14 @@
</features>
<clock offset='utc'/>
<devices>
<% if @domain_volume_path %>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='<%= @domain_volume_cache %>'/>
<source file='<%= @domain_volume_path %>'/>
<%# we need to ensure a unique target dev -%>
<target dev='vda' bus='<%= @disk_bus %>'/>
<% if @boot_order[0] == 'hd' %>
<boot order='1'/>
<% elsif @boot_order.count >= 1 %>
<boot order='9'/>
<% end %>
</disk>
<% end %>
<%# additional disks -%>
<% @disks.each do |d| -%>
<disk type='file' device='disk'>


@ -6,10 +6,5 @@
<target dev='vnet<%= @iface_number %>'/>
<alias name='net<%= @iface_number %>'/>
<model type='<%=@model_type%>'/>
<% if @boot_order[0] == 'network' %>
<boot order='<%= @iface_number+1 %>'/>
<% elsif @boot_order.include?('network') %>
<boot order='<%= @iface_number+2 %>'/>
<% end %>
</interface>