Merge branch 'master' into patch-1

Darragh Bailey 2020-03-27 18:52:02 +00:00, committed by GitHub
commit 386f7ff4fc
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
34 changed files with 330 additions and 273 deletions


@ -1,24 +1,35 @@
---
language: ruby
dist: trusty
dist: xenial
before_install:
- sudo apt-get update -qq
- sudo apt-get install -y libvirt-dev
- gem update --system
- gem install bundler
- gem update --system --conservative || (gem i "rubygems-update:~>2.7" --no-document && update_rubygems)
- gem update bundler --conservative
addons:
apt:
packages: libvirt-dev
update: true
install: bundle install
script: bundle exec rspec --color --format documentation
notifications:
email: false
rvm:
- 2.2.5
- 2.3.3
env:
global:
- NOKOGIRI_USE_SYSTEM_LIBRARIES=true
matrix:
- VAGRANT_VERSION=v2.0.1
- VAGRANT_VERSION=v2.0.4
- VAGRANT_VERSION=v2.1.5
- VAGRANT_VERSION=v2.2.3
rvm:
- 2.2.7
- 2.3.4
- 2.4.1
- 2.6.1
matrix:
allow_failures:
- env: VAGRANT_VERSION=master
rvm: 2.3.3
exclude:
- env: VAGRANT_VERSION=v2.0.4
rvm: 2.6.1
- env: VAGRANT_VERSION=v2.1.5
rvm: 2.6.1

README.md

@ -4,7 +4,7 @@
[![Build Status](https://travis-ci.org/vagrant-libvirt/vagrant-libvirt.svg)](https://travis-ci.org/vagrant-libvirt/vagrant-libvirt)
[![Coverage Status](https://coveralls.io/repos/github/vagrant-libvirt/vagrant-libvirt/badge.svg?branch=master)](https://coveralls.io/github/vagrant-libvirt/vagrant-libvirt?branch=master)
This is a [Vagrant](http://www.vagrantup.com) plugin that adds an
This is a [Vagrant](http://www.vagrantup.com) plugin that adds a
[Libvirt](http://libvirt.org) provider to Vagrant, allowing Vagrant to
control and provision machines via Libvirt toolkit.
@ -53,6 +53,8 @@ can help a lot :-)
- [Customized Graphics](#customized-graphics)
- [Box Format](#box-format)
- [Create Box](#create-box)
- [Package Box from VM](#package-box-from-vm)
- [Troubleshooting VMs](#troubleshooting-vms)
- [Development](#development)
- [Contributing](#contributing)
@ -83,27 +85,39 @@ can help a lot :-)
## Installation
First, you should have both qemu and libvirt installed if you plan to run VMs
on your local system. For instructions, refer to your linux distribution's
First, you should have both QEMU and Libvirt installed if you plan to run VMs
on your local system. For instructions, refer to your Linux distribution's
documentation.
**NOTE:** Before you start using Vagrant-libvirt, please make sure your libvirt
and qemu installation is working correctly and you are able to create qemu or
kvm type virtual machines with `virsh` or `virt-manager`.
**NOTE:** Before you start using vagrant-libvirt, please make sure your Libvirt
and QEMU installation is working correctly and you are able to create QEMU or
KVM type virtual machines with `virsh` or `virt-manager`.
Next, you must have [Vagrant
installed](http://docs.vagrantup.com/v2/installation/index.html).
Vagrant-libvirt supports Vagrant 1.5, 1.6, 1.7 and 1.8.
*We only test with the upstream version!* If you decide to install your distros
Vagrant-libvirt supports Vagrant 2.0, 2.1 & 2.2. It should also work with earlier
releases from 1.5 onwards but they are not actively tested.
Check the [.travis.yml](https://github.com/vagrant-libvirt/vagrant-libvirt/blob/master/.travis.yml)
for the current list of tested versions.
*We only test with the upstream version!* If you decide to install your distro's
version and you run into problems, as a first step you should switch to upstream.
Now you need to make sure you have all the build dependencies installed for
vagrant-libvirt. This depends on your distro. An overview:
* Ubuntu 12.04/14.04/16.04, Debian:
* Ubuntu 18.10, Debian 9 and up:
```shell
apt-get build-dep vagrant ruby-libvirt
apt-get install qemu libvirt-bin ebtables dnsmasq
apt-get install qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base
apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
```
* Ubuntu 18.04, Debian 8 and older:
```shell
apt-get build-dep vagrant ruby-libvirt
apt-get install qemu libvirt-bin ebtables dnsmasq-base
apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
```
@ -119,7 +133,12 @@ yum install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm
dnf -y install qemu libvirt libvirt-devel ruby-devel gcc
```
* Arch linux: please read the related [ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt) page.
* OpenSUSE leap 15.1:
```shell
zypper install qemu libvirt libvirt-devel ruby-devel gcc qemu-kvm
```
* Arch Linux: please read the related [ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt) page.
```shell
pacman -S vagrant
```
@ -147,7 +166,7 @@ $ sudo dnf install libxslt-devel libxml2-devel libvirt-devel \
libguestfs-tools-c ruby-devel gcc
```
On Arch linux it is recommended to follow [steps from ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt).
On Arch Linux it is recommended to follow [steps from ArchWiki](https://wiki.archlinux.org/index.php/Vagrant#vagrant-libvirt).
If you have problems with the installation, check your linker. It should be `ld.gold`:
@ -169,8 +188,8 @@ CONFIGURE_ARGS='with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/u
After installing the plugin (instructions above), the quickest way to get
started is to add a Libvirt box and specify all the details manually within a
`config.vm.provider` block. So first, add a Libvirt box using any name you want.
You can find more libvirt ready boxes at
[Atlas](https://atlas.hashicorp.com/boxes/search?provider=libvirt). For
You can find more Libvirt-ready boxes at
[Vagrant Cloud](https://app.vagrantup.com/boxes/search?provider=libvirt). For
example:
```shell
@ -210,7 +229,7 @@ export VAGRANT_DEFAULT_PROVIDER=libvirt
Vagrant goes through the steps below when creating a new project:
1. Connect to Libvirt localy or remotely via SSH.
1. Connect to Libvirt locally or remotely via SSH.
2. Check if the box image is available in a Libvirt storage pool. If not, upload it
to the remote Libvirt storage pool as a new volume.
3. Create a COW diff image of the base box image for the new Libvirt domain.
@ -227,22 +246,22 @@ Vagrant goes through steps below when creating new project:
Although it should work without any configuration for most people, this
provider exposes quite a few provider-specific configuration options. The
following options allow you to configure how vagrant-libvirt connects to
libvirt, and are used to generate the [libvirt connection
Libvirt, and are used to generate the [Libvirt connection
URI](http://libvirt.org/uri.html):
* `driver` - A hypervisor name to access. For now only kvm and qemu are
* `driver` - A hypervisor name to access. For now only KVM and QEMU are
supported
* `host` - The name of the server, where libvirtd is running
* `host` - The name of the server, where Libvirtd is running
* `connect_via_ssh` - Whether to use an SSH tunnel to connect to Libvirt. Absolutely
needed to access libvirt on remote host. It will not be able to get the IP
needed to access Libvirt on remote host. It will not be able to get the IP
address of a started VM otherwise.
* `username` - Username to access Libvirt
* `password` - Password to access Libvirt
* `id_ssh_key_file` - If not nil, uses this ssh private key to access Libvirt.
Default is `$HOME/.ssh/id_rsa`. Prepends `$HOME/.ssh/` if no directory
* `socket` - Path to the libvirt unix socket (e.g.
* `socket` - Path to the Libvirt unix socket (e.g.
`/var/run/libvirt/libvirt-sock`)
* `uri` - For advanced usage. Directly specifies what libvirt connection URI
* `uri` - For advanced usage. Directly specifies what Libvirt connection URI
vagrant-libvirt should use. Overrides all other connection configuration
options
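As a rough illustration of how these connection options combine (the hostname, username, and key file below are placeholders, not defaults):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Connect to a remote libvirtd over an SSH tunnel (hypothetical host/user/key).
    libvirt.driver          = "kvm"
    libvirt.host            = "virt-host.example.com"
    libvirt.connect_via_ssh = true
    libvirt.username        = "deploy"
    libvirt.id_ssh_key_file = "id_rsa"
  end
end
```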
@ -264,7 +283,7 @@ end
### Domain Specific Options
* `disk_bus` - The type of disk device to emulate. Defaults to virtio if not
set. Possible values are documented in libvirt's [description for
set. Possible values are documented in Libvirt's [description for
_target_](http://libvirt.org/formatdomain.html#elementsDisks). NOTE: this
option applies only to disks associated with a box image. To set the bus type
on additional disks, see the [Additional Disks](#additional-disks) section.
@ -275,7 +294,7 @@ end
* `nic_model_type` - Specifies the model of the network adapter used when a
domain is created. Defaults to virtio. For possible values, see the
[documentation for
libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
Libvirt](https://libvirt.org/formatdomain.html#elementsNICSModel).
* `memory` - Amount of memory in MBytes. Defaults to 512 if not set.
* `cpus` - Number of virtual cpus. Defaults to 1 if not set.
* `cputopology` - Number of CPU sockets, cores and threads running per core. All fields of `:sockets`, `:cores` and `:threads` are mandatory, `cpus` domain option must be present and must be equal to total count of **sockets * cores * threads**. For more details see [documentation](https://libvirt.org/formatdomain.html#elementsCPU).
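For instance, a sketch of a topology that satisfies the **sockets * cores * threads** rule above (the setter form shown is an assumption based on the field names; treat it as illustrative):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # 2 sockets x 2 cores x 1 thread = 4, so cpus must be set to 4.
    libvirt.cpus = 4
    libvirt.cputopology :sockets => '2', :cores => '2', :threads => '1'
  end
end
```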
@ -299,7 +318,7 @@ end
* `cpu_model` - CPU Model. Defaults to 'qemu64' if not set and `cpu_mode` is
`custom` and to '' otherwise. This can really only be used when setting
`cpu_mode` to `custom`.
* `cpu_fallback` - Whether to allow libvirt to fall back to a CPU model close
* `cpu_fallback` - Whether to allow Libvirt to fall back to a CPU model close
to the specified model if features in the guest CPU are not supported on the
host. Defaults to 'allow' if not set. Allowed values: `allow`, `forbid`.
* `numa_nodes` - Specify an array of NUMA nodes for the guest. The syntax is similar to what would be set in the domain XML. `memory` must be in MB. Symmetrical and asymmetrical topologies are supported but make sure your total count of defined CPUs adds up to `v.cpus`.
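A sketch of a symmetric two-node layout (the `:cpus`/`:memory` keys mirror the domain XML `cell` attributes and are an assumption here; memory is in MB):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Total CPUs (4) matches the CPUs spread across the two NUMA cells.
    libvirt.cpus = 4
    libvirt.numa_nodes = [
      { :cpus => "0-1", :memory => "1024" },
      { :cpus => "2-3", :memory => "1024" }
    ]
  end
end
```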
@ -315,7 +334,7 @@ end
* `loader` - Sets path to custom UEFI loader.
* `volume_cache` - Controls the cache mechanism. Possible values are "default",
"none", "writethrough", "writeback", "directsync" and "unsafe". [See
driver->cache in libvirt
driver->cache in Libvirt
documentation](http://libvirt.org/formatdomain.html#elementsDisks).
* `kernel` - To launch the guest with a kernel residing on host filesystems.
Equivalent to qemu `-kernel`.
@ -323,6 +342,9 @@ end
to qemu `-initrd`.
* `random_hostname` - To create a domain name with extra information on the end
to prevent hostname conflicts.
* `default_prefix` - The default Libvirt guest name becomes a concatenation of the
`<current_directory>_<guest_name>`. The current working directory is the default prefix
to the guest name. The `default_prefix` option allows you to set a different guest name prefix.
* `cmd_line` - Arguments passed on to the guest kernel initramfs or initrd to
use. Equivalent to qemu `-append`, only possible to use in combination with `initrd` and `kernel`.
* `graphics_type` - Sets the protocol used to expose the guest display.
@ -333,8 +355,8 @@ end
* `graphics_ip` - Sets the IP for the display protocol to bind to. Defaults to
"127.0.0.1".
* `graphics_passwd` - Sets the password for the display protocol. Working for
vnc and spice. by default working without passsword.
* `graphics_autoport` - Sets autoport for graphics, libvirt in this case
VNC and Spice. By default it works without a password.
* `graphics_autoport` - Sets autoport for graphics, Libvirt in this case
ignores the graphics_port value. Defaults to 'yes'. Possible values are "yes"
and "no".
* `keymap` - Sets the keymap for the VM. Default: en-us.
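A minimal sketch combining the graphics options above (the address and port are illustrative only):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Expose the guest display over VNC on a fixed, externally reachable port.
    libvirt.graphics_type     = "vnc"
    libvirt.graphics_ip       = "0.0.0.0"
    libvirt.graphics_port     = 5901
    libvirt.graphics_autoport = "no"   # otherwise Libvirt ignores graphics_port
    libvirt.keymap            = "en-us"
  end
end
```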
@ -351,8 +373,8 @@ end
Defaults to "ich6".
* `machine_type` - Sets machine type. Equivalent to qemu `-machine`. Use
`qemu-system-x86_64 -machine help` to get a list of supported machines.
* `machine_arch` - Sets machine architecture. This helps libvirt to determine
the correct emulator type. Possible values depend on your version of qemu.
* `machine_arch` - Sets machine architecture. This helps Libvirt to determine
the correct emulator type. Possible values depend on your version of QEMU.
For possible values, see which emulator executable `qemu-system-*` your
system provides. Common examples are `aarch64`, `alpha`, `arm`, `cris`,
`i386`, `lm32`, `m68k`, `microblaze`, `microblazeel`, `mips`, `mips64`,
@ -376,7 +398,7 @@ end
* `nic_adapter_count` - Defaults to '8'. Only use case for increasing this
count is for VMs that virtualize switches such as Cumulus Linux. Max value
for Cumulus Linux VMs is 33.
* `uuid` - Force a domain UUID. Defaults to autogenerated value by libvirt if
* `uuid` - Force a domain UUID. Defaults to autogenerated value by Libvirt if
not set.
* `suspend_mode` - What is done on vagrant suspend. Possible values: 'pause',
'managedsave'. Pause mode executes a la `virsh suspend`, which just pauses
@ -390,10 +412,10 @@ end
specified here.
* `autostart` - Automatically start the domain when the host boots. Defaults to
'false'.
* `channel` - [libvirt
* `channel` - [Libvirt
channels](https://libvirt.org/formatdomain.html#elementCharChannel).
Configure a private communication channel between the host and guest, e.g.
for use by the [qemu guest
for use by the [QEMU guest
agent](http://wiki.libvirt.org/page/Qemu_guest_agent) and the Spice/QXL
graphics type.
* `mgmt_attach` - Decide if VM has interface in mgmt network. If set to 'false'
@ -474,11 +496,11 @@ https://libvirt.org/formatdomain.html#elementsNICSTCP
http://libvirt.org/formatdomain.html#elementsNICSMulticast
http://libvirt.org/formatdomain.html#elementsNICSUDP _(in libvirt v1.2.20 and higher)_
http://libvirt.org/formatdomain.html#elementsNICSUDP _(in Libvirt v1.2.20 and higher)_
Public Network interfaces are currently implemented using the macvtap driver.
The macvtap driver is only available with the Linux Kernel version >= 2.6.24.
See the following libvirt documentation for the details of the macvtap usage.
See the following Libvirt documentation for the details of the macvtap usage.
http://www.libvirt.org/formatdomain.html#elementsNICSDirect
@ -547,7 +569,7 @@ In example below, one network interface is configured for VM `test_vm1`. After
you run `vagrant up`, VM will be accessible on IP address `10.20.30.40`. So if
you install a web server via provisioner, you will be able to access your
testing server on `http://10.20.30.40` URL. But beware that this address is
private to libvirt host only. It's not visible outside of the hypervisor box.
private to Libvirt host only. It's not visible outside of the hypervisor box.
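A minimal sketch matching this description (the IP and machine name are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :test_vm1 do |test_vm1|
    # The 10.20.30.0/24 network is created (and NATed) if it does not exist yet.
    test_vm1.vm.network :private_network, :ip => "10.20.30.40"
  end
end
```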
If the network `10.20.30.0/24` doesn't exist, the provider will create it. By default,
created networks are NATed to the outside world, so your VM will be able to connect
@ -564,11 +586,11 @@ reachable by anyone with access to the public network.
*Note: These options are not applicable to public network interfaces.*
There is a way to pass specific options for libvirt provider when using
There is a way to pass specific options for Libvirt provider when using
`config.vm.network` to configure new network interface. Each parameter name
starts with `libvirt__` string. Here is a list of those options:
* `:libvirt__network_name` - Name of libvirt network to connect to. By default,
* `:libvirt__network_name` - Name of Libvirt network to connect to. By default,
network 'default' is used.
* `:libvirt__netmask` - Used only together with `:ip` option. Default is
'255.255.255.0'.
@ -607,7 +629,7 @@ starts with `libvirt__` string. Here is a list of those options:
between Guests. Useful for Switch VMs like Cumulus Linux. No virtual switch
setting like `libvirt__network_name` applies with tunnel interfaces and will
be ignored if configured.
* `:libvirt__tunnel_ip` - Sets the source IP of the libvirt tunnel interface.
* `:libvirt__tunnel_ip` - Sets the source IP of the Libvirt tunnel interface.
By default this is `127.0.0.1` for TCP and UDP tunnels and `239.255.1.1` for
Multicast tunnels. It populates the address field in the `<source
address="XXX">` of the interface xml configuration.
@ -617,11 +639,11 @@ starts with `libvirt__` string. Here is a list of those options:
* `:libvirt__tunnel_local_port` - Sets the local port used by the udp tunnel
interface type. It populates the port field in the `<local port=XXX">`
section of the interface xml configuration. _(This feature only works in
libvirt 1.2.20 and higher)_
Libvirt 1.2.20 and higher)_
* `:libvirt__tunnel_local_ip` - Sets the local IP used by the udp tunnel
interface type. It populates the ip entry of the `<local address=XXX">`
section of the interface xml configuration. _(This feature only works in
libvirt 1.2.20 and higher)_
Libvirt 1.2.20 and higher)_
* `:libvirt__guest_ipv6` - Enable or disable guest-to-guest IPv6 communication.
See [here](https://libvirt.org/formatnetwork.html#examplesPrivate6), and
[here](http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=705e67d40b09a905cd6a4b8b418d5cb94eaa95a8)
@ -633,18 +655,18 @@ starts with `libvirt__` string. Here is a list of those options:
failures](https://github.com/vagrant-libvirt/vagrant-libvirt/pull/498)
* `:mac` - MAC address for the interface. *Note: specify this in lowercase
since Vagrant network scripts assume it will be!*
* `:libvirt__mtu` - MTU size for the libvirt network, if not defined, the
created network will use the libvirt default (1500). VMs still need to set the
* `:libvirt__mtu` - MTU size for the Libvirt network, if not defined, the
created network will use the Libvirt default (1500). VMs still need to set the
MTU accordingly.
* `:model_type` - Specifies the model of the network adapter used when a domain
is created. Defaults to virtio. For possible values, see the
documentation for libvirt
documentation for Libvirt
* `:libvirt__driver_name` - Define which network driver to use. [More
info](https://libvirt.org/formatdomain.html#elementsDriverBackendOptions)
* `:libvirt__driver_queues` - Define a number of queues to be used for network
interface. Set equal to the number of vCPUs for best performance. [More
info](http://www.linux-kvm.org/page/Multiqueue)
* `:autostart` - Automatic startup of network by the libvirt daemon.
* `:autostart` - Automatic startup of network by the Libvirt daemon.
If not specified the default is 'false'.
* `:bus` - The bus of the PCI device. Both :bus and :slot have to be defined.
* `:slot` - The slot of the PCI device. Both :bus and :slot have to be defined.
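To show how a few of these `libvirt__` options attach to a `config.vm.network` call (all values here are illustrative):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define :test_vm1 do |test_vm1|
    test_vm1.vm.network :private_network,
      :ip => "10.20.30.40",
      :libvirt__network_name => "test_net",       # use a dedicated Libvirt network
      :libvirt__netmask      => "255.255.255.0",
      :libvirt__mtu          => 9000,             # guests still need to set the MTU themselves
      :autostart             => true
  end
end
```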
@ -665,8 +687,8 @@ virtual network.
Default mode is 'bridge'.
* `:type` - The type of the interface (`<interface type="#{@type}">`).
* `:mac` - MAC address for the interface.
* `:network_name` - Name of libvirt network to connect to.
* `:portgroup` - Name of libvirt portgroup to connect to.
* `:network_name` - Name of Libvirt network to connect to.
* `:portgroup` - Name of Libvirt portgroup to connect to.
* `:ovs` - Support to connect to an Open vSwitch bridge device. Default is
'false'.
* `:trust_guest_rx_filters` - Support trustGuestRxFilters attribute. Details
@ -677,17 +699,17 @@ virtual network.
vagrant-libvirt uses a private network to perform some management operations on
VMs. All VMs will have an interface connected to this network and an IP address
dynamically assigned by libvirt unless you set `:mgmt_attach` to 'false'.
dynamically assigned by Libvirt unless you set `:mgmt_attach` to 'false'.
This is in addition to any networks you configure. The name and address
used by this network are configurable at the provider level.
* `management_network_name` - Name of libvirt network to which all VMs will be
* `management_network_name` - Name of Libvirt network to which all VMs will be
connected. If not specified the default is 'vagrant-libvirt'.
* `management_network_address` - Address of network to which all VMs will be
connected. Must include the address and subnet mask. If not specified the
default is '192.168.121.0/24'.
* `management_network_mode` - Network mode for the libvirt management network.
Specify one of veryisolated, none, nat or route options. Further documentated
* `management_network_mode` - Network mode for the Libvirt management network.
Specify one of veryisolated, none, nat or route options. Further documented
under [Private Networks](#private-network-options)
* `management_network_guest_ipv6` - Enable or disable guest-to-guest IPv6
communication. See
@ -696,9 +718,10 @@ used by this network are configurable at the provider level.
for more information.
* `management_network_autostart` - Automatic startup of mgmt network, if not
specified the default is 'false'.
* `:management_network_pci_bus` - The bus of the PCI device.
* `:management_network_pci_slot` - The slot of the PCI device.
* `management_network_pci_bus` - The bus of the PCI device.
* `management_network_pci_slot` - The slot of the PCI device.
* `management_network_mac` - MAC address of management network interface.
* `management_network_domain` - Domain name assigned to the management network.
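A hedged sketch of overriding the management network defaults at the provider level (the names and address range are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Replace the default 'vagrant-libvirt' / 192.168.121.0/24 management network.
    libvirt.management_network_name    = "vagrant-mgmt"
    libvirt.management_network_address = "192.168.150.0/24"
    libvirt.management_network_mode    = "nat"
    libvirt.management_network_domain  = "mgmt.local"
  end
end
```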
You may wonder how vagrant-libvirt knows the IP address a VM received. Libvirt
doesn't provide a standard way to find out the IP address of a running domain.
@ -889,8 +912,8 @@ Bus 001 Device 002: ID 1234:abcd Example device
Additionally, the following options can be used:
* `startupPolicy` - Is passed through to libvirt and controls if the device has
to exist. libvirt currently allows the following values: "mandatory",
* `startupPolicy` - Is passed through to Libvirt and controls if the device has
to exist. Libvirt currently allows the following values: "mandatory",
"requisite", "optional".
@ -984,7 +1007,7 @@ The optional action attribute describes what `action` to take when the watchdog
```ruby
Vagrant.configure("2") do |config|
config.vm.provider :libvirt do |libvirt|
# Add libvirt watchdog device model i6300esb
# Add Libvirt watchdog device model i6300esb
libvirt.watchdog :model => 'i6300esb', :action => 'reset'
end
end
@ -1044,7 +1067,7 @@ running Microsoft Windows.
You can specify HyperV features via `libvirt.hyperv_feature`. Available
options are listed below. Note that both options are required:
* `name` - The name of the feature Hypervisor feature (see libvirt doc)
* `name` - The name of the Hypervisor feature (see the Libvirt docs)
* `state` - The state for this feature which can be either `on` or `off`.
```ruby
@ -1063,10 +1086,10 @@ end
You can specify CPU feature policies via `libvirt.cpu_feature`. Available
options are listed below. Note that both options are required:
* `name` - The name of the feature for the chosen CPU (see libvirts
* `name` - The name of the feature for the chosen CPU (see Libvirt's
`cpu_map.xml`)
* `policy` - The policy for this feature (one of `force`, `require`,
`optional`, `disable` and `forbid` - see libvirt documentation)
`optional`, `disable` and `forbid` - see Libvirt documentation)
```ruby
Vagrant.configure("2") do |config|
@ -1221,12 +1244,19 @@ mounting them at boot.
Further documentation on using 9p can be found in [kernel docs](https://www.kernel.org/doc/Documentation/filesystems/9p.txt) and in [QEMU wiki](https://wiki.qemu.org/Documentation/9psetup#Starting_the_Guest_directly). Please do note that 9p depends on support in the guest and not all distros come with the 9p module by default.
**SECURITY NOTE:** for remote libvirt, nfs synced folders requires a bridged
public network interface and you must connect to libvirt via ssh.
**SECURITY NOTE:** for remote Libvirt, NFS synced folders require a bridged
public network interface and you must connect to Libvirt via SSH.
## QEMU Session Support
vagrant-libvirt supports using the QEMU session connection to maintain Vagrant VMs. As the session connection does not have root access to the system features which require root will not work. Access to networks created by the system QEMU connection can be granted by using the [QEMU bridge helper](https://wiki.qemu.org/Features/HelperNetworking). The bridge helper is enabled by default on some distros but may need to be enabled/installed on others.
vagrant-libvirt supports using QEMU user sessions to maintain Vagrant VMs. As the session connection does not have root access to the system, features which require root will not work. Access to networks created by the system QEMU connection can be granted by using the [QEMU bridge helper](https://wiki.qemu.org/Features/HelperNetworking). The bridge helper is enabled by default on some distros but may need to be enabled/installed on others.
There must be a virbr network defined in the QEMU system session. This can be the Libvirt `default` network which comes out of the box, the `vagrant-libvirt` network which is generated if you run a Vagrantfile using the system session, or a manually defined network. These networks can be set to autostart with `sudo virsh net-autostart <net-name>`, which means no further root access is required, even after reboots.
The QEMU bridge helper is configured via `/etc/qemu/bridge.conf`. This file must include the virbr you wish to use (e.g. virbr0, virbr1, etc). You can find this out via `sudo virsh net-dumpxml <net-name>`.
```
allow virbr0
```
An example configuration of a machine using the QEMU session connection:
@ -1237,11 +1267,11 @@ Vagrant.configure("2") do |config|
libvirt.qemu_use_session = true
# URI of QEMU session connection, default is as below
libvirt.uri = 'qemu:///session'
# URI of QEMU system connection, use to obtain IP address for management
# URI of QEMU system connection, used to obtain the IP address for management; default is below
libvirt.system_uri = 'qemu:///system'
# Path to store libvirt images for the virtual machine, default is as ~/.local/share/libvirt/images
# Path to store Libvirt images for the virtual machine, default is ~/.local/share/libvirt/images
libvirt.storage_pool_path = '/home/user/.local/share/libvirt/images'
# Management network device
# Management network device, default is below
libvirt.management_network_device = 'virbr0'
end
@ -1302,7 +1332,7 @@ end
For certain functionality to be available within a guest, a private
communication channel must be established with the host. Two notable examples
of this are the qemu guest agent, and the Spice/QXL graphics type.
of this are the QEMU guest agent, and the Spice/QXL graphics type.
Below is a simple example which exposes a virtio serial channel to the guest.
Note: in a multi-VM environment, the channel would be created for all VMs.
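A sketch of what such a channel definition typically looks like, here targeting the QEMU guest agent (the exact target name is an assumption for illustration):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Virtio serial channel used by the QEMU guest agent inside the guest.
    libvirt.channel :type => 'unix',
                    :target_name => 'org.qemu.guest_agent.0',
                    :target_type => 'virtio'
  end
end
```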
@ -1328,7 +1358,7 @@ end
These settings can be specified on a per-VM basis, however the per-guest
settings will OVERRIDE any global 'config' setting. In the following example,
we create 3 VM with the following configuration:
we create 3 VMs with the following configuration:
* **master**: No channel settings specified, so we default to the provider
setting of a single virtio guest agent channel.
@ -1416,6 +1446,65 @@ $ cd packer-qemu-templates
$ packer build ubuntu-14.04-server-amd64-vagrant.json
```
## Package Box from VM
vagrant-libvirt has native support for [`vagrant
package`](https://www.vagrantup.com/docs/cli/package.html) via
libguestfs [virt-sysprep](http://libguestfs.org/virt-sysprep.1.html).
virt-sysprep operations can be customized via the
`VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS` environment variable; see the
[upstream
documentation](http://libguestfs.org/virt-sysprep.1.html#operations) for
further details especially on default sysprep operations enabled for
your system.
For example, Chef [bento](https://github.com/chef/bento) VMs that require
SSH host keys to be kept (e.g. bento/debian-7), and that need existing LVM
UUIDs left untouched (e.g. bento/ubuntu-18.04), can be packaged into
vagrant-libvirt boxes like so:
```shell
$ export VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS="defaults,-ssh-userdir,-ssh-hostkeys,-lvm-uuids"
$ vagrant package
```
## Troubleshooting VMs
The first step for troubleshooting a VM image that appears to not boot correctly,
or hangs waiting to get an IP, is to check it with a VNC viewer. A key thing
to remember is that if the VM doesn't get an IP, Vagrant can't communicate with
it to configure anything, so a problem at this stage is likely to come from the
VM itself. Below we outline the tools and common problems to help you
troubleshoot that.
By default, when you create a new VM, a VNC server will listen on `127.0.0.1`,
port `5900/tcp`. If you connect with a VNC viewer you can see the boot process. If
your VM isn't listening on `5900` by default, you can use `virsh dumpxml` to find
out which port it's listening on, or you can configure it with `graphics_port` and
`graphics_ip` (see 'Domain Specific Options' above).
Note: Connecting with the console (`virsh console`) requires additional configuration,
so some VMs may not show anything on the text console at all and only display output
in the VNC console. For the text console to work, the image must also be built to
tell the kernel to output to the console during boot, and most images do not have
this built in.
Problems we've seen in the past include:
- Forgetting to remove `/etc/udev/rules.d/70-persistent-net.rules` before packaging
the VM
- VMs expecting a specific disk device to be connected
If you're still stuck, check the GitHub issues for this repo for anything that
looks similar to your problem.
[Github Issue #1032](https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1032)
contains some historical troubleshooting for VMs that appeared
to hang.
Did you hit a problem that you'd like to note here to save time in the future?
Please do!
## Development
To work on the `vagrant-libvirt` plugin, clone this repository out, and use


@ -2,7 +2,7 @@
Vagrant providers each require a custom provider-specific box format.
This folder shows the example contents of a box for the `libvirt` provider.
To turn this into a box create a vagrant image according documentation (don't
To turn this into a box, create a Vagrant image according to the documentation (don't
forget to install the rsync command) and create the box with the following command:
```


@ -27,18 +27,18 @@ Vagrant.configure("2") do |config|
#test_vm.vm.network :public_network, :ip => '10.20.30.41'
#end
# Options for libvirt vagrant provider.
# Options for Libvirt Vagrant provider.
config.vm.provider :libvirt do |libvirt|
# A hypervisor name to access. Different drivers can be specified, but
# this version of provider creates KVM machines only. Some examples of
# drivers are kvm (qemu hardware accelerated), qemu (qemu emulated),
# xen (Xen hypervisor), lxc (Linux Containers),
# drivers are KVM (QEMU hardware accelerated), QEMU (QEMU emulated),
# Xen (Xen hypervisor), lxc (Linux Containers),
# esx (VMware ESX), vmwarews (VMware Workstation) and more. Refer to
# documentation for available drivers (http://libvirt.org/drivers.html).
libvirt.driver = "kvm"
# The name of the server, where libvirtd is running.
# The name of the server, where Libvirtd is running.
# libvirt.host = "localhost"
# If use ssh tunnel to connect to Libvirt.


@ -8,7 +8,7 @@ module VagrantPlugins
include Vagrant::Action::Builtin
@logger = Log4r::Logger.new('vagrant_libvirt::action')
# remove image from libvirt storage pool
# remove image from Libvirt storage pool
def self.remove_libvirt_image
Vagrant::Action::Builder.new.tap do |b|
b.use RemoveLibvirtImage


@ -81,6 +81,7 @@ module VagrantPlugins
# Storage
@storage_pool_name = config.storage_pool_name
@snapshot_pool_name = config.snapshot_pool_name
@disks = config.disks
@cdroms = config.cdroms
@ -119,9 +120,15 @@ module VagrantPlugins
# Get path to domain image from the storage pool selected if we have a box.
if env[:machine].config.vm.box
if @snapshot_pool_name != @storage_pool_name
pool_name = @snapshot_pool_name
else
pool_name = @storage_pool_name
end
@logger.debug "Search for volume in pool: #{pool_name}"
actual_volumes =
env[:machine].provider.driver.connection.volumes.all.select do |x|
x.pool_name == @storage_pool_name
x.pool_name == pool_name
end
domain_volume = ProviderLibvirt::Util::Collection.find_matching(
actual_volumes, "#{@name}.img"
@ -155,9 +162,6 @@ module VagrantPlugins
disk[:absolute_path] = storage_prefix + disk[:path]
if env[:machine].provider.driver.connection.volumes.select do |x|
x.name == disk[:name] && x.pool_name == @storage_pool_name
end.empty?
# make the disk. equivalent to:
# qemu-img create -f qcow2 <path> 5g
begin
@ -169,12 +173,17 @@ module VagrantPlugins
#:allocation => ?,
pool_name: @storage_pool_name
)
rescue Fog::Errors::Error => e
rescue Libvirt::Error => e
# It is hard to believe that e contains just a string
# and no useful error code!
msg = "Call to virStorageVolCreateXML failed: " +
"storage volume '#{disk[:path]}' exists already"
if e.message == msg and disk[:allow_existing]
disk[:preexisting] = true
else
raise Errors::FogDomainVolumeCreateError,
error_message: e.message
end
else
disk[:preexisting] = true
end
end
@ -316,7 +325,7 @@ module VagrantPlugins
env[:ui].info(" -- Command line : #{@cmd_line}") unless @cmd_line.empty?
# Create libvirt domain.
# Create Libvirt domain.
# Is there a way to tell fog to create new domain with already
# existing volume? Use domain creation from template..
begin


@ -71,9 +71,15 @@ module VagrantPlugins
Nokogiri::XML::Node::SaveOptions::NO_EMPTY_TAGS |
Nokogiri::XML::Node::SaveOptions::FORMAT
)
if config.snapshot_pool_name != config.storage_pool_name
pool_name = config.snapshot_pool_name
else
pool_name = config.storage_pool_name
end
@logger.debug "Using pool #{pool_name} for base box snapshot"
domain_volume = env[:machine].provider.driver.connection.volumes.create(
xml: xml,
pool_name: config.storage_pool_name
pool_name: pool_name
)
rescue Fog::Errors::Error => e
raise Errors::FogDomainVolumeCreateError,


@ -219,10 +219,12 @@ module VagrantPlugins
networks_to_configure << network
end
unless networks_to_configure.empty?
env[:ui].info I18n.t('vagrant.actions.vm.network.configuring')
env[:machine].guest.capability(
:configure_networks, networks_to_configure
)
end
end
end
@ -281,7 +283,7 @@ module VagrantPlugins
return options[:network_name]
end
# Get list of all (active and inactive) libvirt networks.
# Get list of all (active and inactive) Libvirt networks.
available_networks = libvirt_networks(libvirt_client)
return 'public' if options[:iface_type] == :public_network


@ -47,9 +47,9 @@ module VagrantPlugins
# should fix other methods so this doesn't have to be instance var
@options = options
# Get a list of all (active and inactive) libvirt networks. This
# Get a list of all (active and inactive) Libvirt networks. This
# list is used throughout this class and should be easier to
# process than libvirt API calls.
# process than Libvirt API calls.
@available_networks = libvirt_networks(
env[:machine].provider.driver.connection.client
)


@ -14,7 +14,7 @@ module VagrantPlugins
env[:ui].info(I18n.t('vagrant_libvirt.destroy_domain'))
# Must delete any snapshots before domain can be destroyed
# Fog libvirt currently doesn't support snapshots. Use
# Fog Libvirt currently doesn't support snapshots. Use
# ruby-libvirt client directly. Note this is racy, see
# http://www.libvirt.org/html/libvirt-libvirt.html#virDomainSnapshotListNames
libvirt_domain = env[:machine].provider.driver.connection.client.lookup_domain_by_uuid(


@ -54,7 +54,7 @@ module VagrantPlugins
ssh_pid = redirect_port(
@env[:machine],
fp[:host_ip] || 'localhost',
fp[:host_ip] || '*',
fp[:host],
fp[:guest_ip] || @env[:machine].provider.ssh_info[:host],
fp[:guest],
@ -97,6 +97,7 @@ module VagrantPlugins
User=#{ssh_info[:username]}
Port=#{ssh_info[:port]}
UserKnownHostsFile=/dev/null
ExitOnForwardFailure=yes
StrictHostKeyChecking=no
PasswordAuthentication=no
ForwardX11=#{ssh_info[:forward_x11] ? 'yes' : 'no'}
@ -210,9 +211,9 @@ module VagrantPlugins
end
def ssh_pid?(pid)
@logger.debug 'Checking if #{pid} is an ssh process '\
'with `ps -o cmd= #{pid}`'
`ps -o cmd= #{pid}`.strip.chomp =~ /ssh/
@logger.debug "Checking if #{pid} is an ssh process "\
"with `ps -o command= #{pid}`"
`ps -o command= #{pid}`.strip.chomp =~ /ssh/
end
def remove_ssh_pids


@ -19,7 +19,7 @@ module VagrantPlugins
begin
env[:machine].guest.capability(:halt)
rescue
@logger.info('Trying libvirt graceful shutdown.')
@logger.info('Trying Libvirt graceful shutdown.')
domain.shutdown
end


@ -4,7 +4,6 @@ module VagrantPlugins
module ProviderLibvirt
module Action
class HandleBoxImage
include VagrantPlugins::ProviderLibvirt::Util::ErbTemplate
include VagrantPlugins::ProviderLibvirt::Util::StorageUtil
@ -81,41 +80,20 @@ module VagrantPlugins
message << " in storage pool #{config.storage_pool_name}."
@logger.info(message)
if config.qemu_use_session
begin
@name = env[:box_volume_name]
@allocation = "#{box_image_size / 1024 / 1024}M"
@capacity = "#{box_virtual_size}G"
@format_type = box_format ? box_format : 'raw'
@storage_volume_uid = storage_uid env
@storage_volume_gid = storage_gid env
libvirt_client = env[:machine].provider.driver.connection.client
libvirt_pool = libvirt_client.lookup_storage_pool_by_name(
config.storage_pool_name
)
libvirt_volume = libvirt_pool.create_volume_xml(
to_xml('default_storage_volume')
)
rescue => e
raise Errors::CreatingVolumeError,
error_message: e.message
end
else
begin
fog_volume = env[:machine].provider.driver.connection.volumes.create(
name: env[:box_volume_name],
allocation: "#{box_image_size / 1024 / 1024}M",
capacity: "#{box_virtual_size}G",
format_type: box_format,
owner: storage_uid env,
group: storage_gid env,
pool_name: config.storage_pool_name
)
rescue Fog::Errors::Error => e
raise Errors::FogCreateVolumeError,
error_message: e.message
end
end
# Upload box image to storage pool
ret = upload_image(box_image_file, config.storage_pool_name,
@ -132,11 +110,7 @@ module VagrantPlugins
# storage pool.
if env[:interrupted] || !ret
begin
if config.qemu_use_session
libvirt_volume.delete
else
fog_volume.destroy
end
rescue
nil
end
@ -146,22 +120,9 @@ module VagrantPlugins
@app.call(env)
end
def split_size_unit(text)
if text.kind_of? Integer
# if text is an integer, match will fail
size = text
unit = 'G'
else
matcher = text.match(/(\d+)(.+)/)
size = matcher[1]
unit = matcher[2]
end
[size, unit]
end
protected
# Fog libvirt currently doesn't support uploading images to storage
# Fog Libvirt currently doesn't support uploading images to storage
# pool volumes. Use ruby-libvirt client instead.
def upload_image(image_file, pool_name, volume_name, env)
image_size = File.size(image_file) # B


@ -36,7 +36,7 @@ module VagrantPlugins
@logger.info("Creating storage pool 'default'")
# Fog libvirt currently doesn't support creating pools. Use
# Fog Libvirt currently doesn't support creating pools. Use
# ruby-libvirt client directly.
begin
@storage_pool_path = storage_pool_path(env)


@ -3,7 +3,7 @@ require 'log4r'
module VagrantPlugins
module ProviderLibvirt
module Action
# Action for create new box for libvirt provider
# Action for create new box for Libvirt provider
class PackageDomain
def initialize(app, env)
@logger = Log4r::Logger.new('vagrant_libvirt::action::package_domain')
@ -38,7 +38,8 @@ module VagrantPlugins
`qemu-img rebase -p -b "" #{@tmp_img}`
# remove hw association with interface
# working for centos with lvs default disks
`virt-sysprep --no-logfile --operations defaults,-ssh-userdir -a #{@tmp_img}`
operations = ENV.fetch('VAGRANT_LIBVIRT_VIRT_SYSPREP_OPERATIONS', 'defaults,-ssh-userdir')
`virt-sysprep --no-logfile --operations #{operations} -a #{@tmp_img}`
# add any user provided file
extra = ''
@tmp_include = @tmp_dir + '/_include'


@ -79,7 +79,7 @@ module VagrantPlugins
# Check if we can open a connection to the host
def ping(host, timeout = 3)
::Timeout.timeout(timeout) do
s = TCPSocket.new(host, 'echo')
s = TCPSocket.new(host, 'ssh')
s.close
end
true


@ -10,8 +10,8 @@ module VagrantPlugins
end
def call(env)
env[:ui].info('Vagrant-libvirt plugin removed box only from you LOCAL ~/.vagrant/boxes directory')
env[:ui].info('From libvirt storage pool you have to delete image manually(virsh, virt-manager or by any other tool)')
env[:ui].info('Vagrant-libvirt plugin removed box only from your LOCAL ~/.vagrant/boxes directory')
env[:ui].info('From Libvirt storage pool you have to delete image manually(virsh, virt-manager or by any other tool)')
@app.call(env)
end
end


@ -33,7 +33,7 @@ module VagrantPlugins
if @boot_order.count >= 1
# If a domain is initially defined with no box or disk or
# with an explicit boot order, libvirt adds <boot dev="foo">
# with an explicit boot order, Libvirt adds <boot dev="foo">
# This conflicts with an explicit boot_order configuration,
# so we need to remove it from the domain xml and feed it back.
# Also see https://bugzilla.redhat.com/show_bug.cgi?id=1248514
@ -66,7 +66,7 @@ module VagrantPlugins
logger_msg(node, index)
end
# Finally redefine the domain XML through libvirt
# Finally redefine the domain XML through Libvirt
# to apply the boot ordering
env[:machine].provider
.driver


@ -41,7 +41,7 @@ module VagrantPlugins
# parsable and sortable by epoch time
# @example
# development-centos-6-chef-11_1404488971_3b7a569e2fd7c554b852
# @return [String] libvirt domain name
# @return [String] Libvirt domain name
def build_domain_name(env)
config = env[:machine].provider_config
domain_name =
@ -51,7 +51,7 @@ module VagrantPlugins
# don't have any prefix, not even "_"
''
else
config.default_prefix.to_s.dup.concat('_')
config.default_prefix.to_s.dup
end
domain_name << env[:machine].name.to_s
domain_name.gsub!(/[^-a-z0-9_\.]/i, '')


@ -23,7 +23,7 @@ module VagrantPlugins
libvirt_domain = env[:machine].provider.driver.connection.client.lookup_domain_by_uuid(env[:machine].id)
# libvirt API doesn't support modifying memory on NUMA enabled CPUs
# Libvirt API doesn't support modifying memory on NUMA enabled CPUs
# http://libvirt.org/git/?p=libvirt.git;a=commit;h=d174394105cf00ed266bf729ddf461c21637c736
if config.numa_nodes == nil
if config.memory.to_i * 1024 != libvirt_domain.max_memory


@ -28,7 +28,7 @@ module VagrantPlugins
end
# Wait for domain to obtain an ip address. Ip address is searched
# from arp table, either localy or remotely via ssh, if libvirt
# from arp table, either locally or remotely via ssh, if Libvirt
# connection was done via ssh.
env[:ip_address] = nil
@logger.debug("Searching for IP for MAC address: #{domain.mac}")


@ -20,7 +20,7 @@ module VagrantPlugins
end
def usable?(machine, _raise_error = false)
# bail now if not using libvirt since checking version would throw error
# bail now if not using Libvirt since checking version would throw error
return false unless machine.provider_name == :libvirt
# <filesystem/> support in device attach/detach introduced in 1.2.2
@ -30,7 +30,7 @@ module VagrantPlugins
end
def prepare(machine, folders, _opts)
raise Vagrant::Errors::Error('No libvirt connection') if machine.provider.driver.connection.nil?
raise Vagrant::Errors::Error('No Libvirt connection') if machine.provider.driver.connection.nil?
@conn = machine.provider.driver.connection.client
begin
@ -89,7 +89,7 @@ module VagrantPlugins
def cleanup(machine, _opts)
if machine.provider.driver.connection.nil?
raise Vagrant::Errors::Error('No libvirt connection')
raise Vagrant::Errors::Error('No Libvirt connection')
end
@conn = machine.provider.driver.connection.client
begin


@ -20,12 +20,12 @@ module VagrantPlugins
# A hypervisor name to access via Libvirt.
attr_accessor :driver
# The name of the server, where libvirtd is running.
# The name of the server, where Libvirtd is running.
attr_accessor :host
# If use ssh tunnel to connect to Libvirt.
attr_accessor :connect_via_ssh
# Path towards the libvirt socket
# Path towards the Libvirt socket
attr_accessor :socket
# The username to access Libvirt.
@ -42,6 +42,9 @@ module VagrantPlugins
attr_accessor :storage_pool_name
attr_accessor :storage_pool_path
# Libvirt storage pool where the base image snapshot shall be stored
attr_accessor :snapshot_pool_name
# Turn on to prevent hostname conflicts
attr_accessor :random_hostname
@ -55,6 +58,7 @@ module VagrantPlugins
attr_accessor :management_network_autostart
attr_accessor :management_network_pci_bus
attr_accessor :management_network_pci_slot
attr_accessor :management_network_domain
# System connection information
attr_accessor :system_uri
@ -158,7 +162,7 @@ module VagrantPlugins
# Additional qemuargs arguments
attr_accessor :qemu_args
# Use qemu session instead of system
# Use QEMU session instead of system
attr_accessor :qemu_use_session
def initialize
@ -170,6 +174,7 @@ module VagrantPlugins
@password = UNSET_VALUE
@id_ssh_key_file = UNSET_VALUE
@storage_pool_name = UNSET_VALUE
@snapshot_pool_name = UNSET_VALUE
@random_hostname = UNSET_VALUE
@management_network_device = UNSET_VALUE
@management_network_name = UNSET_VALUE
@ -180,6 +185,7 @@ module VagrantPlugins
@management_network_autostart = UNSET_VALUE
@management_network_pci_slot = UNSET_VALUE
@management_network_pci_bus = UNSET_VALUE
@management_network_domain = UNSET_VALUE
# System connection information
@system_uri = UNSET_VALUE
@ -305,7 +311,7 @@ module VagrantPlugins
end
end
# is it better to raise our own error, or let libvirt cause the exception?
# is it better to raise our own error, or let Libvirt cause the exception?
raise 'Only four cdroms may be attached at a time'
end
@ -582,7 +588,7 @@ module VagrantPlugins
# code to generate URI from a config moved out of the connect action
def _generate_uri
# builds the libvirt connection URI from the given driver config
# builds the Libvirt connection URI from the given driver config
# Setup connection uri.
uri = @driver.dup
virt_path = case uri
@ -598,7 +604,7 @@ module VagrantPlugins
raise "Require specify driver #{uri}"
end
if uri == 'kvm'
uri = 'qemu' # use qemu uri for kvm domain type
uri = 'qemu' # use QEMU uri for KVM domain type
end
if @connect_via_ssh
@ -619,13 +625,13 @@ module VagrantPlugins
uri << '?no_verify=1'
if @id_ssh_key_file
# set ssh key for access to libvirt host
# set ssh key for access to Libvirt host
uri << "\&keyfile="
# if no slash, prepend $HOME/.ssh/
@id_ssh_key_file.prepend("#{`echo ${HOME}`.chomp}/.ssh/") if @id_ssh_key_file !~ /\A\//
uri << @id_ssh_key_file
end
# set path to libvirt socket
# set path to Libvirt socket
uri << "\&socket=" + @socket if @socket
uri
end
@ -638,6 +644,7 @@ module VagrantPlugins
@password = nil if @password == UNSET_VALUE
@id_ssh_key_file = 'id_rsa' if @id_ssh_key_file == UNSET_VALUE
@storage_pool_name = 'default' if @storage_pool_name == UNSET_VALUE
@snapshot_pool_name = @storage_pool_name if @snapshot_pool_name == UNSET_VALUE
@storage_pool_path = nil if @storage_pool_path == UNSET_VALUE
@random_hostname = false if @random_hostname == UNSET_VALUE
@management_network_device = 'virbr0' if @management_network_device == UNSET_VALUE
@ -649,6 +656,7 @@ module VagrantPlugins
@management_network_autostart = false if @management_network_autostart == UNSET_VALUE
@management_network_pci_bus = nil if @management_network_pci_bus == UNSET_VALUE
@management_network_pci_slot = nil if @management_network_pci_slot == UNSET_VALUE
@management_network_domain = nil if @management_network_domain == UNSET_VALUE
@system_uri = 'qemu:///system' if @system_uri == UNSET_VALUE
@qemu_use_session = false if @qemu_use_session == UNSET_VALUE


@ -19,11 +19,11 @@ module VagrantPlugins
end
def connection
# If already connected to libvirt, just use it and don't connect
# If already connected to Libvirt, just use it and don't connect
# again.
return @@connection if @@connection
# Get config options for libvirt provider.
# Get config options for Libvirt provider.
config = @machine.provider_config
uri = config.uri
@ -50,7 +50,7 @@ module VagrantPlugins
end
def system_connection
# If already connected to libvirt, just use it and don't connect
# If already connected to Libvirt, just use it and don't connect
# again.
return @@system_connection if @@system_connection


@ -29,10 +29,6 @@ module VagrantPlugins
error_key(:creating_storage_pool_error)
end
class CreatingVolumeError < VagrantLibvirtError
error_key(:creating_volume_error)
end
class ImageUploadError < VagrantLibvirtError
error_key(:image_upload_error)
end
@ -54,7 +50,7 @@ module VagrantPlugins
error_key(:wrong_box_format)
end
# Fog libvirt exceptions
# Fog Libvirt exceptions
class FogError < VagrantLibvirtError
error_key(:fog_error)
end


@ -4,7 +4,7 @@ rescue LoadError
raise 'The Vagrant Libvirt plugin must be run within Vagrant.'
end
# compatibility fix to define constant not available vagrant <1.6
# compatibility fix to define constant not available Vagrant <1.6
::Vagrant::MachineState::NOT_CREATED_ID ||= :not_created
module VagrantPlugins
@ -12,7 +12,7 @@ module VagrantPlugins
class Plugin < Vagrant.plugin('2')
name 'libvirt'
description <<-DESC
Vagrant plugin to manage VMs in libvirt.
Vagrant plugin to manage VMs in Libvirt.
DESC
config('libvirt', :provider) do


@ -1,14 +0,0 @@
<volume>
<name><%= @name %></name>
<allocation unit="<%= split_size_unit(@allocation)[1] %>"><%= split_size_unit(@allocation)[0] %></allocation>
<capacity unit="<%= split_size_unit(@capacity)[1] %>"><%= split_size_unit(@capacity)[0] %></capacity>
<target>
<format type="<%= @format_type %>"/>
<permissions>
<owner><%= @storage_volume_uid %></owner>
<group><%= @storage_volume_gid %></group>
<mode>0744</mode>
<label>virt_image_t</label>
</permissions>
</target>
</volume>


@ -117,7 +117,7 @@
<% if d[:serial] %>
<serial><%= d[:serial] %></serial>
<% end %>
<%# this will get auto generated by libvirt
<%# this will get auto generated by Libvirt
<address type='pci' domain='0x0000' bus='0x00' slot='???' function='0x0'/>
-%>
</disk>


@ -18,6 +18,7 @@ module VagrantPlugins
management_network_autostart = env[:machine].provider_config.management_network_autostart
management_network_pci_bus = env[:machine].provider_config.management_network_pci_bus
management_network_pci_slot = env[:machine].provider_config.management_network_pci_slot
management_network_domain = env[:machine].provider_config.management_network_domain
logger.info "Using #{management_network_name} at #{management_network_address} as the management network #{management_network_mode} is the mode"
begin
@ -65,6 +66,10 @@ module VagrantPlugins
management_network_options[:mac] = management_network_mac
end
unless management_network_domain.nil?
management_network_options[:domain_name] = management_network_domain
end
unless management_network_pci_bus.nil? and management_network_pci_slot.nil?
management_network_options[:bus] = management_network_pci_bus
management_network_options[:slot] = management_network_pci_slot
@ -109,7 +114,7 @@ module VagrantPlugins
networks
end
# Return a list of all (active and inactive) libvirt networks as a list
# Return a list of all (active and inactive) Libvirt networks as a list
# of hashes with their name, network address and status (active or not)
def libvirt_networks(libvirt_client)
libvirt_networks = []


@ -16,7 +16,7 @@ en:
Created volume larger than box defaults, will require manual resizing of
filesystems to utilize.
uploading_volume: |-
Uploading base box image as volume into libvirt storage...
Uploading base box image as volume into Libvirt storage...
creating_domain_volume: |-
Creating image (snapshot of base box volume).
removing_domain_volume: |-
@ -60,7 +60,7 @@ en:
Forwarding UDP ports is not supported. Ignoring.
errors:
package_not_supported: No support for package with libvirt. Create box manually.
package_not_supported: No support for package with Libvirt. Create box manually.
fog_error: |-
There was an error talking to Libvirt. The error message is shown
below:
@ -97,7 +97,7 @@ en:
wrong_box_format: |-
Wrong image format specified for box.
fog_libvirt_connection_error: |-
Error while connecting to libvirt: %{error_message}
Error while connecting to Libvirt: %{error_message}
fog_create_volume_error: |-
Error while creating a storage pool volume: %{error_message}
fog_create_domain_volume_error: |-
@ -108,9 +108,7 @@ en:
Name `%{domain_name}` of domain about to create is already taken. Please try to run
`vagrant up` command again.
creating_storage_pool_error: |-
There was error while creating libvirt storage pool: %{error_message}
creating_volume_error: |-
There was error while creating libvirt volume: %{error_message}
There was error while creating Libvirt storage pool: %{error_message}
image_upload_error: |-
Error while uploading image to storage pool: %{error_message}
no_domain_error: |-


@ -14,7 +14,7 @@ describe VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain do
end
it 'builds simple domain name' do
@env.default_prefix = 'pre'
@env.default_prefix = 'pre_'
dmn = VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain.new(Object.new, @env)
dmn.build_domain_name(@env).should eq('pre_')
end


@ -118,7 +118,12 @@ EOF
echo "==> Creating box, tarring and gzipping"
tar cvzf "$BOX" -S --totals ./metadata.json ./Vagrantfile ./box.img
if type pigz >/dev/null 2>/dev/null; then
GZ="pigz"
else
GZ="gzip"
fi
tar cv -S --totals ./metadata.json ./Vagrantfile ./box.img | $GZ -c > "$BOX"
# if box is in tmpdir move it to CWD before removing tmpdir
if ! isabspath "$BOX"; then


@ -1,6 +1,6 @@
#!/bin/bash +x
# This script should help to prepare RedHat and RedHat like OS (CentOS,
# This script should help to prepare Red Hat and Red Hat-like OS (CentOS,
# Scientific Linux, ...) for Vagrant box usage.
# To create new box image, just install minimal base system in VM on top of not
@ -18,13 +18,13 @@ if [ $# -ne 1 ]; then
fi
# On which version of RedHet are we running?
# On which version of Red Hat are we running?
RHEL_MAJOR_VERSION=$(sed 's/.*release \([0-9]\)\..*/\1/' /etc/redhat-release)
if [ $? -ne 0 ]; then
echo "Is this a RedHat distro?"
echo "Is this a Red Hat distro?"
exit 1
fi
echo "* Found RedHat ${RHEL_MAJOR_VERSION} version."
echo "* Found Red Hat ${RHEL_MAJOR_VERSION} version."
# Setup hostname vagrant-something.


@ -1,51 +1,30 @@
# -*- encoding: utf-8 -*-
# stub: vagrant-libvirt 0.0.42 ruby lib
require File.expand_path('../lib/vagrant-libvirt/version', __FILE__)
Gem::Specification.new do |s|
s.name = "vagrant-libvirt".freeze
s.version = "0.0.45"
s.authors = ['Lukas Stanek','Dima Vasilets','Brian Pitts']
s.email = ['ls@elostech.cz','pronix.service@gmail.com','brian@polibyte.com']
s.license = 'MIT'
s.description = %q{libvirt provider for Vagrant.}
s.summary = %q{libvirt provider for Vagrant.}
s.homepage = 'https://github.com/vagrant-libvirt/vagrant-libvirt'
s.required_rubygems_version = Gem::Requirement.new(">= 0".freeze) if s.respond_to? :required_rubygems_version=
s.require_paths = ["lib".freeze]
s.authors = ["Lukas Stanek".freeze, "Dima Vasilets".freeze, "Brian Pitts".freeze]
s.files = Dir.glob("{lib,locales}/**/*.*") + %w(LICENSE README.md)
s.executables = Dir.glob("bin/*.*").map{ |f| File.basename(f) }
s.test_files = Dir.glob("{test,spec,features}/**/*.*")
s.description = "libvirt provider for Vagrant.".freeze
s.email = ["ls@elostech.cz".freeze, "pronix.service@gmail.com".freeze, "brian@polibyte.com".freeze]
s.homepage = "https://github.com/vagrant-libvirt/vagrant-libvirt".freeze
s.licenses = ["MIT".freeze]
s.rubygems_version = "2.6.14".freeze
s.summary = "libvirt provider for Vagrant.".freeze
s.name = 'vagrant-libvirt'
s.require_paths = ['lib']
s.version = VagrantPlugins::ProviderLibvirt::VERSION
s.installed_by_version = "2.6.14" if s.respond_to? :installed_by_version
s.add_development_dependency "rspec-core", "~> 3.5.0"
s.add_development_dependency "rspec-expectations", "~> 3.5.0"
s.add_development_dependency "rspec-mocks", "~> 3.5.0"
if s.respond_to? :specification_version then
s.specification_version = 4
s.add_runtime_dependency 'fog-libvirt', '>= 0.6.0'
s.add_runtime_dependency 'fog-core', '~> 2.1'
if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
s.add_development_dependency(%q<rspec-core>.freeze, ["~> 3.5.0"])
s.add_development_dependency(%q<rspec-expectations>.freeze, ["~> 3.5.0"])
s.add_development_dependency(%q<rspec-mocks>.freeze, ["~> 3.5.0"])
s.add_runtime_dependency(%q<fog-libvirt>.freeze, [">= 0.3.0"])
s.add_runtime_dependency(%q<nokogiri>.freeze, [">= 1.6.0"])
s.add_runtime_dependency(%q<fog-core>.freeze, ["~> 1.43.0"])
s.add_development_dependency(%q<rake>.freeze, [">= 0"])
else
s.add_dependency(%q<rspec-core>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<rspec-expectations>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<rspec-mocks>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<fog-libvirt>.freeze, [">= 0.3.0"])
s.add_dependency(%q<nokogiri>.freeze, [">= 1.6.0"])
s.add_dependency(%q<fog-core>.freeze, ["~> 1.43.0"])
s.add_dependency(%q<rake>.freeze, [">= 0"])
end
else
s.add_dependency(%q<rspec-core>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<rspec-expectations>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<rspec-mocks>.freeze, ["~> 3.5.0"])
s.add_dependency(%q<fog-libvirt>.freeze, [">= 0.3.0"])
s.add_dependency(%q<nokogiri>.freeze, [">= 1.6.0"])
s.add_dependency(%q<fog-core>.freeze, ["~> 1.43.0"])
s.add_dependency(%q<rake>.freeze, [">= 0"])
end
# Make sure to allow use of the same version as Vagrant by being less specific
s.add_runtime_dependency 'nokogiri', '~> 1.6'
s.add_development_dependency 'rake'
end