We have two Debian Jessie servers which we are going to configure as Xen hosts (domain 0), with DRBD providing the replicated storage needed for live migration. The main goal of this project is to test Xen live migration.
Xen Project Hypervisor
When we say “Xen” here, we refer to the Xen Project Hypervisor. Don’t confuse it with Citrix’s XenServer.
The Xen Project Hypervisor is an open-source type-1 (bare-metal) hypervisor. Guest virtual machines running on a Xen Project Hypervisor are known as “domains” (domU); a special privileged domain, dom0 (domain zero), is responsible for controlling the hypervisor and starting the other guest operating systems.
The Xen hypervisor supports two primary types of virtualisation: para-virtualisation (PV) and hardware virtual machine (HVM), also known as “full virtualisation”.
PV does not require virtualisation extensions from the host CPU. Paravirtualised operating systems are aware that they are being virtualised and as such don’t require virtual “hardware” devices; instead, they make special calls to the hypervisor that allow them to access CPU, storage and network resources.
In contrast, HVM requires Intel VT or AMD-V hardware extensions. HVM guests need not be modified, as the hypervisor creates a fully virtual set of hardware devices for the machine that resemble a physical x86 computer. This emulation carries much more overhead than the paravirtualisation approach, but allows unmodified guest operating systems like Microsoft Windows to run on top of the hypervisor.
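The difference shows up in the guest configuration. Below is a minimal, hypothetical PV domain definition for the xl toolstack (all names, paths and devices are placeholders, not part of this build); an HVM guest would instead use builder = "hvm" with emulated firmware and devices rather than a dom0-supplied kernel.

# /etc/xen/testvm.cfg -- hypothetical PV guest, for illustration only
name    = "testvm"
kernel  = "/boot/vmlinuz-3.16.0-4-amd64"    # PV guests boot a kernel provided by dom0
ramdisk = "/boot/initrd.img-3.16.0-4-amd64"
extra   = "root=/dev/xvda ro"               # kernel command line for the guest
memory  = 512
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]               # the default bridge, configured later in this article
disk    = [ '/dev/vg0/testvm,raw,xvda,rw' ] # placeholder block device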
Dom0 Installation and Configuration
We are going to use Debian Jessie as our domain 0 system. We have two servers ready, named xen01 and xen02. The servers have 2 CPU cores and 2GB of RAM each.
Both servers have two network interfaces: eth0, used for DRBD communication (a non-routable VLAN on 172.16.22.0/24), and eth1, connected to the LAN (10.8.8.0/24).
There is a DHCP/DNS server on the LAN that issues static DHCP leases based on a client’s MAC address.
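For reference, a static lease of this kind in ISC DHCP server syntax looks something like the entry below (the MAC address and IP are placeholders; our actual DHCP server configuration is outside the scope of this article):

host xen01 {
  hardware ethernet 08:00:27:aa:bb:01;  # placeholder MAC address
  fixed-address 10.8.8.13;              # placeholder LAN IP for xen01
}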
The convention followed in the article is that [ALL]# denotes a command that needs to be run on both Xen nodes.
[xen01]# hostnamectl set-hostname xen01.hl.local
[xen02]# hostnamectl set-hostname xen02.hl.local
[ALL]# cat /etc/debian_version
8.4
For those who do not have access to (spare) physical hardware to install Xen on, VirtualBox guests can be used as dom0.
One thing to note is that VirtualBox cannot pass the VT-x features through to a guest, which would be necessary to allow nested HVM guests. Therefore only paravirtualised domains can be used, as they don’t require virtualisation extensions from the host CPU.
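Before installing Xen, you can check whether the (virtual) CPU exposes hardware virtualisation extensions by looking for the vmx (Intel) or svm (AMD) flags; no output means only PV guests will be possible:

[ALL]# egrep -o '(vmx|svm)' /proc/cpuinfo | sort -u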
Xen Packages
[ALL]# apt-get update
[ALL]# apt-get install xen-linux-system-amd64 bridge-utils
Make GRUB prefer to boot Xen by moving the Xen menu entries ahead of the standard Linux ones:
[ALL]# dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
[ALL]# update-grub
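To confirm that the Xen entries now come first, you can list the generated menu entries (the exact titles will vary with your kernel and Xen versions):

[ALL]# grep -E '^(menuentry|submenu)' /boot/grub/grub.cfg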
Networking
Configure a software bridge on both Xen nodes. The content of our /etc/network/interfaces file can be seen below. Note the static IP addresses used for DRBD.
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

# Leaving allow-hotplug causes the following:
# *** A start job is running for LSB: Raise network interfaces. (15min 10s / no limit)
#allow-hotplug eth1
iface eth1 inet manual

# The default bridge that the toolstack expects is called xenbr0
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth1
    # Other possibly useful options in a virtualised environment
    bridge_stp off       # disable Spanning Tree Protocol
    bridge_waitport 0    # no delay before a port becomes available
    bridge_fd 0          # no forwarding delay

# DRBD heartbeat vlan
auto eth0
iface eth0 inet static
    address 172.16.22.13  # .13 is xen01, .14 is xen02
    netmask 255.255.255.0
Apply the new network configuration:
[ALL]# systemctl restart networking
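Once networking is back up, verify that the bridge exists and has eth1 enslaved. The output will look something like the following (the bridge id will differ on your hardware):

[ALL]# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.080027aabb01       no              eth1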
Configure Dom0 Memory and CPU
On both Xen nodes, open the file /etc/default/grub and configure the following:
GRUB_TIMEOUT=1
GRUB_DISABLE_LINUX_UUID=true
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin"
Remember to apply the change to the grub configuration:
[ALL]# update-grub
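The generated configuration should now boot the hypervisor via a multiboot line carrying our dom0 parameters. A quick check (the Xen binary name depends on the installed version, so the line below is illustrative):

[ALL]# grep -m1 multiboot /boot/grub/grub.cfg
        multiboot       /boot/xen-4.4-amd64.gz placeholder dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin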
Guest Behaviour on Dom0 Reboot
By default, when Xen dom0 shuts down or reboots, it tries to save (read: hibernate) the state of the domUs. We want to make sure they get shut down normally instead.
[ALL]# grep -ve "^#" -ve "^$" /etc/default/xendomains
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=
XENDOMAINS_AUTO=/etc/xen/auto
XENDOMAINS_STOP_MAXWAIT=300
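With XENDOMAINS_AUTO pointing at /etc/xen/auto, any guest whose configuration file is symlinked into that directory will be started automatically at boot. For example (testvm.cfg being a hypothetical guest config):

[ALL]# ln -s /etc/xen/testvm.cfg /etc/xen/auto/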
Toolstack
The xl toolstack was introduced in the Xen 4.1 release, although xend remained the default at the time. As of Xen 4.2, xend/xm are deprecated (they were removed in Xen Project 4.5) and xl should now be used by default.
[ALL]# grep -ve "^#" /etc/default/xen
TOOLSTACK=xl
When finished configuring the dom0, reboot the servers:
[ALL]# systemctl reboot
[xen01]# xl info
host                   : xen01.hl.local
release                : 3.16.0-4-amd64
version                : #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08)
machine                : x86_64
nr_cpus                : 2
max_cpu_id             : 1
nr_nodes               : 1
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 2294
hw_caps                : 178bfbff:28100800:00000000:00003300:d6d82203:00000000:00000021:00002020
virt_caps              :
total_memory           : 2047
free_memory            : 998
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : .1
xen_version            : 4.4.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin
cc_compiler            : gcc (Debian 4.9.2-10) 4.9.2
cc_compile_by          : jmm
cc_compile_domain      : debian.org
cc_compile_date        : Tue Mar 15 22:37:15 CET 2016
xend_config_format     : 4
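The xen_commandline field confirms that our boot parameters were applied. Dom0’s memory and vCPU pinning can also be checked directly; the Time(s) values below are illustrative, but with dom0_vcpus_pin the CPU Affinity should show 0 rather than all:

[xen01]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     1     r-----      42.0
[xen01]# xl vcpu-list Domain-0
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     0   r--      42.0 0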
References
https://wiki.debian.org/Xen