

Xen on Debian Wheezy (with VLAN networking and LVM storage)

This article walks through setting up a Xen virtualization host on Debian Wheezy: preparing the base operating system, installing the hypervisor, configuring bridged (and VLAN-tagged) networking, creating the first guests, and using LVM volumes for guest storage.

Preparing the operating system

The very first step in getting a Xen-capable Debian Wheezy server is to install Debian Wheezy. Really, there are no special tricks to this “step”: simply install a base system to your liking. I would recommend NOT installing any X/desktop environment at all; keep your Xen server a text-based system. Remember to install an SSH server, as you'll probably be remote-managing the system. I'll also suggest you install NTP, Vim and Screen as part of the base install. After completing debian-installer:

apt-get install ntp vim screen

The rest of this prep-section is specific to my setup; you may skip down to “Installing Xen” if you like. In my setup, I'm using two RAID sets: one hardware array with RAID1 for my root filesystem, and one software array with RAID5, used as a physical volume for LVM. To set these up, the following packages are needed:

apt-get install mdadm lvm2

Next, to create my software-RAID, I used (after a lot of testing to get acceptable IOPS from the disks):

mdadm --create /dev/md0 \
      --verbose \
      --level=5 \
      --chunk=256 \
      --raid-devices=4 /dev/sd{a,b,c,d}
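The chunk size is worth a second look. In a four-disk RAID5, each stripe holds three data chunks plus one parity chunk, so with --chunk=256 (KiB) a full-stripe write carries 768 KiB of data; writes aligned to full stripes avoid RAID5's read-modify-write penalty, which is likely part of why chunk size mattered in that IOPS testing. A quick sanity check of the arithmetic:

```shell
# Data carried by one full stripe: chunk size times the number of
# data disks (total disks minus the one chunk used for parity).
echo $(( 256 * (4 - 1) ))   # prints 768 (KiB)
```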

That creates my /dev/md0. As I said, I'm using that as a PV for LVM:

pvcreate /dev/md0
vgcreate sraid5 /dev/md0

Installing Xen

With the basic operating system installed and (lightly) prepared, it is time to take the plunge and install Xen itself.

apt-get install xen-linux-system

After the package is installed, the Debian documentation suggests a few tweaks. The first recommendation is to re-prioritize the Xen hypervisor vs. the Linux kernel in Grub. If you do not do this, your system will by default _not_ boot Xen, but your “normal” kernel instead. So, to have your system boot to Xen automagically:

dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen

The “undo” for this dpkg-divert is:

dpkg-divert --rename --remove /etc/grub.d/20_linux_xen && update-grub

By default, Xen will allocate as much memory as it can to the controlling system (dom0), and use “ballooning” to free up memory from dom0 for guests/VMs when memory runs low. If dom0 should ever run low on memory (and that may easily happen on a busy Xen host with ballooning on dom0), you're in for some interesting behaviour. For a system dedicated to Xen, ballooning the dom0 is discouraged, and its memory should be locked to a sane value. Note that this memory will be locked to the dom0, and not available to guests.

First, start by adding a kernel-option to 'fix' the memory size:

cat <<END >>/etc/default/grub
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=768M"
END
update-grub

This also needs to be reflected in Xen configuration. Edit /etc/xen/xend-config.sxp and change these two lines:

(dom0-min-mem 196)
(enable-dom0-ballooning yes)

… to this …

(dom0-min-mem 768)
(enable-dom0-ballooning no)

Now is a good time to reboot your system, to bring it up running the Xen hypervisor, with a dom0 running the Linux kernel. So, reboot, and watch your system load Grub, then Xen, then the Linux kernel, and finally boot into Debian. After your server has completed booting, you can verify that you are in fact running Xen:

~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   767     4     r-----     20.6

Network configuration

The first thing to know about Xen networking is that all networking for guests (domUs) is done over bridge interfaces. So any interface that should carry network traffic for guests needs to be “overloaded” with a bridge. You should also know that Xen can handle networking for guests somewhat automagically. If a Xen domU configuration contains a blank “vif” definition, Xen (or rather Xend) will automagically allocate a MAC address and create a virtual interface connected to the first bridge it can find on the system. My experiments show that the Xen and Debian-Xen documentation is incorrect in claiming that the bridge “xenbr0” will be used if it exists. What I've experienced is that the alphabetically first bridge on the system will be used for automatic assignment.
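To see which bridge that would be on a given system, it's enough to sort the bridge names. A minimal illustration, using the bridge names from my setup (where the naming is deliberately chosen so that vlan2 sorts first and xenlocal sorts last):

```shell
# Sort candidate bridge names the way Xend effectively does;
# the first one is what automatic vif assignment will pick.
printf '%s\n' xenbr0 vlan2 xenlocal | sort | head -n1   # prints vlan2
```

On a live system you would feed it the real bridge list, e.g. the first column of 'brctl show'.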

To read up on network bridging on Debian, see the official documentation and 'man 5 bridge-utils-interfaces'.

Let's start with the really simple Xen network setup: you have one network interface (let's call it eth0), you want to use that interface for Xen guests as well as host communication, and you use static IP addressing. In the example, please note that the configuration for eth0 is commented out; it is included simply for reference so you can compare the “before” and “after”.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
# IS NO LONGER IN USE, see xenbr0
#allow-hotplug eth0
#iface eth0 inet static
#	address
#	netmask
#	gateway

auto xenbr0
iface xenbr0 inet static
        bridge_ports eth0

Yes. That is all that is needed for the simplest Xen networking setup. Xen will automatically create virtual interfaces (vifs) for guests and attach them to this bridge. Before going on to my more involved setup, you may want to read up on how this “magic” happens.

I'm running a network setup for Xen that is a bit more complex than the standard “one ethernet interface for everything” that seems to be the norm for smaller Xen configurations. In my setup, I have one network interface that I use only for managing the Xen server and communication with dom0, a setup for local-only communication between guests, and one network interface that talks to several VLANs (Virtual LANs) using 802.1q tagging. I have a separate text talking about VLAN networking on Debian, so for details regarding just that part, see the VLAN article before proceeding.

My /etc/network/interfaces is fairly verbosely commented, so I'm including it directly and letting it speak for itself.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface, used for management and in the future, iSCSI
# In my setup I'm actually using DHCP for address, with the address locked down
# on the DHCP-server side..
allow-hotplug eth0
iface eth0 inet dhcp

# Create a dummy interface to use with a "local-only" 
# bridge to use for domU-to-domU pure local traffic
auto dummy0
iface dummy0 inet manual
         pre-up ifconfig $IFACE up
         post-down ifconfig $IFACE down

# Bring up eth1, but with no address configuration
# eth1 will be used later as a bridge-port interface.
# It is also connected to a switchport that does VLAN
# trunking with native (untagged) VLAN 2 and VLANs 3,4 and 5 tagged.
auto eth1
iface eth1 inet manual
	pre-up ifconfig $IFACE up
	post-down ifconfig $IFACE down

# Creating VLANs has become a lot better in Wheezy.
# All that is needed to create a VLAN subinterface now,
# is to add a stanza for a ethX.Y subinterface. Y becomes
# the VLAN ID.

# Adding VLANs tagged on eth1. These are simply
# brought up, they will be added as bridge-ports later.

auto eth1.3
iface eth1.3 inet manual
	pre-up ifconfig $IFACE up
	post-down ifconfig $IFACE down

auto eth1.4
iface eth1.4 inet manual
	pre-up ifconfig $IFACE up
	post-down ifconfig $IFACE down

auto eth1.5
iface eth1.5 inet manual
	pre-up ifconfig $IFACE up
	post-down ifconfig $IFACE down

# Add the bridge for pure local-only communication between
# domU's. Just for fun, I'm setting an IP on this interface..
# The name of this interface is chosen to be both descriptive,
# and at the same time be sorted last, so that Xend does not choose
# this bridge as the default for automatic vif creation.
auto xenlocal
iface xenlocal inet static
        bridge_ports dummy0
	bridge_stp off

# None of the interfaces below have IP addresses assigned. Any domU hosts using
# these are not supposed to use the bridge for direct communication with dom0.

# The configuration for the VLANs is very repetitive: An 'auto' line to make
# sure the interface comes up automagically, an 'iface' line setting the bridge
# name and defining it to 'manual' configuration, definition of the VLAN
# subinterface of eth1, and finally setting Spanning Tree to 'on'.
# If even more control over the bridge interfaces is needed,
# read 'man 5 bridge-utils-interfaces'

# For the VLANs, I used 'vlanX' as bridge-name, as this is far
# more descriptive in the domU-configs than "xenbr143"
# NOTE: In my setup, I want guest systems to start up in VLAN2 by default.
# With the naming I've used for the interfaces, this interface is sorted
# first, so the magic works.

# eth1 is used as physical-interface for both untagged and for tagged VLANs,
# please review the separate eth1 and eth1.X definitions above, and separate
# vlanX bridges below. 

auto vlan2
iface vlan2 inet manual
        bridge_ports eth1
        bridge_stp on
	bridge_maxwait 0

auto vlan3
iface vlan3 inet manual
	bridge_ports eth1.3
	bridge_stp on
	bridge_maxwait 0

auto vlan4
iface vlan4 inet manual
	bridge_ports eth1.4
	bridge_stp on
	bridge_maxwait 0

auto vlan5
iface vlan5 inet manual
	bridge_ports eth1.5
	bridge_stp on
	bridge_maxwait 0

Creating the first virtual machine (domU)

For a first domU to test, it's a good idea to keep things simple. One of the simpler scenarios is to use a disk-image as storage, and use the Xen installation configuration provided by Debian for Debian Squeeze (at the moment, squeeze is 'stable', wheezy is 'testing').

First up, we need to create a file to use as the virtual disk. You can create the file in many ways; my preferred method is to use the qemu-img tool (it's quick and the syntax is simple).

mkdir /srv/xen
qemu-img create -f raw /srv/xen/sqeeze-first.img 14G
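If qemu-img isn't at hand, a sparse file of the same size can be created with plain coreutils instead; a sketch (the path here is just an example):

```shell
# Create a 14G sparse file: the apparent size is 14G, but disk blocks
# are only allocated as the guest actually writes to them.
truncate -s 14G /tmp/demo-disk.img
ls -lh /tmp/demo-disk.img
```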

The debian-installer project provides pre-made Xen/xm configurations for installing and running Debian. Fetch one of the netboot configurations for squeeze (amd64 or i386). You can save the configuration file wherever you like, but it may be a good idea to plan for production use already.

mkdir -p /etc/xen/guests/
wget -O /etc/xen/guests/squeeze-xm-debian-amd64.cfg \
cp /etc/xen/guests/squeeze-xm-debian-amd64.cfg /etc/xen/guests/squeeze-first.cfg

This creates a directory to hold Xen domU configurations, fetches the debian-installer-provided Xen configuration for Debian Squeeze amd64 and saves it to the newly created /etc/xen/guests/ directory under a more descriptive name. Finally, a copy of this file is made, keeping the original as a template.

The copied file needs to be modified in two places: the domain name needs to be changed, and the path to our created disk image needs to be specified as the disk location. You can do this in two ways: edit the file and look for the two settings “name =” and “disk =”, or use the following sed:

sed -i /etc/xen/guests/squeeze-first.cfg \
    -e "s/ExampleDomain/squeeze-first/" \
    -e "s|/path/to/disk.img|/srv/xen/sqeeze-first.img|"

To start the installer using the Debian-provided configuration, use the following invocation:

xm create /etc/xen/guests/squeeze-first.cfg install=true install-method=network

This will first download the kernel and initrd required to do a netinstall, then start a new domU running the installer. After you receive the message “Started domain squeeze-first”, connect to it to continue the install:

xm console squeeze-first

You should now see a text-based black-and-white version of the Debian Installer. Install the OS as you normally would. Note that we have made no modifications to the domU config except name and disk location. This means that the VM will not have a framebuffer available, so make sure not to install any desktop environment…

After the installation completes, the VM should shut down, and drop you back at your Xen server's prompt. To bring the VM back up, running the installed OS, repeat “xm create”, but without the “install*” arguments, and connect to the local console using the same “xm console” as before:

xm create /etc/xen/guests/squeeze-first.cfg
xm console squeeze-first

To get back out of the console, press CTRL-] (on some keyboard layouts, CTRL-5).

If the VM you created is actually going to be used, and not just be a simple demo/test, you should now modify the configuration file to lock down the network interface's MAC address. If you do not, a new MAC address will be generated on each boot. Figure out the MAC address (e.g. by connecting to the console and using ifconfig), note it, and open up the configuration. Look for the line that starts with “vif =”. Change that line to one of:

vif = ['mac=00:16:3e:00:00:11']
vif = ['mac=00:16:3e:00:00:11, bridge=xenbr0']
vif = ['mac=00:16:3e:00:00:11, bridge=vlan2']

Naturally, you'll need to replace the MAC here with the one you got from your VM. The first example will use the given MAC and autoselect a bridge (typically xenbr0). The second form explicitly selects the bridge to use. The third simply shows that the bridge does not need to be called “xenbrX”.
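The 00:16:3e prefix is not arbitrary: it is the OUI (vendor prefix) reserved for Xen virtual interfaces. If you would rather pick a MAC yourself than copy the auto-generated one, a bash one-liner can produce a random address in that range (an illustrative helper, not part of Xen's own tooling):

```shell
# Generate a random MAC inside the Xen-reserved 00:16:3e OUI (bash).
printf '00:16:3e:%02x:%02x:%02x\n' \
       $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```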

A method for getting the MAC address of a VM without actually logging in to the VM is to look it up in xenstore. This can be done during installation, which gives you the option of locking down the MAC before the installation completes. Replace DOMAINNAME with the name of the domU, e.g. squeeze-first:

xenstore-ls /local/domain/$(xm domid DOMAINNAME)/device/vif/0 | grep "mac ="

Creating a squeeze domU with LVM storage

For the second VM/guest/domU, we are going to do a couple of things differently. First of all, an LVM volume will be used as the domU disk. Secondly, we'll skip using the “debian-provided” configuration file, and create one ourselves.

Starting off, we'll create the LVM volume. I explained my software-RAID and LVM initialization previously, so I assume you are up to date on that. To understand the commands below, remember that my volume group is called “sraid5”. To create the volume for the new domU, simply use “lvcreate” as you prefer. One example may be:

lvcreate sraid5 -n squeezelvm -L 14G

With that done, we need to prepare for OS installation before writing the domU configuration. With the “debian-provided” configuration, the kernel and initrd are pulled from the 'net for us. This time, we want to download these files manually and tell Xen to boot our local files. I prefer keeping boot files in a structure that allows me to keep multiple distributions, versions and architectures available. Keeping the structure up to date can be tedious, so I've scripted the process:

mkdir -p /srv/xen/boot/debian/{squeeze,wheezy}/{amd64,i386}
# DEBINSTROOT should point at the dists/ directory of a Debian mirror,
# e.g. DEBINSTROOT=http://ftp.debian.org/debian/dists
for DIST in squeeze wheezy; do
  for ARCH in i386 amd64; do
    for FILE in vmlinuz initrd.gz; do
      wget ${DEBINSTROOT}/${DIST}/main/installer-${ARCH}/current/images/netboot/xen/${FILE} \
           -nc -O /srv/xen/boot/debian/${DIST}/${ARCH}/${FILE}
    done
  done
done

This creates the following structure (under /srv/xen/boot):

└── debian
    ├── squeeze
    │   ├── amd64
    │   │   ├── initrd.gz
    │   │   └── vmlinuz
    │   └── i386
    │       ├── initrd.gz
    │       └── vmlinuz
    └── wheezy
        ├── amd64
        │   ├── initrd.gz
        │   └── vmlinuz
        └── i386
            ├── initrd.gz
            └── vmlinuz

Whew! With that out of the way, you can proceed to creating a minimal configuration for the new VM. This configuration is first set up to do installation. Save this to /etc/xen/guests/squeeze-lvm.cfg:

name = "squeeze-lvm"
memory = 128
vif = ['']
disk = ['phy:sraid5/squeezelvm,xvda,w']

kernel = "/srv/xen/boot/debian/squeeze/amd64/vmlinuz"
ramdisk = "/srv/xen/boot/debian/squeeze/amd64/initrd.gz"
extra = "debian-installer/exit/always_halt=true -- console=hvc0"
on_crash = 'destroy'
on_reboot = 'destroy'

Here, the domU name (domain name, guest name) is set to “squeeze-lvm”, the VM is given 128MB of RAM, told to autogenerate networking, and to use a physical disk device, with volgroup/volname, attached as xvda in read-write mode. After that, the domU is set to load up the installation kernel and initrd, and to stop when the installer exits or the VM reboots.

Great! Start the installation by creating the domU instance:

xm create /etc/xen/guests/squeeze-lvm.cfg
xm console squeeze-lvm

After the installation is complete, the configuration needs to be modified. Most importantly, a change from direct kernel loading to a bootloader is needed. The reboot behaviour should also be set to actually reboot, not halt. So, edit the /etc/xen/guests/squeeze-lvm.cfg file:

name = "squeeze-lvm"
memory = 128
vif = ['']
disk = ['phy:sraid5/squeezelvm,xvda,w']

bootloader = "pygrub"

on_reboot = 'restart'
on_crash = 'destroy'

Start up the installed system like previously:

xm create /etc/xen/guests/squeeze-lvm.cfg
xm console squeeze-lvm

Now would be a good time to edit the configuration again and “lock down” the MAC address and bridge name. Other than that, you are basically done creating this guest OS.

Mounting an LVM partition or volume that contains partitions

Sometimes it is useful, or even necessary, to open up a volume outside the guest OS. Scenarios may be that you have b0rked the boot process or in some other way rendered your system unusable. To do this, two major things are needed:

  • The guest/domU must be shut down
  • You need a tool for accessing partitions inside an LVM volume

The first can be accomplished from inside the guest OS, or, if your domU is properly broken, by using “xm shutdown <name|id>” or “xm destroy <name|id>”.

When it comes to tools, several are available. The approach I liked best was using “kpartx”, originally part of “multipath-tools”.

apt-get install kpartx

To get access to the partitions of the LVM volume, have kpartx examine the volume, and create device-mapper nodes for the contents:

~# kpartx -a /dev/sraid5/squeezelvm
~# ls -l /dev/mapper/sraid5-squeezelvm*
lrwxrwxrwx 1 root root 7 Jun 22 16:36 /dev/mapper/sraid5-squeezelvm -> ../dm-0
lrwxrwxrwx 1 root root 7 Jun 22 18:22 /dev/mapper/sraid5-squeezelvm1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Jun 22 18:22 /dev/mapper/sraid5-squeezelvm2 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jun 22 18:22 /dev/mapper/sraid5-squeezelvm5 -> ../dm-3

As you can see, all partitions created during the Debian install are now available through /dev/mapper/… Accessing the content is now a simple mount:

~# mount /dev/mapper/sraid5-squeezelvm1 /mnt
~# ls /mnt
bin  boot  dev  etc  home  initrd.img  ....

After you are done poking around the volume, make sure to unmount and clean up. If you do not properly clean up with kpartx, your LVM volume will be marked as “in use” and be unavailable to your VM.

~# umount /mnt/
~# kpartx -d /dev/sraid5/squeezelvm

You can also use kpartx to access partitions and subvolumes inside disk images, and even LVM volumes created inside LVM volumes. Have a look at this blog entry for some ideas.

Note that “Rich” has a comment on that page recommending “guestfish”. libguestfs/guestfish are probably good tools, but I wanted something that did not rely on FUSE. Also note that “Rich” seems to be the maintainer of libguestfs…

Creating a persistent domU using an LVM volume as a template

Doing this involves creating an LVM snapshot and writing a domU configuration that uses the snapshot as its root disk. I'm not going to go into great detail here, simply because I've experienced how badly a Xen guest can crash if an LVM snapshot is used as its disk and the snapshot volume runs out of space. So instead of documenting how to do this, I'll send you off to Linuxtopia's chapter from their Xen 3.0 Virtualization User Guide, “6.3 Using LVM-backed VBDs”, where you can read both how to do it and some words of caution.

