read-write sharing, export the directory to other domains via NFS from
domain0 (or use a cluster file system such as GFS or ocfs2).
5.2 Using File-backed VBDs
It is also possible to use a file in Domain 0 as the primary storage for a virtual machine.
As well as being convenient, this has the advantage that the virtual block device
will be sparse: space is only allocated as parts of the file are actually written. So
if a virtual machine uses only half of its disk space, the file occupies only half of
the allocated size.
For example, to create a 2GB sparse file-backed virtual block device (which actually
consumes only about 1KB of disk):
# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1
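You can verify that the file is sparse by comparing its apparent size with its actual allocation (the /tmp path below is only illustrative; use whatever directory you created the file in):

```shell
# Create the sparse 2GB backing file (same command as above):
dd if=/dev/zero of=/tmp/vm1disk bs=1k seek=2048k count=1

# Compare apparent size with actual disk usage:
ls -lh /tmp/vm1disk   # apparent size: ~2.0G
du -h /tmp/vm1disk    # actual usage: only a few KB
```

The seek operand moves the write position 2GB into the file without writing the intervening bytes, so the file system records a hole rather than allocating blocks.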
Make a file system in the disk file:
# mkfs -t ext3 vm1disk
(when the tool asks for confirmation, answer ‘y’)
Populate the file system e.g. by copying from the current root:
# mount -o loop vm1disk /mnt
# cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
# mkdir /mnt/{proc,sys,home,tmp}
Tailor the file system by editing /etc/fstab, /etc/hostname, etc. (be sure to
edit the files in the mounted file system rather than those of your domain 0 file system,
e.g. edit /mnt/etc/fstab instead of /etc/fstab). For this example, set
/dev/sda1 as the root device in fstab.
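For instance, the root entry in /mnt/etc/fstab might look like the following (the mount options and fsck fields shown are only illustrative):

```
/dev/sda1   /   ext3   defaults   1   1
```

The device name /dev/sda1 matches the virtual device name given in the domain's disk configuration line, not any device in domain 0.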
Now unmount (this is important!):
# umount /mnt
In the configuration file set:
disk = ['file:/full/path/to/vm1disk,sda1,w']
As the virtual machine writes to its ‘disk’, the sparse file will be filled in and consume
more space up to the original 2GB.
Note that file-backed VBDs may not be appropriate for backing I/O-intensive
domains. File-backed VBDs are known to experience substantial slowdowns under
heavy I/O workloads, due to the I/O handling by the loopback block device used to
support file-backed VBDs in dom0. Better I/O performance can be achieved by using
either LVM-backed VBDs (Section 5.3) or physical devices as VBDs (Section 5.1).
Linux supports a maximum of eight file-backed VBDs across all domains by default.
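This limit comes from the number of loop devices the dom0 kernel provides. If the loop driver is compiled as a module, the limit can usually be raised with its max_loop module parameter (64 below is just an example value); if it is built into the kernel, the equivalent max_loop=64 kernel boot option applies:

```shell
# Reload the loop module with more devices available
# (only possible when loop is built as a module):
rmmod loop
modprobe loop max_loop=64
```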