OpenVZ shutdown, xfs and bind mounts oddness

wrichter

New Member
Jan 7, 2010
Hi, I have an odd problem regarding the "shutdown" functionality of OpenVZ containers in Proxmox VE ("stop" works fine).

I'm on a bare-metal install of 1.4:

Code:
proxmox:~# pveversion --verbose
pve-manager: 1.4-10 (pve-manager/1.4/4403)
qemu-server: 1.1-8
pve-kernel: 2.6.24-18
pve-qemu-kvm: 0.11.0-2
pve-firmware: 1
vncterm: 0.9-2
vzctl: 3.0.23-1pve3
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
proxmox:~# uname -a
Linux proxmox 2.6.24-9-pve #1 SMP PREEMPT Tue Nov 17 09:34:41 CET 2009 x86_64 GNU/Linux
I have amended my HN's LVM configuration to add a second volume group on a software RAID 1 (/dev/md0):

Code:
proxmox:~# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/md0   raid1data lvm2 a-     1.36T 1.17T
  /dev/sda2  pve       lvm2 a-   118.74G 3.99G
proxmox:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  pve         1   3   0 wz--n- 118.74G 3.99G
  raid1data   1   2   0 wz--n-   1.36T 1.17T
proxmox:~# lvs
  LV     VG        Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  data   pve       -wi-ao  81.00G
  root   pve       -wi-ao  29.75G
  swap   pve       -wi-ao   4.00G
  test   raid1data -wi-a- 100.00G
  videos raid1data -wi-ao 100.00G
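(For reference, roughly how that second VG was set up - reconstructed from the output above, not necessarily the exact commands I ran:)

Code:
# PV on the software RAID 1 device, new VG, and the two LVs
pvcreate /dev/md0
vgcreate raid1data /dev/md0
lvcreate -L 100G -n test raid1data
lvcreate -L 100G -n videos raid1data
# XFS filesystem for the shared data, mounted on the HN
mkfs.xfs /dev/raid1data/videos
mkdir -p /mnt/videos
mount -o noatime /dev/raid1data/videos /mnt/videos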
And I've created an XFS file system on /dev/raid1data/videos, which is mounted at /mnt/videos on the HN. To share this filesystem with a container, I've created bind mount scripts as described in the OpenVZ wiki:

Code:
proxmox:~# cat /etc/vz/conf/102.mount
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
mount --bind /mnt/videos ${VE_ROOT}/mnt/videos

proxmox:~# cat /etc/vz/conf/102.umount
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
umount ${VE_ROOT}/mnt/videos || exit 0
The container (based on Debian Lenny) starts up fine, and the bind mount shows the expected content in the container's /mnt/videos.

Code:
proxmox:~# mount
/dev/pve/root on / type ext3 (rw,noatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,noatime)
/dev/sda1 on /boot type ext3 (rw)
/dev/mapper/raid1data-videos on /mnt/videos type xfs (rw,noatime)
/mnt/videos on /var/lib/vz/root/102/mnt/videos type none (rw,bind)
proxmox:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/pve/root          30G  1.6G   27G   6% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
udev                   10M  2.7M  7.4M  27% /dev
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/mapper/pve-data   80G  2.6G   78G   4% /var/lib/vz
/dev/sda1             504M   50M  430M  11% /boot
/dev/mapper/raid1data-videos
                      100G   42M  100G   1% /mnt/videos
When I stop the container via the web UI, everything works as expected. However, when I use shutdown via the web UI, something odd happens: /mnt/videos on the HN is actually unmounted, even though the mount command still lists it as mounted. What is now visible under /mnt/videos on the HN is the underlying directory, and df reports the same figures for /mnt/videos as for the / filesystem:

Code:
proxmox:~# mount
/dev/pve/root on / type ext3 (rw,noatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,noatime)
/dev/sda1 on /boot type ext3 (rw)
/dev/mapper/raid1data-videos on /mnt/videos type xfs (rw,noatime)
proxmox:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/pve/root          30G  1.6G   27G   6% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
udev                   10M  2.7M  7.4M  27% /dev
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/mapper/pve-data   80G  2.6G   78G   4% /var/lib/vz
/dev/sda1             504M   50M  430M  11% /boot
/dev/mapper/raid1data-videos
                       30G  1.6G   27G   6% /mnt/videos
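My guess is that the kernel has actually dropped the xfs mount while /etc/mtab (which the mount command reads) still lists it; I didn't capture it at the time, but comparing the two should show the discrepancy:

Code:
# compare the kernel's mount table with mount's view in /etc/mtab
grep videos /proc/mounts /etc/mtab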
Issuing a umount command produces error messages but seems to clean up the problem:

Code:
proxmox:~# umount /mnt/videos
umount: /dev/mapper/raid1data-videos: not mounted
umount: /dev/mapper/raid1data-videos: not mounted
proxmox:~# mount
/dev/pve/root on / type ext3 (rw,noatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,noatime)
/dev/sda1 on /boot type ext3 (rw)
I was not able to replicate this problem outside the OpenVZ environment, e.g. by creating a bind mount on the HN and unmounting it several times.

Any idea what's going on here? My assumption is that the "shutdown" button switches the container to runlevel 0, where the S40umountfs script issues "umount -f -r -d /mnt/videos" inside the container. Then, on the HN, the 102.umount script (tries to) unmount the bind mount of /mnt/videos at /var/lib/vz/root/102/mnt/videos. However, there must be some special ingredient in the shutdown path, since I was not able to replicate the behaviour by issuing both commands by hand in the container and on the HN respectively.
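For reference, this is roughly what I tried by hand when attempting to reproduce it (the exact S40umountfs invocation is my guess at what the container runs):

Code:
# what the container presumably runs at runlevel 0 (issued from the HN here):
vzctl exec 102 umount -f -r -d /mnt/videos
# what the 102.umount script runs on the HN:
umount /var/lib/vz/root/102/mnt/videos
# issued by hand like this, /mnt/videos on the HN stays mounted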

Of course, the desired behaviour would be for the HN's /mnt/videos file system not to be unmounted at all :-)
 
What script do you use to bind mount? Do you use an <VMID>.umount script? If so, try without.
 
What script do you use to bind mount? Do you use an <VMID>.umount script? If so, try without.

Hi Dietmar,

thanks for the quick response! I use the following <VMID>.mount script:

Code:
proxmox:~# cat /etc/vz/conf/102.mount
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
mount --bind /mnt/videos ${VE_ROOT}/mnt/videos
And as suggested in the OpenVZ Wiki I also use an umount script:

Code:
proxmox:~# cat /etc/vz/conf/102.umount
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
umount ${VE_ROOT}/mnt/videos || exit 0

When I remove the umount script, the problem does not occur. Is this the recommended solution, or just a workaround covering up another problem (the OpenVZ wiki states "But you'd better have [the umount script] anyway")?
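A more defensive variant of the umount script might be to only unmount if the path is still a mount point - an untested sketch, and I don't know whether it actually avoids the problem:

Code:
#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
# only try to unmount if the bind mount is actually still mounted
if mountpoint -q ${VE_ROOT}/mnt/videos; then
    umount ${VE_ROOT}/mnt/videos
fi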

What's the difference between stop and shutdown anyway? Is it one killing all processes immediately vs. a coordinated shutdown via runlevel 0?
 
When I remove the umount script, the problem does not occur.

I never use an umount script.

What's the difference between stop and shutdown anyway? Is it one killing all processes immediately vs. a coordinated shutdown via runlevel 0?

see 'man vzctl'
 
Oh, sorry - it is 'stop' vs. 'stop --fast'

Great - thanks. The man page doesn't mention '--fast' :rolleyes: but the OpenVZ wiki does - for other readers' reference:

vzctl has a two-minute timeout for the CT shutdown scripts to be executed. If the CT is not stopped in two minutes, the system forcibly kills all the processes in the Container. The Container will be stopped in any case, even if it is seriously damaged. To avoid waiting for two minutes in case of a Container that is known to be corrupt, you may use the --fast switch.

[...]


Make sure that you do not use the --fast switch with healthy CTs, unless necessary, as the forcible killing of CT processes may be potentially dangerous.
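So, if I read that right, the two web UI buttons boil down to something like the following on the HN (my assumption about the exact mapping; CT 102 as above):

Code:
# "Shutdown" - graceful stop: runs the CT's shutdown scripts (runlevel 0),
# with the two-minute timeout described above
vzctl stop 102
# "Stop" - forcible stop: kills all CT processes immediately
vzctl stop 102 --fast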