GlusterFS and libgfapi problem

jinjer

Hi all,

I built a cluster of 4 Proxmox nodes and shared their storage via GlusterFS. I'm hitting a problem when trying to set up a KVM image on top of Gluster: Proxmox is able to create the disk images on top of it (mount etc.), but when KVM starts it cannot write to the disk at all. I followed the normal installation recommendations and the setup is fairly simple:

- bonded gigabit NICs for the Gluster network
- bonded gigabit NICs + bridges for Proxmox + KVM

The GlusterFS volume is accessible when mounted by itself. I added it as a storage under the Proxmox GUI, type glusterfs, content type: all (images/iso/backup etc.). I tried creating the storage mounted via "localhost" on all nodes, and also using the IP address of the Gluster bond. I tried both raw and qcow2 images, and both Win2k8 and FreeBSD as guest OS. The result is always the same: at install time, when the OS tries to partition the disk, I get write errors.

Here are my versions:
Code:
# pveversion -v
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-1
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-2
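For reference, here is roughly what my storage entry in /etc/pve/storage.cfg looks like (a sketch from memory; server v10g and volume pool match the gluster:// URI in the KVM command line below):
Code:
glusterfs: pool
        server v10g
        volume pool
        content images,iso,backup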
KVM command line is:
Code:
/usr/bin/kvm -id 5001 \
  -chardev socket,id=qmp,path=/var/run/qemu-server/5001.qmp,server,nowait \
  -mon chardev=qmp,mode=control \
  -vnc unix:/var/run/qemu-server/5001.vnc,x509,password \
  -pidfile /var/run/qemu-server/5001.pid \
  -daemonize \
  -name WinSvrTest \
  -smp sockets=1,cores=2 \
  -nodefaults \
  -boot menu=on \
  -vga std \
  -no-hpet \
  -cpu kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep \
  -k en-us \
  -m 4096 \
  -cpuunits 1000 \
  -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 \
  -device usb-tablet,id=tablet,bus=uhci.0,port=1 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \
  -drive file=/mnt/pve/pool/template/iso/win2k8.iso,if=none,id=drive-ide2,media=cdrom,aio=native \
  -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 \
  -device ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7 \
  -drive file=gluster://v10g/pool/images/5001/vm-5001-disk-1.qcow2,if=none,id=drive-sata0,format=qcow2,aio=native,cache=none \
  -device ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0 \
  -netdev type=tap,id=net0,ifname=tap5001i0,script=/var/lib/qemu-server/pve-bridge \
  -device e1000,mac=C2:61:19:AF:70:63,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 \
  -rtc driftfix=slew,base=localtime \
  -global kvm-pit.lost_tick_policy=discard
Any ideas? Thank you.

jinjer
 
Yes, the Gluster addresses are in /etc/hosts. I'm using Gluster 3.4.1. The Gluster cluster itself works properly; there are no issues there.

I traced the problem down to the cache=none parameter that Proxmox sets as the default caching mode. writeback and writethrough both work properly. Writethrough is supposed to be safe and fast, so I'll leave it like this.

jinjer
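In case anyone else hits this: the cache mode can be changed per disk without hand-editing the config. A sketch using my VM ID and volume from above (adjust to your own):
Code:
# switch the existing sata0 disk from the default cache=none to writethrough
qm set 5001 --sata0 pool:5001/vm-5001-disk-1.qcow2,cache=writethrough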
 
I've seen that you are using qcow2. I don't know if that is supported by GlusterFS. Can anyone confirm? Or you can make a test with raw...
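For example, a quick test straight over libgfapi (assuming your qemu-img build has gluster support, which pve-qemu-kvm 1.7 should):
Code:
# create a small raw test image directly on the gluster volume
qemu-img create -f raw gluster://v10g/pool/images/5001/test.raw 1G
# and verify it can be read back
qemu-img info gluster://v10g/pool/images/5001/test.raw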
qcow2/raw sits one level higher than the FS, so that shouldn't matter... The problem is that I'm using ZFS under Gluster, and that does not like AIO + cache=none. I resorted to using writethrough or writeback for the KVM guests, and that makes for a good combination.
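For the record, the working disk line in /etc/pve/qemu-server/5001.conf now looks like this (the size value here is illustrative):
Code:
sata0: pool:5001/vm-5001-disk-1.qcow2,cache=writethrough,size=32G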
 
