Hello,
I have a standalone Proxmox node with several VMs on a ZFS volume, and it is working fine.
Now, on a separate two-node test cluster, I created two ZFS datasets. These two datasets will be replicated through GlusterFS across both nodes. Each dataset will store a separate VM, which will be continuously snapshotted through ZFS for backup.
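(The snapshotting itself is nothing special, just a recursive ZFS snapshot from cron, along these lines; the schedule and naming are only an example and don't matter for this problem.)
Code:
zfs snapshot -r zfspool1/images@backup-$(date +%Y%m%d-%H%M)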
I have configured the ZFS datasets with the following options:
compression=off
xattr=sa
sync=disabled
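For reference, I applied these roughly as follows (set on the pool root, so they are inherited by the child datasets shown further down):
Code:
zfs set compression=off zfspool1
zfs set xattr=sa zfspool1
zfs set sync=disabled zfspool1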
The GlusterFS volume is configured with default settings.
Below is the ZFS configuration:
Code:
zpool status
pool: zfspool1
state: ONLINE
scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        zfspool1    ONLINE       0     0     0
          sdb       ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sde       ONLINE       0     0     0
          sdf       ONLINE       0     0     0
errors: No known data errors
Code:
zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zfspool1             26.4G  12.7G  11.1G  /zfspool1
zfspool1/images      15.3G  12.7G    37K  /zfspool1/images
zfspool1/images/101  10.9G  12.7G  10.9G  /zfspool1/images/101
zfspool1/images/102  4.35G  12.7G   424K  /zfspool1/images/102
Gluster config is as follows:
Code:
Status of volume: glusterzfs
Gluster process                          Port   Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/zfspool1                 49155  Y       3057
Brick gluster2:/zfspool1                 49155  Y       2949
NFS Server on localhost                  2049   Y       3064
Self-heal Daemon on localhost            N/A    Y       3068
NFS Server on gluster2                   2049   Y       3259
Self-heal Daemon on gluster2             N/A    Y       3264
There are no active volume tasks
Code:
Volume Name: glusterzfs
Type: Replicate
Volume ID: 976e38ae-bd13-4adf-929a-b63bbb6ec248
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/zfspool1
Brick2: gluster2:/zfspool1
Options Reconfigured:
cluster.server-quorum-ratio: 55%
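For completeness, a replica-2 volume with this brick layout would have been created along these lines:
Code:
gluster volume create glusterzfs replica 2 transport tcp gluster1:/zfspool1 gluster2:/zfspool1
gluster volume start glusterzfs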
Mount points:
Code:
df -h
Filesystem                Size  Used  Avail  Use%  Mounted on
udev                       10M     0    10M    0%  /dev
tmpfs                     299M  468K   298M    1%  /run
/dev/mapper/pve-root      2.5G  2.0G   427M   83%  /
tmpfs                     5.0M     0   5.0M    0%  /run/lock
zfspool1                   24G   12G    13G   47%  /zfspool1
zfspool1/images            13G  128K    13G    1%  /zfspool1/images
zfspool1/images/101        24G   11G    13G   47%  /zfspool1/images/101
zfspool1/images/102        13G  512K    13G    1%  /zfspool1/images/102
tmpfs                     597M   50M   548M    9%  /run/shm
/dev/mapper/pve-data      4.5G  138M   4.3G    4%  /var/lib/vz
/dev/sda1                 495M   65M   406M   14%  /boot
/dev/fuse                  30M   24K    30M    1%  /etc/pve
172.21.3.3:/storage/nfs2  727G   28G   700G    4%  /mnt/pve/nfs1
gluster:glusterzfs         24G   12G    13G   47%  /mnt/pve/glusterzfs
Everything looks OK. I have also configured the GlusterFS storage in Proxmox under Datacenter -> Storage.
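The resulting entry in /etc/pve/storage.cfg looks roughly like this (the server name "gluster" matches the mount shown above):
Code:
glusterfs: glusterzfs
        server gluster
        volume glusterzfs
        content images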
But when I try to create a VM on this storage, I get the following error:
Code:
unable to create image: qemu-img: gluster://gluster/glusterzfs/images/102/vm-102-disk-4.qcow2: Could not read qcow2 header: Operation not permitted (500)
PVE version:
Code:
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
If I dd or create anything via the CLI, there is no problem; I can see that the files are replicated to both nodes.
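For example, this works fine and the file shows up on both bricks:
Code:
dd if=/dev/zero of=/mnt/pve/glusterzfs/test.img bs=1M count=100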
However, if I try to create a qcow2 disk directly with qemu-img, I get the same error.
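That is, the same kind of command PVE runs also fails when run by hand (the size here is just an example):
Code:
qemu-img create -f qcow2 gluster://gluster/glusterzfs/images/102/vm-102-disk-4.qcow2 32G
qemu-img: gluster://gluster/glusterzfs/images/102/vm-102-disk-4.qcow2: Could not read qcow2 header: Operation not permitted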
I tried the same scenario without ZFS, using an ext4 + GlusterFS combination on a plain HDD, and there is no problem there.
So it must be something in the ZFS + GlusterFS combination.
Has anyone had experience with this combination?
Thanks