Hello,
I have a 3-node cluster running Ceph, with a Ceph monitor and 6 OSDs on each node. Each OSD journal is mapped to an NVMe partition on a separate disk. Each node has a dedicated 10G NIC for the Ceph public network and a dedicated NIC for the Ceph cluster network.
On the same LAN as the Ceph public network lives a separate NFS server.
All of my VMs have their disks stored in the Ceph pool, and I want to back them up to the NFS server. When the backup job runs, I get about 30 MB/s, with or without compression. I have tried both a snapshot backup and a stopped-VM backup, with no difference. As I have peeled back the layers to find the bottleneck, it looks like VZDump itself is the issue.
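To put the gap in perspective, here is a rough back-of-the-envelope estimate (using a hypothetical 100 GiB disk and treating MB/s and MiB/s as interchangeable for simplicity):

```shell
# Rough estimate: minutes to stream a hypothetical 100 GiB disk image
# at the two observed rates (treating MB/s ~ MiB/s for simplicity).
size_mib=$((100 * 1024))                       # 100 GiB expressed in MiB
echo "vzdump     @  30 MB/s: ~$((size_mib / 30 / 60)) min"
echo "rbd export @ 395 MB/s: ~$((size_mib / 395 / 60)) min"
```

That works out to roughly 56 minutes per 100 GiB disk with VZDump versus about 4 minutes with a raw rbd export.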
If I run "rbd export ceph/vm-100-disk-1 /mnt/pve/nfsserver/vm-100-disk-1.raw" I get transfer speeds of 395 MB/s. This rules out Ceph, NFS, and the network as the issue.
Any idea what could be causing VZDump to be so much slower than the rbd export?
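For anyone wanting to cross-check each layer independently of VZDump, one sketch (with example paths; substitute your own NFS mount) is to time a raw sequential write to the backup target:

```shell
# Measure raw sequential write throughput to the backup target.
# /tmp is used as a stand-in here; point "target" at the NFS mount,
# e.g. /mnt/pve/nfsserver/throughput-test.bin, to test the real path.
target=/tmp/throughput-test.bin
dd if=/dev/zero of="$target" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
rm -f "$target"
```

If dd reports a rate close to the 395 MB/s seen with rbd export, the NFS write path is healthy and the slowdown sits in how VZDump reads and streams the image.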
I found an earlier thread but wasn't sure if it was related, because the link provided in that thread says the patches it mentions increased Ceph export speeds. I don't have issues with Ceph export speeds except through VZDump, if VZDump actually does an rbd export at all.
Thanks!
Current running versions:
# pveversion -v
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie