Hello,
We use a Ceph RBD cluster for our storage with Proxmox. Everything works fine, except when I take a snapshot of a running VM (I select the VM, open the 'Snapshot' tab, choose 'Take Snapshot', give it a name, check 'Include RAM' and hit 'Take Snapshot'). The problem is that the VM is unavailable during the snapshot, although it is not restarted or anything like that (the uptime is preserved, also in the guest OS).
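For reference, the CLI equivalent of that GUI action should be roughly the following (VMID 100 and the snapshot name 'test1' are just placeholders for this example; '--vmstate 1' is what 'Include RAM' maps to):

qm snapshot 100 test1 --vmstate 1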
On a customer's Proxmox cluster (with GlusterFS storage and qcow2 images) the VM loses at most 1 or 2 ICMP packets, which makes the downtime almost imperceptible. On our Ceph RBD storage it loses at least 7 ICMP packets during the snapshot. Is this a known issue or limitation of Ceph RBD? We don't manage Ceph through Proxmox (we set up the Ceph cluster ourselves and added it via the 'Storage' tab under 'Datacenter' in Proxmox, not via the host's 'Ceph' tab).
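For completeness, the storage definition in /etc/pve/storage.cfg looks roughly like this (the storage ID, monitor addresses and pool name below are made-up placeholders, not our real values; the keyring sits in /etc/pve/priv/ceph/<storage-id>.keyring):

rbd: ceph-rbd
        monhost 192.168.1.11 192.168.1.12 192.168.1.13
        pool rbd
        username admin
        content images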
Currently installed versions:
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Edit: this turned out to be a bug in Ceph, fixed in 0.94.6.
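For anyone running into the same thing: you can check which Ceph version your cluster is actually running with something like the commands below (double-check them on your own setup):

ceph --version          # version of the local ceph binaries/client
ceph tell osd.0 version # version reported by a specific OSD daemon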