Snapshot storage for a VM on CEPH

dpearceFL

Well-Known Member
Jun 1, 2020
Suppose I create a VM on CEPH storage (RAW) and I snapshot the VM. Where is the snapshot stored? Is it even a separate file?
 
Snapshots are an integral feature of a Ceph block device. A snapshot is not a separate file in the classic sense of "file"; you cannot find them via "ls" on a mounted filesystem. You can see (and also create/destroy) them on the CLI:
Code:
~# rbd snap ls ceph1/vm-1184-disk-0
SNAPID  NAME                 SIZE    PROTECTED  TIMESTAMP               
105249  auto-d-241031080716  18 GiB             Thu Oct 31 08:07:17 2024
105697  auto-d-241101080651  18 GiB             Fri Nov  1 08:06:52 2024
105865  auto-h-241101130949  18 GiB             Fri Nov  1 13:09:50 2024
105893  auto-h-241101140934  18 GiB             Fri Nov  1 14:09:35 2024
105921  auto-h-241101150919  18 GiB             Fri Nov  1 15:09:20 2024

See also: https://docs.ceph.com/en/reef/rbd/rbd-snapshot/
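For illustration, creating and deleting such a snapshot on the same image could look like this (the snapshot name "manual-test" is just an example):
Code:
~# rbd snap create ceph1/vm-1184-disk-0@manual-test
~# rbd snap rm ceph1/vm-1184-disk-0@manual-test
Note that snapshots created this way exist only in Ceph; PVE's own snapshot handling (and the VM config) will not know about them.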
 
Hi,
we are running Ceph RBD (squid 19.2.0) in a cluster setup with 5 nodes (pve-8.3.3).
When testing snapshots through the CLI, however, we could not do a rollback, while it seems to work via the GUI.


root@pve1-me:~# ceph version
ceph version 19.2.0 (3815e3391b18c593539df6fa952c9f45c37ee4d0) squid (stable)

root@pve1-me:~# pveversion
pve-manager/8.3.3/f157a38b211595d6 (running kernel: 6.8.12-6-pve)

root@pve1-me:~# rbd snap create pmoxpool01/vm-100-disk-0@snap_20250130_1
Creating snap: 100% complete...done.

root@pve1-me:~# rbd ls -l -p pmoxpool01
NAME                           SIZE    PARENT  FMT  PROT  LOCK
vm-100-disk-0                  30 GiB          2          excl
vm-100-disk-0@snap_20250130    30 GiB          2
vm-100-disk-0@snap_20250130_1  30 GiB          2

root@pve1-me:~# rbd snap rollback pmoxpool01/vm-100-disk-0@snap_20250130_1
Rolling back to snapshot: 0% complete...failed.
rbd: rollback failed: (30) Read-only file system


The snapshot created via the CLI does not appear in '/etc/pve/qemu-server/100.conf':

root@pve1-me:~# cat /etc/pve/qemu-server/100.conf
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=9.0.2,ctime=1738159040
name: tommie
net0: virtio=BC:24:11:32:51:C6,bridge=vmbr0v249
numa: 0
ostype: l26
parent: snap_20250130
scsi0: pmoxpool01:vm-100-disk-0,discard=on,mbps_wr=20,size=30G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=75333f37-e91a-4894-afad-66a6a47460bc
sockets: 1
unused0: VM-VMware:100/vm-100-disk-0.vmdk
vmgenid: a9ebfdb6-d068-4df3-b324-4b4cb6b60f5a

[snap_20250130]
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
machine: q35
memory: 4096
meta: creation-qemu=9.0.2,ctime=1738159040
name: tommie
net0: virtio=BC:24:11:32:51:C6,bridge=vmbr0v249
numa: 0
ostype: l26
scsi0: pmoxpool01:vm-100-disk-0,discard=on,mbps_wr=20,size=30G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=75333f37-e91a-4894-afad-66a6a47460bc
snaptime: 1738241169
sockets: 1
vmgenid: d0ff4518-c326-4eb2-84f0-96b802740c70
 
root@pve1-me:~# rbd snap create pmoxpool01/vm-100-disk-0@snap_20250130_1
I would recommend using the official PVE tooling. (You are manipulating the storage under the hood, so it is no surprise PVE does not recognize it.) The "(30) Read-only file system" on rollback is most likely the exclusive lock held by the running VM's QEMU process, note the "excl" in your rbd ls output.

Perhaps you will find a working solution in "man qm" --> snapshot.
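For example, a minimal sketch with the qm snapshot subcommands (VM ID 100 from your config; the snapshot name is illustrative):
Code:
~# qm snapshot 100 snap_cli_test      # creates the RBD snapshot AND the [snap_cli_test] section in 100.conf
~# qm listsnapshot 100                # lists the snapshots PVE knows about
~# qm rollback 100 snap_cli_test      # rolls back; PVE takes care of the running VM and the image lock first
~# qm delsnapshot 100 snap_cli_test   # removes the snapshot from Ceph and from the config
This way the Ceph snapshot and the entry in /etc/pve/qemu-server/100.conf stay in sync, and the rollback should not hit the read-only error.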