Hey.
We have a 4-node Proxmox/Ceph cluster (Ceph on 40G NICs, Proxmox interconnects on 10G NICs, internet on dual 1G NICs).
Ceph is 8x 2TB SM863a drives.
The problem is snapshots. I don't use them much, but I wanted to test before we allow clients on here:
This part is quick:
/dev/rbd4
saving VM state and RAM using storage 'Ceph-RBDStor'
1.51 MiB in 0s
836.81 MiB in 1s
1.69 GiB in 2s
2.57 GiB in 3s
3.49 GiB in 4s
4.33 GiB in 5s
5.13 GiB in 6s
5.93 GiB in 7s
6.88 GiB in 8s
7.83 GiB in 9s
8.76 GiB in 10s
9.67 GiB in 11s
10.59 GiB in 12s
11.50 GiB in 13s
12.37 GiB in 14s
13.23 GiB in 15s
==== Then it sat here for 8m 59s before printing anything else (it just finished); here is the rest of the log:
completed saving the VM state in 18s, saved 13.97 GiB
snapshotting 'drive-scsi0' (Ceph-RBDStor:vm-101-disk-1)
Creating snap: 10% complete...
Creating snap: 100% complete...done.
snapshotting 'drive-efidisk0' (Ceph-RBDStor:vm-101-disk-0)
Creating snap: 10% complete...
Creating snap: 100% complete...done.
snapshotting 'drive-tpmstate0' (Ceph-RBDStor:vm-101-disk-2)
Creating snap: 10% complete...
Creating snap: 100% complete...done.
TASK OK
==== What causes it to take that long?
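In case it helps narrow this down, here's roughly what I was planning to run to check cluster health, OSD latency, and raw pool throughput (the pool name is a placeholder; I'd substitute whatever pool backs 'Ceph-RBDStor'):

root@pve1-cpu1:~# ceph -s
root@pve1-cpu1:~# ceph osd perf
root@pve1-cpu1:~# rados bench -p <pool> 30 write --no-cleanup
root@pve1-cpu1:~# rados bench -p <pool> 30 seq
root@pve1-cpu1:~# rados -p <pool> cleanup

If the bench numbers look healthy, that would at least rule out raw pool speed as the bottleneck.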
root@pve1-cpu1:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-2-pve: 5.15.60-2
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-8
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
root@pve1-cpu1:~#
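One more thing I can try, to separate the vmstate save from the RBD snapshot creation itself: time each step from the CLI. Something like this (snapshot names are just placeholders I made up):

root@pve1-cpu1:~# time qm snapshot 101 cli-test --vmstate 1
root@pve1-cpu1:~# time rbd snap create <pool>/vm-101-disk-1@cli-test2
root@pve1-cpu1:~# rbd snap rm <pool>/vm-101-disk-1@cli-test2

If the bare rbd snap create is fast but qm snapshot with --vmstate 1 hangs, that would point at the state-save path rather than Ceph.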