Hi all,
I have three servers with Proxmox and Ceph; they have been running since the release of version 6.
This week I decided to upgrade from 6.4 to 7 and, following your guide, I first upgraded Ceph to Octopus, then PVE to 7, then Octopus to Pacific.
Everything seems to be fine, no errors, no warnings, except that VM performance is embarrassingly slow (in terms of disk reads and writes).
I tried rebooting, disabling HA, and powering off all the VMs except the one I'm testing, but none of these steps has resolved the issue.
Just to give you an idea of the performance, here is a dd command writing just 10 megabytes on a simple CentOS VM:
Code:
[cdi@serverg]$ dd if=/dev/zero of=/tmp/test1.img bs=1M count=10 oflag=dsync
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 4.95152 s, 2.1 MB/s
[cdi@serverg]$
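If a more controlled benchmark would help, I can also run fio inside the same VM; something along these lines (the test file path and size are just placeholders I would adapt):
Code:
# 4k synchronous writes for 30 seconds, direct I/O, single job
fio --name=synctest --filename=/tmp/fio-test.img --size=1G --bs=4k \
    --rw=write --ioengine=libaio --direct=1 --fsync=1 \
    --runtime=30 --time_based --numjobs=1 --group_reporting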
Here is the distribution of the SSDs on the three hosts (2 per host):
Code:
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 ssd 0.46579 1.00000 477 GiB 268 GiB 267 GiB 1.1 MiB 676 MiB 209 GiB 56.13 1.14 74 up
1 ssd 0.46579 1.00000 477 GiB 201 GiB 201 GiB 4.7 MiB 681 MiB 276 GiB 42.21 0.86 55 up
2 ssd 0.46579 1.00000 477 GiB 223 GiB 222 GiB 3 KiB 732 MiB 254 GiB 46.77 0.95 61 up
3 ssd 0.46579 1.00000 477 GiB 246 GiB 245 GiB 1.1 MiB 747 MiB 231 GiB 51.60 1.05 68 up
4 ssd 0.46579 1.00000 477 GiB 242 GiB 241 GiB 0 B 772 MiB 235 GiB 50.71 1.03 66 up
5 ssd 0.46579 1.00000 477 GiB 227 GiB 227 GiB 4.1 MiB 629 MiB 250 GiB 47.64 0.97 63 up
TOTAL 2.8 TiB 1.4 TiB 1.4 TiB 11 MiB 4.1 GiB 1.4 TiB 49.18
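If raw cluster numbers are useful, I can also run a short rados bench against the pool; something like this (the pool name is a placeholder for my actual RBD pool):
Code:
# 30-second write benchmark, keep the objects so a read test can follow
rados bench -p <pool> 30 write --no-cleanup
# sequential read benchmark on the objects written above
rados bench -p <pool> 30 seq
# remove the benchmark objects afterwards
rados -p <pool> cleanup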
Here are the package versions on all three servers:
Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-7
pve-kernel-helper: 7.0-7
pve-kernel-5.4: 6.4-6
pve-kernel-5.11.22-4-pve: 5.11.22-8
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 16.2.5-pve1
ceph-fuse: 16.2.5-pve1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-6
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-11
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-3
pve-firmware: 3.3-1
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
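In case the upgrade itself is suspect, I can also confirm which versions the Ceph daemons are actually running after the Octopus to Pacific step, for example with:
Code:
# version reported by every running mon/mgr/osd daemon
ceph versions
# overall cluster health and status
ceph -s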
Here is the Ceph config:
Code:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.170.31/24
fsid = c01cf831-1afa-430f-b8d7-96cd37903e0b
mon_allow_pool_delete = true
mon_host = 192.168.169.31 192.168.169.32 192.168.169.33
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.169.31/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon.geocluster1]
public_addr = 192.168.169.31
[mon.geocluster2]
public_addr = 192.168.169.32
[mon.geocluster3]
public_addr = 192.168.169.33
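I can of course also post the VM and storage configuration if that helps; for example (the VM ID is just a placeholder):
Code:
# configuration of the test VM
qm config <vmid>
# Proxmox storage definitions (RBD pool, krbd setting, etc.)
cat /etc/pve/storage.cfg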
Please let me know which files you need to better understand my infrastructure.
Thank you in advance for your help.
Digitalchild