Hello,
I’m new to Proxmox and have been struggling to figure out the best setup.
We have 3 identical servers to replace our current setup, hosting 2 Windows VMs and 3 Linux VMs, to give us some localised redundancy/HA. They were purchased with the intention of going hyperconverged with Proxmox.
Following the info from the forum, the Ceph benchmarks and the performance-tweak posts on the website, I have done the following, although as some of those posts are quite old I wasn’t sure whether the tweaks are still appropriate on newer versions of Proxmox and Ceph:
I have run hdparm -W 0 on all drives to disable the volatile write cache.
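Roughly what I ran on each node, for reference (device names are examples, check lsblk first; note it's a capital -W, as lowercase -w is a dangerous low-level device reset):

# Disable the volatile write cache on each SATA SSD
for dev in /dev/sd{a,b,c,d}; do
    hdparm -W 0 "$dev"
done
# Verify - should report "write-caching = 0 (off)"
hdparm -W /dev/sda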
OSDs created without encryption, although I plan to recreate them with encryption.
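My rough plan for the encrypted recreation, one OSD at a time, as far as I understand the pveceph tooling (OSD id and device are just examples):

# Take the OSD out and wait for the data to migrate (watch ceph -s)
ceph osd out osd.0
# Once the cluster is healthy again, stop and destroy it
systemctl stop ceph-osd@0
pveceph osd destroy 0 --cleanup 1
# Recreate it with dm-crypt encryption
pveceph osd create /dev/sda --encrypted 1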
Cephx disabled: this slowed down the rados bench results.
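For reference, cephx was disabled with the usual settings in the [global] section of /etc/pve/ceph.conf, followed by a restart of all Ceph daemons:

auth_cluster_required = none
auth_service_required = none
auth_client_required = none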
Disabled debug logging in the configuration.
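These are the debug levels I zeroed in /etc/pve/ceph.conf, copied from the older tweak threads; I'm not sure they all still matter on Octopus:

[global]
debug_ms = 0/0
debug_mon = 0/0
debug_osd = 0/0
debug_auth = 0/0
debug_bluestore = 0/0
debug_bluefs = 0/0
debug_rocksdb = 0/0
debug_rbd = 0/0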
Set the Ceph pool's storage to use krbd.
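i.e. the RBD entry in /etc/pve/storage.cfg now looks roughly like this (storage and pool names are mine):

rbd: ceph-vm
    content images
    krbd 1
    pool ceph-vm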
Moved the DB/WAL to spare space on the 970 Evo (4 x 30GB for the 4 OSDs), but that slowed down the Ceph pool, and I realised it creates quite a serious single point of failure.
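For completeness, the DB/WAL placement was done at OSD-creation time, roughly like this (device names and the 30GB size are from my setup, syntax from memory):

pveceph osd create /dev/sda --db_dev /dev/nvme0n1 --db_size 30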
Any advice would be great. Hardware and versions below:
Servers: CyberServe AMD EPYC EP1-108S
CPU: AMD EPYC 7282
Mainboard: H11SSL-i (rev2)
Network: Dual Gigabit LAN; Intel 10/40 Gigabit XL710-QDA2 dual-port QSFP+, meshed for Ceph (24.5 Gb/s)
Memory: 32GB 2933MHz
Disks: 1x 250GB Samsung 970 Evo NVMe, 4x 1.9TB Intel SSD S4610 Mainstream DataCentre SATA3
TPM 2.0 - Trusted Platform Module with TCG 2.0
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.11-pve1
ceph-fuse: 15.2.11-pve1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-2
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.6-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-4
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-3
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1