Hi Alwin,
The attachment to the initial post contains the output of lscpu showing "NUMA node(s): 1"
qperf was indeed new to me - will have a look at it.
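In case it is useful for anyone else reading along: qperf works with a plain listener on one node and the actual measurement started from another. A minimal sketch, assuming 10.33.0.15 is again the target node on the storage network:
root@proxmox05:~# qperf                              # start the plain listener on the node to test against
root@proxmox04:~# qperf 10.33.0.15 tcp_bw tcp_lat    # run the TCP bandwidth and latency tests from the other node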
Took some time, but found the difference between the hosts: On the first two from previous tests some /etc/sysctl.d/ parameter files had been left behind...
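For anyone who wants to rule out the same thing: dumping all drop-in parameters per node and comparing them should make such leftovers visible, something along these lines (standard paths, nothing Proxmox-specific):
root@proxmox04:~# grep -r . /etc/sysctl.d/ /etc/sysctl.conf    # list every parameter file and the values it sets
root@proxmox04:~# sysctl --system                              # reload and print which files actually get applied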
Results from the overnight run
proxmox05 seems flaky, proxmox06 seems to use less CPU than the rest. We need to check for configuration mismatches, as it seems the nodes still have differences in their configuration.
Adjusted the network settings according to the AMD Network Tuning Guide.
We did not use NUMA adjustments as we only have single socket CPUs.
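For reference, such settings typically end up in a sysctl drop-in. The snippet below is only an illustrative sketch of the kind of buffer tuning such a guide recommends - the file name and values are placeholders, the real numbers should be taken from the AMD Network Tuning Guide itself:
root@proxmox04:~# cat /etc/sysctl.d/90-network-tuning.conf
# illustrative values only - use the ones from the AMD Network Tuning Guide
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
net.ipv4.tcp_rmem = 4096 87380 268435456
net.ipv4.tcp_wmem = 4096 65536 268435456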
So this is a Zabbix screen over the three nodes. Details for the time ranges:
- From 07:00 - 10:25: Performance test running over the weekend. proxmox04...
The first test is to see if the network configuration is in order. The 100 GBit link is 4 network streams combined, so at least 4 processes are required to test for the maximum.
root@proxmox04:~# iperf -c 10.33.0.15 -P 4
------------------------------------------------------------
Client connecting to...
Hi everybody,
we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a ProxMox Ceph configuration.
We purchased 8 nodes with the following configuration:
- ThomasKrenn 1HE AMD Single-CPU RA1112
- AMD EPYC 7742 (2.25 GHz, 64-core, 256 MB)
- 512 GB RAM
- 2x 240GB SATA...
Thanks for this second benchmark! It gives a clear impression of what should be achievable with current hardware.
I am currently trying to run that exact benchmark setup on a three node cluster and have problems running three rados bench clients simultaneously.
Can you adjust the PDF and give...
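In case it matters for the comparison: my understanding is that simultaneous rados bench writers should each get their own --run-name so their benchmark objects do not interfere. What I am currently trying looks roughly like this (node names, pool name, runtime and thread count are just placeholders from my setup):
root@proxmox04:~# rados bench -p bench 600 write -t 16 --no-cleanup --run-name client1
root@proxmox05:~# rados bench -p bench 600 write -t 16 --no-cleanup --run-name client2
root@proxmox06:~# rados bench -p bench 600 write -t 16 --no-cleanup --run-name client3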
I'll chime in here, because I ran into the same error after the upgrade from 5.4 to 6.1.
The configuration had already been working before the upgrade; after the upgrade, "Failed to send data to Zabbix" appeared and the Ceph status then went to Warning.
The problem in our case was the still missing...
Hi,
after reading your posts I reconfigured my Windows 7 and Server 2008r2 systems to use SATA on Ceph RBD storage.
...
bootdisk: sata0
ostype: win7
sata0: ceph-proxmox-VMs:vm-106-disk-0,cache=writeback,discard=on,size=30G
scsihw: virtio-scsi-pci
...
Using...
So the next conversion with another VM just went smoothly with only two reboots.
Steps:
Create new VM on Proxmox
Create IDE raw hard disk on the NetApp NFS storage, same size as on the machine to be converted
Create one small SCSI qcow2 Disk to be able to properly install VirtIO-SCSI drivers...
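Roughly the CLI equivalent of those first steps, in case someone wants to script it - VMID, name, memory and disk sizes below are only placeholders:
root@proxmox01:~# qm create 9001 --name win-convert --ostype win7 --scsihw virtio-scsi-pci --memory 4096
root@proxmox01:~# qm set 9001 --ide0 netapp03-DS1:40,format=raw     # raw IDE disk on the NetApp NFS storage, size matching the source VM
root@proxmox01:~# qm set 9001 --scsi1 netapp03-DS1:1,format=qcow2   # small qcow2 SCSI disk just to get the VirtIO-SCSI driver installed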
By setting the cache on the hard disk from "Default (no cache)" to "Write through", the live migration now works... WTF?!?!
So, cool!!! Thanks Udo for the excellent hint!
I will now try yet another VM to check whether I can reproduce the minimized-downtime approach.
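For the record, the same cache change should also be possible from the CLI; assuming it is the scsi1 disk of VM 111 from the migration log (netapp03-DS1:111/vm-111-disk-0.raw), something like:
root@proxmox01:~# qm set 111 --scsi1 netapp03-DS1:111/vm-111-disk-0.raw,cache=writethrough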
Hi Udo,
cool suggestion! Tried it...
So the filesystem is actually accessible within the VM - but the disk move now borks out with a different error:
Online Migration:
create full clone of drive scsi1 (netapp03-DS1:111/vm-111-disk-0.raw)
drive mirror is starting for drive-scsi1
drive-scsi1...
It seems the issue with the failing offline copy is that the destination raw file gets created with the wrong size.
From the gui:
Virtual Environment 5.3-8
Storage 'ceph-proxmox-VMs' on node 'proxmox01'
Logs
create full clone of drive scsi0 (netapp03-DS1:111/vm-111-disk-0.vmdk)...
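To narrow that down, comparing the virtual size of the source with what gets created on the destination should be enough; for the source disk something like:
root@proxmox01:~# qemu-img info /mnt/pve/netapp03-DS1/images/111/vm-111-disk-0.vmdk    # the reported 'virtual size' is what the destination raw image has to match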
Hi,
moving it offline to the final destination also works
root@proxmox01:/var/lib/vz/images# /usr/bin/qemu-img convert -p -f vmdk -O raw /mnt/pve/netapp03-DS1/images/111/vm-111-disk-0.vmdk...
Hi,
thanks for getting back so quickly!
Converting onto same storage:
Virtual Environment 5.3-8
Virtual Machine 111 (win2008r2) on node 'proxmox01'
Logs
scsi0
create full clone of drive scsi0 (netapp03-DS1:111/vm-111-disk-0.vmdk)
Formatting...
Hi,
tried to move a disk just now with the current ProxMox (5.3-8).
Online:
Virtual Environment 5.3-8
Virtual Machine 111 (win2008r2) on node 'proxmox01'
Logs
create full clone of drive ide0 (netapp03-DS1:111/vm-111-disk-0.vmdk)
drive mirror is starting for drive-ide0
drive-ide0: transferred...
Hi virtRoo,
after starting this thread yesterday I upgraded to 5.3-8. I will try another VM today...
I understand that I am moving from one virtualisation platform to another, so I have no problem with a short downtime. But most of my live VMs are rather big and I need some best practice...
Hi,
we built a three-node ProxMox cluster with Ceph as the storage backend for our test VMs. They currently reside on a two-node VMware cluster with an old NetApp as the storage backend.
The plan is to try to migrate/convert - as preparation for a possible migration of our productive VMs - with minimal...