10Gb Configuration

compuls1v3

New Member
Aug 19, 2021
So I've configured 3 nodes, each with 2 slow spinning disks and 2 SSDs. There is one Ceph group for the SSDs. Each server has 4x 1Gb NICs, with one port configured for management, plus one dual-port 10Gb NIC configured for the cluster network. My question is: when I do a migration (with insecure mode enabled), should I be getting faster speeds than this?
[screenshot: migration task log showing the transfer speed]

Also, how do I increase the migration cache size?
 
You seem to be getting around 320 MiB/s on average; multiplied by 8 (bits per byte) that is roughly 2.7 Gbps. Whether that is the upper limit of what's practically possible depends on which network you configured as the migration network (Datacenter -> Options) and what other traffic is happening on it.
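For reference, a minimal sketch of how a dedicated migration network is usually pinned in /etc/pve/datacenter.cfg, assuming the 10Gb cluster subnet is 10.10.10.0/24 (the CIDR here is only an example, adjust it to your own addressing):

    # /etc/pve/datacenter.cfg
    # route migrations over the 10Gb subnet and use the faster, unencrypted transport
    migration: type=insecure,network=10.10.10.0/24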

Note that available memory and the CPU speed available for QEMU's guest-state serialization can also have quite an impact and may be the actual limiting factor.
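To rule out the network itself before looking at CPU or memory, you could measure raw throughput between the two nodes with iperf3; the address below is just a placeholder for the target node's 10Gb interface:

    # on the target node
    iperf3 -s
    # on the source node, pointing at the target's 10Gb address
    iperf3 -c 10.10.10.2 -t 30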
 
Thank you for the reply. I changed the Maximal Workers setting from 4 to 64, and now it transfers within a few seconds.
[screenshot: migration task log after the change]
 
That honestly makes no sense to me. The worker setting only determines how many parallel start/stop/migrate actions can be run by the HA stack or when triggering a bulk action; it does not affect a single migration at all.
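If the goal is to influence the speed of a single migration, the knob that actually applies is the migration bandwidth limit in /etc/pve/datacenter.cfg; a sketch is below, where the value is only an illustrative cap in KiB/s:

    # /etc/pve/datacenter.cfg
    # per-migration bandwidth cap in KiB/s (raise or remove it if migrations seem throttled)
    bwlimit: migration=1048576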
 