> Can 7.3 participate in a cluster with 7.4, does anyone know?

Yes, but that would only impact migrations, not performance.
> maybe we should go back to basics.

The hardware description is in the original post, unless you need more info than that.
Please post your vmid.conf. Also useful would be a description of your hardware.
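In case it's easier, the full config can be dumped straight from the Proxmox host - the VMID 100 below is just a placeholder for your actual VM:

qm config 100
cat /etc/pve/qemu-server/100.conf

Either one shows the sockets/cores/memory/disk settings we'd want to look at.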
The hardware is two HP DL380 Gen 9 servers with 128GB of RAM and a couple of mirrored 1TB hard disks for booting the OS (the servers came with the drives, so we just used what we had - overkill, but whatever). The storage they are connected to is a TrueNAS machine running on one of the Dell R510s with a PERC H200 in IT mode, a ZFS RAIDZ1 of 10x 3TB drives, and a cache VDEV on a 120GB SSD.
UPDATE: I moved one of the VMs to local storage, eliminating the shared storage, and it is still very slow, so now I'm at a loss - a Windows 10 VM running on the HP hardware and the local RAID 10 is still very slow.
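For what it's worth, a rough way I can compare the raw speed of the local RAID 10 against the NFS share from the host itself is a quick fio run against each one - the paths below are just placeholders for wherever each storage is mounted:

fio --name=seqwrite --filename=/path/to/storage/fio-test --size=1G --bs=1M --rw=write --ioengine=libaio --direct=1 --runtime=30 --time_based
fio --name=randread --filename=/path/to/storage/fio-test --size=1G --bs=4k --iodepth=32 --rw=randread --ioengine=libaio --direct=1 --runtime=30 --time_based

If both backends look sane there, the bottleneck is more likely inside the guest than in the storage.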
> Apologies, didn't see that. What kind of disks are in the local RAID10?

HP Enterprise 7.2k SAS drives.
> The only thing I see that gives me pause is the 2 sockets in your config. Set it to 1, and enable NUMA. It doesn't look like RAM ballooning is enabled, but double-check that it isn't.

Yes, I believe so - I'm not running Windows 10 on the one that I'm not having problems on either. Those are Windows 7 and Windows XP guests - and I just installed a Windows 7 guest on this one and it's lightning fast as well - so it's something with the Windows 10.
Beyond that, the slowness you describe is probably in the guest itself. To verify, install a fresh Windows VM and compare the behavior.
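For reference, in the vmid.conf that advice would look something like the lines below - the core count is just an example, not your actual config:

sockets: 1
cores: 4
numa: 1
balloon: 0

balloon: 0 disables ballooning entirely so the guest keeps its full memory allocation, and numa: 1 exposes a NUMA topology to the guest.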
> How many similar VMs with 2x4 cores and 16GB of memory do you run on 128GB, and how many physical cores? Why the two virtual sockets without enabling NUMA? Does your system have NUMA and/or multiple sockets? Note that ZFS takes half of your memory unless you limit it, and Proxmox works best when the VM allocations are less than 80% of memory.

The majority are 1x4 cores and most are between 4 and 8GB of memory - only the servers are allocated up to 32GB. We have 64GB allocated to the SQL server we are running.
> The majority are 1x4 cores and most are between 4 and 8GB of memory - only the servers are allocated up to 32GB. We have 64GB allocated to the SQL server we are running.

I read your reply as a few Windows Servers with 32GB, plus one with 64GB, plus several desktop VMs with on average 6GB. That's way more than the 62GB you have available per host (after 64GB for ZFS and 2GB for Proxmox).
> I read your reply as a few Windows Servers with 32GB, plus one with 64GB, plus several desktop VMs with on average 6GB. That's way more than the 62GB you have available per host (after 64GB for ZFS and 2GB for Proxmox).

The Proxmox host itself is not running ZFS; that's running on the shared storage - a different server.
If you don't want to share the exact numbers, maybe you can add up the allocated memory yourself and check zfs_arc_max? Proxmox does not work well when over-committing memory.
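On whichever box is actually running ZFS, the current ARC numbers can be read directly - these are the ZFS-on-Linux paths; TrueNAS has its own tunables:

# configured ARC cap in bytes (0 means the default of roughly half of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max
# current ARC size
grep ^size /proc/spl/kstat/zfs/arcstats

On a Proxmox host with local ZFS you could cap it with a line like "options zfs zfs_arc_max=8589934592" (8 GiB, just an example) in /etc/modprobe.d/zfs.conf, followed by update-initramfs -u and a reboot - but that only matters if ZFS is actually on the host.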
If you have already tried running just a single (small) VM on a host and that was still very slow, then it's probably not the memory (yet). Apologies if you already ruled this out and I missed it in this thread. I'm just a random stranger on the internet and guessing here.
> so it's something with the Windows 10.

Pretty unlikely. The forums are full of user reports of Windows 10 guests without issue; I myself don't virtualize Windows workstation guests, so I can't speak directly.
> Pretty unlikely. The forums are full of user reports of Windows 10 guests without issue; I myself don't virtualize Windows workstation guests, so I can't speak directly.

I can't speak to Proxmox and Windows 10, but I can say with certainty that it is an issue on VMware ESXi, and in this install the finger is starting to point strongly in Windows 10's direction. I am about to do a fresh install of Windows 10 on my 7.3 install and see if I have similar issues.
> What local storage controller is on your HP DL380 Gen 9?

Just the integrated RAID controller, but the design doesn't call for the VMs to run off the internal storage. There is a 10GbE NIC on a separate 10GbE network going to a TrueNAS NFS share for shared storage. We moved it to local storage to try to eliminate the shared storage as a possibility.
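Since the shared storage rides on that 10GbE link, a quick way we can rule the network in or out (assuming iperf3 is available on both ends) is a run between the host and the TrueNAS box - the address below is just a placeholder for the storage-network IP:

# on the TrueNAS side
iperf3 -s
# on the Proxmox host
iperf3 -c 192.168.10.10 -t 30

nfsstat -m on the host will also show the actual NFS mount options in use.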
Does the host use swap?

> couple of mirrored 1TB hard disks for booting the OS

Is that set up with ZFS or a hardware controller?
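Swap usage is quick to check on the host:

free -h
swapon --show
# si/so columns show pages being swapped in/out
vmstat 1 5

If si/so stay at zero while a VM is being slow, host swapping probably isn't the culprit.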