Move all disks to VirtIO Block with IO thread: ON, and set the SCSI Controller to Default (if it is set to VirtIO SCSI Single, even with no disks attached to it, system disks will randomly hang). Looks like it's a weird bug in the VirtIO SCSI drivers/emulation in Proxmox 8.
Temporary solution:
https://github.com/virtio-win/kvm-guest-drivers-windows/issues/623#issuecomment-2041014083
Because with VirtIO SCSI it is absolutely unstable for now.
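For reference, a minimal sketch of the change from the CLI, assuming a hypothetical VMID 100, a storage named local-zfs, and a disk currently attached as scsi0 (adjust all names to your setup):

# Detach the disk from the SCSI controller (it becomes an unused volume):
qm set 100 --delete scsi0
# Reattach it as VirtIO Block with a dedicated IO thread:
qm set 100 --virtio0 local-zfs:vm-100-disk-0,iothread=1
# Set the SCSI controller back to Default (LSI 53C895A):
qm set 100 --scsihw lsi

Note that a Windows boot disk needs the VirtIO Block (viostor) driver installed before the switch, or the guest won't find its disk.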
I have exactly the same errors on two new Proxmox 8 servers:
First server:
96 x AMD EPYC 9454P 48-Core Processor (1 Socket), 384 GB RAM, 2 x INTEL SSDPF2KX038T1
Windows Server 2019 + MSSQL 2019
pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.13-3-pve)
pve-manager: 8.1.10 (running version...
Hi, I have set up everything as described on Proxmox 8 and made a few tests, logging in to the GUI with a wrong password, but it didn't block the IP at all. Have you tested this solution?
fail2ban-client status proxmox
Status for the jail: proxmox
|- Filter
| |- Currently failed: 0
| |- Total failed: 0
| `-...
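In case it helps anyone debugging the same thing, this is a sketch of the jail/filter pair I would expect to match pvedaemon's log line; the paths and values are my assumption based on the commonly shared Proxmox fail2ban config, and on Proxmox 8 the auth failures may only land in the systemd journal, so backend = systemd (instead of logpath = /var/log/daemon.log) is worth trying:

/etc/fail2ban/jail.d/proxmox.conf:

[proxmox]
enabled = true
port = https,http,8006
filter = proxmox
backend = systemd
maxretry = 3
bantime = 3600

/etc/fail2ban/filter.d/proxmox.conf:

[Definition]
failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.* msg=.*
journalmatch = _SYSTEMD_UNIT=pvedaemon.service

After editing, restart with systemctl restart fail2ban and re-check fail2ban-client status proxmox; Total failed should increment on a bad login.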
It's a ZFS pool on two NVMe Samsung PM983 3.84 TB drives; there are other VMs (but no LXC containers) on the same pool, and their 100 GB disks back up in a few seconds...
I have already tried local and CIFS storage, same issue...
Trying to make a backup of an LXC container (installed from debian-11-standard_11.0-1_amd64.tar.gz), but it just hangs on this step forever:
INFO: starting new backup job: vzdump 102 --compress zstd --mode snapshot --remove 0 --node pve --storage storage
INFO: filesystem type on dumpdir is 'cifs'...
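One way to narrow it down (a sketch reusing the same job with only the mode changed, not something I've confirmed fixes it) is a stop-mode backup, which skips the snapshot step entirely:

vzdump 102 --compress zstd --mode stop --remove 0 --node pve --storage storage

If that completes, the hang is likely in the snapshot/freeze step rather than in the storage itself.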
Same on two 5950X servers... And the same problems show up on the Unraid forums; it seems to be a bug with FreeBSD and Linux guests under QEMU. Switching the CPU type to EPYC-Rome fixes that issue (sketch below).
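A minimal sketch of that change, assuming a hypothetical VMID 100 (EPYC-Rome is one of the CPU models Proxmox exposes; the same command applies to the other models mentioned below, e.g. IvyBridge-IBRS or kvm64):

qm set 100 --cpu EPYC-Rome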
I have changed the processor type to the exact architecture of the host (but not "host") - IvyBridge-IBRS:
IvyBridge:
It seems that the benchmarking software has problems with how it derives its results, or (and) there is some disagreement between the benchmark and systems patched with Spectre/Meltdown and other mitigations under...
I think it's definitely a bug in the benchmarking software. I changed the processor type from host to kvm64 and got very weird results:
You need more tests in a real working environment (not in benchmarks) to make sure the performance hit is not as big as you think.
I can agree that Proxmox is very good (even ideal) for virtualizing UNIX-like systems. But with Windows OSes you can get unexpected (likely bad) results. VMware is not much better in that regard (the performance hit is still there); native Hyper-V is the best choice for a Windows VM. But Hyper-V is...
Hyper-V guest VM (2019):
Bare Metal:
So, if you need near (or even better than) bare-metal results in AIDA64 benchmarks, you need a Hyper-V host server )))
I have tested in WinRAR:
Proxmox (6.3): ~7600 KB/s
Hyper-V (2019): ~8100 KB/s
Bare metal: ~8600 KB/s
Looks like a ~13% performance hit (8600 / 7600 ≈ 1.13), not 20x as AIDA64 shows. So I think it's an AIDA64 measurement bug inside a VM.
Hmmm... I have tested on my system (WS2019) and got very low speeds (almost 20x lower than on bare metal):
So I think it's an AIDA64 bug; otherwise, with speeds like that, WS2019 would take a very long time to boot. In my case it boots in around 3-5 seconds...