Thanks for testing that. You all are right, and I could replicate it. Still, the performance in the Windows VM was sluggish and just overall slow. Cinebench was fine, though. I would like to do more testing, but honestly I don't know what to do anymore, so I just threw more money at the problem and bought...
As far as I know (didn't test it myself), ESXi has no major performance penalty for Windows or Unix VMs, so it's the best compromise between both worlds. I would really like someone from the Proxmox staff to look into this, as it's a major problem in the hypervisor. It can't be the case that the...
Even if AIDA has a problem there, that doesn't explain how three different tests in the Phoronix Test Suite take roughly ten times longer than in a Linux VM. Makes no sense. Proxmox has a problem, and I'm not the only one "complaining" about sluggish performance, even with all drivers and GPU...
It's not. The Phoronix Test Suite shows the same bad performance across a variety of tests, at least for me.
Proxmox just has really bad Windows VM performance, and I, for my part, am going to switch to ESXi, which is a shame, because then I need another VM for the ZFS stuff.
I see. Then let's find out how to replace glibc, lol.
I didn't see any downsides so far, but I didn't see any upsides either. The Windows VM is still slow as heck, and gaming is still impossible. Nice try, though :)
Is this expected output?
root@pve:~/temp/5.11.5_westmere# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.11.5-1
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/cmdline found...
Thanks m8, appreciate it!
Will test the kernel out, but first I have some questions:
What do we expect to happen with this new kernel?
Are there drawbacks?
Can I update in the normal fashion, or will that remove the new kernel?
Does this mess with ZFS?
Do I need to set up PCIe passthrough again?
And...
I really appreciate your effort, but I don't know if I want to run a custom kernel from someone on the interwebz, lol. Maybe you can first tell me what you customized?
Thanks anyway, and have a good sleep; till l8r :)
Yeah, I tried pretty much everything I could think of. The latest thing I did was disable IPv6 in the VM and update to the latest VirtIO drivers (v0.190). That didn't do the trick. I checked the BIOS about twelve million times; nothing I can see. Hugepages enabled or disabled makes no difference for this problem. ZFS is limited...
In theory, and that's all it is, a theory, you can just grow sda2 and sdb2 to your desired size and leave the rest alone. After that, try again with zpool online -e rpool sda2, and repeat the command for sdb2.
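To make that concrete, the steps could look roughly like this. A sketch only: it assumes a mirrored rpool on sda2/sdb2 whose partitions have already been resized, and the device names are examples; use whatever zpool status rpool actually shows.

```shell
# Sketch: expand rpool after growing its partitions.
# Optional: let the pool grow automatically when a vdev gets bigger.
zpool set autoexpand=on rpool

# Tell ZFS to claim the newly available space on each vdev.
zpool online -e rpool sda2
zpool online -e rpool sdb2

# Verify: SIZE should now reflect the larger partitions.
zpool list rpool
```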
Ohh, so you want to extend the rpool?
Well, I don't know if the instructions work as intended in that case. I thought you wanted to expand a separate pool besides rpool. That makes things complicated, at least for me, as I haven't done that myself.
To have a "clean base", as I don't know what exactly you have done to the disks before.
You said you replicated the partition tables and such, so yeah... In my opinion that's completely unnecessary. If you want to expand, just put clean disks in, with no GPT or anything on them, and let ZFS handle...
That sounds like a really complicated way of doing what you want.
Here is what I personally would do if I were in your situation:
Pull out one disk and clear it completely: no GPT, no partitions.
Throw it back in, and use the string given by ls /dev/disk/by-id to replace the disk.
For...
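The replace step described above could look something like this. A sketch only: the ata-... names are placeholders, not real device IDs, so substitute the strings from your own system.

```shell
# List the stable disk IDs; pick the old and the new disk from here.
ls -l /dev/disk/by-id/

# Wipe all filesystem/partition signatures from the cleared disk
# (DANGEROUS: triple-check this is the pulled disk, not a pool member!).
wipefs -a /dev/disk/by-id/ata-EXAMPLE_NEW_DISK

# Let ZFS swap the old device for the clean one and resilver.
zpool replace rpool ata-EXAMPLE_OLD_DISK /dev/disk/by-id/ata-EXAMPLE_NEW_DISK

# Watch the resilver progress.
zpool status rpool
```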
sherminator is right. You can test that scenario easily on your own PC: copy a file from one folder to another on the same HDD and you will see the speed crippled. On an SSD the impact is not as big, but it's still noticeable. It's just the worst case for HDDs.
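If you want to see the effect in numbers, a rough comparison could look like this; /mnt/hdd/bigfile is a placeholder path, so point it at any large file on the HDD.

```shell
# Baseline: sequential read alone; the heads barely move.
dd if=/mnt/hdd/bigfile of=/dev/null bs=1M

# Same-disk copy: the heads must constantly seek between the read
# position and the write position, which cripples HDD throughput.
dd if=/mnt/hdd/bigfile of=/mnt/hdd/bigfile.copy bs=1M

# Clean up the test copy.
rm /mnt/hdd/bigfile.copy
```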