On all layers?
There are settings in the guest as well as on the host and also on your client. Just to make sure...
Also, some people suggest turning off NCQ on the disk (on the host) when using ZFS, as well as disabling queueing - all of that typically makes things slower but more predictable.
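If you want to experiment with that on Linux, the NCQ depth of a SATA disk is usually exposed via sysfs; here is a minimal sketch (the device name is just an example, not from your setup):

```python
#!/usr/bin/env python3
"""Inspect / reduce the NCQ queue depth of a SATA disk via sysfs.
Setting the depth to 1 effectively disables NCQ. Writing requires root.
The device name below is only an example -- adjust for your system."""

from pathlib import Path

DEV = "sda"  # example device, placeholder
path = Path(f"/sys/block/{DEV}/device/queue_depth")

print("current queue depth:", path.read_text().strip())

# Uncomment to actually disable NCQ (depth of 1); lasts only until reboot.
# path.write_text("1\n")
```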
I'm...
This kind of saw-tooth graph typically indicates that some caching is taking place. Once the cache is saturated the performance drops; once the cache is emptied or reaches its low watermark, it starts buffering again. I have seen this literally a dozen times. Caching can happen on the filesystem...
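To make that concrete, here is a toy model (not tied to ZFS or any particular cache, just the watermark idea) that produces exactly this kind of saw-tooth:

```python
#!/usr/bin/env python3
"""Toy model of a write cache with a high/low watermark.
While the cache has room, writes land at 'cache speed'; once it is
saturated, throughput drops to the backing device speed until the cache
drains below its low watermark -- which is what draws the saw-tooth."""

CACHE_SIZE = 100      # arbitrary units
LOW_WATERMARK = 20
CACHE_SPEED = 10      # units absorbed per tick while buffering
DISK_SPEED = 3        # units the backing device drains per tick

fill = 0
caching = True
for tick in range(40):
    if caching:
        observed = CACHE_SPEED
        fill = min(CACHE_SIZE, fill + CACHE_SPEED - DISK_SPEED)
        if fill >= CACHE_SIZE:
            caching = False          # cache saturated -> throughput drops
    else:
        observed = DISK_SPEED
        fill = max(0, fill - DISK_SPEED)
        if fill <= LOW_WATERMARK:
            caching = True           # drained -> buffering starts again
    print(f"t={tick:2d}  observed throughput={observed:2d}  cache fill={fill:3d}")
```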
I guess you are seeing write amplification issues, plus the fact that ultimately you end up with random I/O, which an HDD is not particularly good at.
A single vdev in the pool means everything goes onto it, including the logging of ZFS...
Does the achieved speed give you trouble?
Or are you just concerned...
Start reading this article:
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
What you want to do is a p2v migration.
There are various options. Likely you will need two or even more tries but you will get there ;)
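Just to give you an idea of the shape of it - a rough sketch, assuming you already have an image of the source disk (paths, VM ID and storage name are placeholders; the wiki article is the real reference for the full procedure, drivers, boot order and so on):

```python
#!/usr/bin/env python3
"""Rough p2v sketch: convert an existing disk image and import it into a
Proxmox VM. Paths, VM ID and storage name are placeholders."""

import subprocess

SRC_IMAGE = "/mnt/backup/old-server.vmdk"   # placeholder path
RAW_IMAGE = "/var/tmp/old-server.raw"       # placeholder path
VMID = "101"                                # placeholder VM ID
STORAGE = "local-lvm"                       # placeholder storage name

# Convert the source image to raw (qemu-img also reads vmdk, vhdx, ...).
subprocess.run(["qemu-img", "convert", "-p", "-O", "raw", SRC_IMAGE, RAW_IMAGE],
               check=True)

# Import the raw disk into the VM; it shows up as an unused disk that you
# then attach and set as the boot device in the VM's hardware settings.
subprocess.run(["qm", "importdisk", VMID, RAW_IMAGE, STORAGE], check=True)
```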
It is important to be precise, especially in this kind of discussion.
I wasn't so sure about that, hence I wrote it again.
I am spending time trying to help, and now you are complaining about me trying to do so?
Bummer...
You won't, because you can't use 16 threads within the VM on a 16-thread system.
You will always compete with the host. Whatever you think the system is doing - that is not how a hypervisor operates. Please - I don't want to mess with you or play games. I am just trying to explain that you have a fundamental glitch...
That depends on the workload as well.
Threads do give you some benefit, but if it is 25% then that is already a lot.
Let's assume 25% though.
So (totally oversimplified): 8 threads * 0.25 = 2 cores.
Those 2 cores plus the 8 real cores:
the total core equivalent is 10 cores - 60% would be (in this totally...
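Written out in a few lines (same totally oversimplified assumptions):

```python
#!/usr/bin/env python3
"""The same back-of-the-envelope math: 8 physical cores with SMT assumed
to add roughly 25% extra throughput (a rough, workload-dependent guess)."""

physical_cores = 8
smt_benefit = 0.25   # assumed, varies with the workload

core_equivalent = physical_cores * (1 + smt_benefit)   # 8 + 8*0.25 = 10
print("core equivalent:", core_equivalent)             # 10.0

# 60% load on this box then corresponds to roughly:
print("60% ~=", 0.60 * core_equivalent, "core equivalents")  # 6.0
```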
But that topic isn't simple. It is complex, and even if you get by with a simplification at first: at the latest when people complain about performance issues and very odd symptoms, they need to understand the concepts anyway.
It is not a binary choice. In the field there is no right or wrong. And there are a lot...
IMHO no. Because that means the VM is always competing with the host, which is responsible for networking and other things. So it might just lag, hang, or provide a rubbish user experience (depending on what you are doing).
Also consider that threads are not real cores. So this might influence...
It starts with the fact that wear-leveling works differently (at least for the mixed-use SSDs). DC SSDs are built for steady-state performance, not for peak performance as is typical for consumer SSDs.
Due to the nature of ZFS the SSDs are constantly receiving writes. From my perspective...
IMHO you are bringing a knife to a gunfight.
Consumer SSDs and ZFS do not play nicely together. ZFS was built for datacenter usage; consumer SSDs are built for, well, consumers...
ZFS has huge write amplification through its journaling approach - basically every write is multiplied.
There...
https://wiki.archlinux.org/title/LVM_on_software_RAID
Otherwise you could just try mdadm.
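As a very rough outline of what the article describes (device names, array name and sizes are made up, and the commands are destructive, so treat this as a sketch only, not something to paste blindly):

```python
#!/usr/bin/env python3
"""Outline of LVM on top of an mdadm mirror (as in the Arch wiki link above).
All device names and sizes are placeholders."""

import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build a RAID1 array out of two whole disks (placeholder devices).
run("mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
    "/dev/sdb", "/dev/sdc")

# 2. Put LVM on top of the md device.
run("pvcreate", "/dev/md0")
run("vgcreate", "vg_data", "/dev/md0")
run("lvcreate", "-L", "100G", "-n", "lv_data", "vg_data")
```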
RAIDZ has a penalty. It is the same as with RAID5/6. You could try to add a special device, but I am not sure if that helps you.
SMART values of disks behind a RAID controller are always a challenge, as the physical disk is masked and replaced by a virtual one (the RAID).
You are using smartctl on your command line; PVE grabs the data with its native tools, and those don't look beyond the representation of your LUN.
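If the controller is supported by smartmontools you can often still reach the physical disks with a `-d` device type; a sketch (the megaraid type and slot numbers are just examples - other controllers need other types such as cciss or areca, check `man smartctl`):

```python
#!/usr/bin/env python3
"""Query SMART data of physical disks hiding behind a RAID controller.
The '-d megaraid,N' device type is only an example; pick the type that
matches your controller."""

import subprocess

BLOCK_DEV = "/dev/sda"   # the virtual disk / LUN the OS sees (placeholder)

for slot in range(4):    # example: probe the first four physical slots
    print(f"--- physical disk in slot {slot} ---")
    subprocess.run(["smartctl", "-d", f"megaraid,{slot}", "-a", BLOCK_DEV])
```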
Freezes are tough in my opinion.
You often don't get the real source of them in the logs.
I'd start with memory pressure and see if you can find hints on that.
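On kernels with PSI enabled, /proc/pressure/memory is a cheap first place to look; a small sketch:

```python
#!/usr/bin/env python3
"""Quick look at memory pressure via the kernel's PSI interface
(/proc/pressure/memory). Non-zero 'full' averages mean tasks were
completely stalled waiting for memory."""

from pathlib import Path

psi = Path("/proc/pressure/memory")
if not psi.exists():
    raise SystemExit("PSI not available on this kernel")

for line in psi.read_text().splitlines():
    # Lines look like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
    kind, *fields = line.split()
    stats = dict(f.split("=") for f in fields)
    print(f"{kind:5s} avg10={stats['avg10']}%  avg60={stats['avg60']}%  total={stats['total']}us")
```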
Don't do that. Virtualization is not bare metal!
Scheduling overhead has to be considered. Additionally, keep the host in mind, which always needs cycles too.
My suggestion: only assign 56 and see if it gets better.