Same issue in 6.0!
root@pve2:~# pveversion
pve-manager/6.0-5/f8a710d7 (running kernel: 5.0.15-1-pve)
There were some messages in dmesg on Friday:
[794123.204085] INFO: task kworker/u82:3:21616 blocked for more than 120 seconds.
[794123.204122] Tainted: P O 5.0.15-1-pve #1...
I don't think more replicas means less speed. Read requests are spread across all replicas, so more replicas means higher read speed. You do get more network load on writes, though.
So it's more a tradeoff between durability and price.
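A rough back-of-envelope sketch of that tradeoff (the figures below are made-up assumptions, not benchmarks):

```shell
# Hypothetical numbers: 3 replicas, each disk doing ~150 MB/s sequential reads.
replicas=3
disk_mb_s=150
# Reads are spread over replicas, so aggregate read bandwidth scales up:
echo "aggregate read  ~ $((replicas * disk_mb_s)) MB/s"
# Every write goes to all replicas, so network write traffic scales up too:
echo "network traffic ~ ${replicas}x per MB written"
```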
AFAIK, this message is harmless unless you're doing guest kernel profiling. MSR 0x345 is IA32_PERF_CAPABILITIES.
See https://bugs.launchpad.net/qemu/+bug/1208540 for a workaround.
I have many of these:
dmesg | grep rdmsr | wc -l
45
but all works fine.
You can safely use cpu: host if your HA...
cfq with prioritizing may be better than without it, but for most high-bandwidth workloads both are worse than deadline. I added elevator=deadline to my PVE's kernel cmdline instead of playing with cfq priorities.
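For reference, a minimal sketch of that cmdline change on a stock Debian/GRUB setup (shown here on a scratch copy of the file; on a real host you'd edit /etc/default/grub itself):

```shell
# Append elevator=deadline to GRUB's default kernel cmdline.
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$cfg"   # stand-in for /etc/default/grub
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 elevator=deadline"/' "$cfg"
cat "$cfg"
# On the real file, follow up with: update-grub && reboot
# Runtime (non-persistent) alternative for a single disk:
#   echo deadline > /sys/block/sda/queue/scheduler
```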
With sshfs there may be some concurrency issues if you access the same data from different places at the same time. It just wasn't designed for that. But for read-only data it should be OK.
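If you go the sshfs route for read-only data, the mount can be pinned in fstab; a sketch where the user, host and paths are all made-up examples:

```shell
# /etc/fstab fragment -- user, host and paths are examples, not from this thread.
# Mounts the remote export read-only via sshfs, reconnecting if the link drops:
user@storage1:/export/data  /mnt/data  fuse.sshfs  ro,reconnect,_netdev  0  0

# Equivalent one-off mount: sshfs -o ro,reconnect user@storage1:/export/data /mnt/data
```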
I don't know about partition layout.
As for the second question, it may depend on your workload, but in most cases the answer is "no". OK, it MAY be better to use xfs instead of ext4 for a huge filesystem with many small files, but only if they're in OpenVZ containers. But then you may have some...
What exactly are you expecting to be updated?
There are no global options concerning RAM. The host doesn't care how much RAM it has; it just uses all of it.
Guest configs will not be updated automatically, so if you want guests to use more RAM after a host upgrade, you have to update their...
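For a KVM guest on PVE that means bumping the memory line in the guest's config file; a sketch where the VMID (100) and value (8192 MB) are example numbers:

```shell
# /etc/pve/qemu-server/100.conf fragment -- 100 and 8192 are example values:
memory: 8192

# The same change via the CLI: qm set 100 --memory 8192
```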
2) Host network is used.
3) and 4) are pretty old technologies, so I don't see any reason for them to be unstable.
BTW, I/OAT highly depends on application socket buffer sizes. You can try using it for smaller buffers by tuning /proc/sys/net/ipv4/tcp_dma_copybreak (sysctl net.ipv4.tcp_dma_copybreak)...
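A sketch of that tuning (the 2048-byte threshold is an example value, and this knob only exists on older kernels built with I/OAT DMA support):

```shell
# Show the current threshold (bytes); copies for buffers >= this size
# are handed to the DMA engine:
sysctl net.ipv4.tcp_dma_copybreak
# Lower it so smaller buffers get offloaded too (example value):
sysctl -w net.ipv4.tcp_dma_copybreak=2048
# To persist, add to /etc/sysctl.conf:
#   net.ipv4.tcp_dma_copybreak = 2048
```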
The first two will work with PVE.
The M5110 is the best of them, but I'd also recommend a battery or flash backup unit for it. Without backup it's not much better than the M1115 on many workloads.
The last one isn't really a RAID controller, so don't expect much stability or performance from it. But in non-RAID mode it will also work.
Using VFs for PCIe passthrough, you will lose live migration...
I don't see any reason to use VFs for host communication. You can do it all in software.
But I'd like to know how bonding would work on top of SR-IOV interfaces. ;)
And yes, Intel's X520 is the best 10G SFP+ adapter I know.
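For comparison, plain software bonding over the two X520 ports would look roughly like this in /etc/network/interfaces (interface names and addresses are made-up examples; I haven't tried this on top of VFs):

```shell
# /etc/network/interfaces sketch -- names and IPs are examples:
auto bond0
iface bond0 inet manual
    bond-slaves eth2 eth3
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```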