Right, so I disabled the iothread option on the individual disks, yet the controller type is still set to VirtIO SCSI single, and the Windows guest still shows 3 controllers connected to the 3 disks. The backup completed successfully. Does this config mean I'm still getting one IO thread for each disk...
This is the essential CPU stall console message; as it passed relatively quickly, it didn't freeze up the kernel:
[301703.260023] INFO: rcu_sched detected stalls on CPUs/tasks: { 1} (detected by 2, t=8978 jiffies, g=2886633, c=2886632, q=265)
[301711.937277] [<ffffffff810d259a>] ...
So I need to select VirtIO SCSI single in Options, and also enable IO Thread for every disk in Hardware to ensure iothread is working?
Is there any other limitation (cache type, etc.) for iothread? Right now my VM is on zfs-local, with cache=writeback.
Is there any reliable way to verify that...
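For reference, this is roughly what I assume the end result should look like (VM ID 100 and the local-zfs storage name are just examples), plus one way I could think of to check whether QEMU actually starts with IO threads:

# assumed example only: VM 100, disks on local ZFS
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-1,cache=writeback,iothread=1

# inspect the generated QEMU command line for "-object iothread" entries
qm showcmd 100 | tr ' ' '\n' | grep iothread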
Since we upgraded our cluster to PVE 4.3 from 3.4, all our OpenVZ containers have been converted to KVM virtual machines. In many of these guests we get frequent console alerts about CPU stalls, usually when the cluster node is under high IO load (for example when backing up or restoring VMs to...
As both issues are resolved now, when can we expect a new Ceph version in PVE? The pve-no-subscription repo for PVE 4.3 still contains Ceph 0.80.7, which is exactly 2 years old...
And what about GlusterFS? 3.5.2 is included, which has been EOL for 4 months now, while 3.8 LTM has been out since June...
Sounds safe enough to me.
So are you going to provide a patch to Proxmox that gets into the test repository, or how is this going to work?
I will be more than happy to test.
Yes, PVE 3.x shows 12 rows where 4.x shows only 10.
Doesn't sound like a lot, but when you have to manage a lot of guests, it quickly adds up and you have to scroll a lot more. So I would very much like an option to set font (and list item) size in the new GUI.
I think there is a more important issue here than the new navigation. The navigation looks a bit unconventional and takes some getting used to, but I personally think it is actually better from a usability standpoint (your eyes need to scan a much smaller area to find the next click). What I personally...
This has been talked to death earlier in this thread; you might want to read first and comment later. Ceph and Gluster are very useful, but only for backups or very low IO guests (at least on our bonded gigabit infrastructure).
Also, this has nothing to do with the fact that block migration code...
Why would it start if a node is down? No one said it would be automatic, or in any way connected to HA. It would only be faster than it currently is, with much less downtime.
In Proxmox 4.x a KVM guest can only be migrated fully offline from local storage, which is unacceptable and lags behind...
Actually, LXD + LXC 2.0 has some solutions for this problem; the question now is whether they can be implemented in Proxmox. More in this thread:
https://forum.proxmox.com/threads/revisiting-cpu-limit-for-lxc-2-0-lxd.28736/
I have also posted the relevant link as a comment in the Bugzilla report.
Hyper-V also supports it:
"You can also perform a live migration of a virtual machine between two non-clustered servers running Hyper-V when you are only using local storage for the virtual machine. (This is sometimes referred to as a “shared nothing” live migration. In this case, the virtual...
https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
Sysbench is a good choice; please share your benchmark numbers when you have done the testing!
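Something along these lines should do (old sysbench syntax; the file size is just an example and should exceed the guest's RAM):

# CPU test
sysbench --test=cpu --cpu-max-prime=20000 run
# File IO test: prepare test files, run random read/write, then clean up
sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=120 run
sysbench --test=fileio --file-total-size=8G cleanup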
After updating a two-node cluster to 4.3, I rebooted the nodes one by one (not at the same time). After the reboot none of the VMs were running; trying to start them on any node gave a cluster error:
root@proxmox2:~# qm start 111
cluster not ready - no quorum?
Checking the cluster showed...
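For anyone hitting the same message, a rough sketch of how the quorum state can be inspected, plus the usual two-node workaround (use with care, and only while the other node really is down):

# inspect corosync/quorum state on each node
pvecm status
pvecm nodes
# on a two-node cluster, temporarily lower the expected vote count to regain quorum
pvecm expected 1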