I modified the corosync systemd service (located at /lib/systemd/system/corosync.service) so that corosync is automatically restarted every 12 hours, by adding the following to the [Service] section:
Restart=always
WatchdogSec=43200
Steps followed:
vim /lib/systemd/system/corosync.service
<add the above to the...
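For completeness, roughly what this ends up looking like. The Type/ExecStart lines below are just what I'd expect the stock Debian unit to ship with (shown for illustration, not copied from my file); the two added lines are the only change, and systemd needs a reload afterwards:

[Service]
Type=notify
ExecStart=/usr/sbin/corosync -f $COROSYNC_OPTIONS
# added: if corosync does not ping the systemd watchdog within 43200s (12h),
# systemd kills it, and Restart=always immediately starts it again
Restart=always
WatchdogSec=43200

systemctl daemon-reload
systemctl restart corosync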
I'm experiencing the same symptoms as described by others in this thread in my recently-upgraded pve v6 3-node cluster.
Restarting corosync seems to resolve the issue. Prior to restarting corosync today, I noticed that the corosync process was running at 100% CPU.
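In case it helps anyone else hitting this, the check and workaround on my side amount to nothing more than something like:

# confirm corosync is the process pegging a core
top -b -n 1 | grep corosync
# restart it on the affected node
systemctl restart corosync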
I pass an NVMe device partition through to a KVM guest for Kubernetes Rook/Ceph and it works great. This is how I did it:
For a given NVMe partition (in this case nvme0n1p4, which is identified as /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4), I added the following to the...
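As a sketch of the kind of entry I mean (the VM ID 100 and the scsi1 slot are placeholders for illustration, not necessarily what I used):

# attach the partition to the guest via its stable by-id path
qm set 100 -scsi1 /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4

That results in a corresponding scsi1: line in the VM's config under /etc/pve/qemu-server/.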
I'd like to report that I got the corosync-qdevice thing to work for my 2-node cluster.
Previously I was using the Raspberry-Pi-as-a-third-node approach, which seemed like a hacky solution. The dummy node shows up in the Proxmox cluster info as an unusable node (because it is), and it blocks me...
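For anyone wanting to do the same, the setup boils down to something like this (the IP is a placeholder for whatever box runs the qdevice daemon):

# on the external qdevice host (any small machine or VM outside the cluster)
apt install corosync-qnetd
# on both cluster nodes
apt install corosync-qdevice
# from one cluster node, register the qdevice with the cluster
pvecm qdevice setup 192.168.1.50
# verify that quorum info now lists Qdevice votes
pvecm status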
@dcsapak, this is the most recent thread related to this that I could find - apologies for responding to an old thread.
I've been trying to figure out how to make this work on my Proxmox 5.1 system, but so far I have not had any luck. Like you stated, simply starting the VM with the appropriate...
I'm experiencing this issue as well. It appears that ZFS 0.7.6 corrects it for at least one person; using the pvetest repo and the associated backported ZFS patches didn't seem to do the trick for me.
@fabian, do you know when/if we can expect to see ZFS 0.7.6 included in pvetest?
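In the meantime, for anyone comparing notes, one way to check which ZFS version is actually loaded versus what the configured repos offer:

# version of the loaded zfs kernel module
cat /sys/module/zfs/version
# candidate versions available from the repos
apt policy zfsutils-linux zfs-initramfs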