Recent content by Jeff Billimek

  1.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I modified the corosync systemd service (located at /lib/systemd/system/corosync.service) to auto-restart corosync every 12 hours by adding the following to the [Service] section: Restart=always WatchdogSec=43200 Steps followed (see the systemd sketch after this list): vim /lib/systemd/system/corosync.service <add the above to the...
  2.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Having similar issues: corosync 3 is consuming over 6 GB of memory on two of my Proxmox v6 nodes .. and this is with the `secauth: off` workaround already applied (see the corosync.conf sketch after this list).
  3.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I'm experiencing the same symptoms as described by others in this thread in my recently-upgraded pve v6 3-node cluster. Restarting corosync seems to resolve the issue. Prior to restarting corosync today, I noticed that the corosync process was running at 100% CPU.
  4.

    nvme partition to kvm guest as storage device

    I pass an NVMe device partition through to a KVM guest for Kubernetes Rook/Ceph and it works great. This is how I did it (see the qm sketch after this list): for a given NVMe partition (in this case nvme0n1p4, which is identified as /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4), I added the following to the...
  5.

    PVE5 and quorum device

    I'd like to report that I got corosync-qdevice working for my 2-node cluster (see the setup sketch after this list). Previously I was using the raspberry-pi-as-a-third-node approach, which seemed like a hacky solution. The dummy node shows up in the Proxmox cluster info as an unusable node (because it is) and it blocks me...
  6.

    Proxmox VE 5.0 beta2 released!

    How are things looking for this with the versions included in PVE 5.2?
  7.

    Proxmox VE 5.0 beta2 released!

    @dcsapak, this is the most recent thread on this topic that I could find - apologies for responding to an old one. I've been trying to figure out how to make this work on my Proxmox 5.1 system but so far have not had any luck. Like you stated, simply starting the VM with the appropriate...
  8.

    [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    I'm experiencing this issue as well. It appears that zfs 0.7.6 corrects it for at least one person. Using the pvetest repo and the associated backported zfs patches didn't seem to do the trick (see the repo sketch after this list). @fabian, do you know when/if we can expect to see zfs 0.7.6 included in pvetest?
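
Systemd sketch for item 1: a minimal, hedged version of the Restart/WatchdogSec change described there. The drop-in under /etc/systemd/system/ is my own suggestion (the post edits /lib/systemd/system/corosync.service directly), and whether the watchdog actually forces a periodic restart depends on corosync's own watchdog notifications.

```
# Sketch only, assuming a drop-in override rather than editing the shipped unit.
mkdir -p /etc/systemd/system/corosync.service.d
cat > /etc/systemd/system/corosync.service.d/watchdog.conf <<'EOF'
[Service]
# restart corosync whenever it stops or the watchdog expires
Restart=always
# 43200 seconds = 12 hours, as in the post
WatchdogSec=43200
EOF
systemctl daemon-reload
systemctl restart corosync
```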
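corosync.conf sketch for item 2: where the `secauth: off` setting lives. The cluster name and config_version below are placeholders, not the poster's values; on Proxmox the authoritative copy is /etc/pve/corosync.conf and config_version must be bumped on every edit.

```
totem {
  # placeholder values, not the poster's actual cluster
  cluster_name: mycluster
  config_version: 5
  version: 2
  # the workaround referenced above: disable totem authentication/encryption
  secauth: off
}
```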
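qm sketch for item 4: since the preview cuts off before the actual config, this is a hedged example of one common way to attach a raw partition to a VM by its stable /dev/disk/by-id path; the VM ID (100) and bus slot (scsi1) are placeholders, not taken from the post.

```
# Sketch only; VM ID and scsi slot are placeholders.
qm set 100 -scsi1 /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4
# which ends up in /etc/pve/qemu-server/100.conf roughly as:
#   scsi1: /dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3X4NB0K206387N-part4
```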
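QDevice sketch for item 5: the standard PVE setup steps, hedged; the external host's IP is a placeholder and package availability may differ by PVE version.

```
# On the external vote host (any small always-on box):
apt install corosync-qnetd

# On each cluster node:
apt install corosync-qdevice

# From one cluster node, register the external vote (placeholder IP):
pvecm qdevice setup 192.168.1.50

# Verify that the qdevice vote shows up:
pvecm status
```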
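Repo/version sketch for item 8: hedged commands for checking which ZFS build is loaded and for enabling pvetest in the PVE 5.x / Debian stretch era that post refers to.

```
# Currently loaded ZFS module and installed packages:
cat /sys/module/zfs/version
dpkg -l | grep zfs

# Enable the pvetest repository (PVE 5.x / stretch era):
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" \
  > /etc/apt/sources.list.d/pvetest.list
apt update
apt list --upgradable | grep -i zfs
```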