Search results

  1. cpzengel

    ZFS over iSCSI on Synology

    I am using it to back up local ZFS. It has been working for years. I can post directions if you like.
  2. cpzengel

    Autotrim crashes System on 5.3.13-1

    NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0...
  3. cpzengel

    Autotrim crashes System on 5.3.13-1

    NAME STATE READ WRITE CKSUM rpool DEGRADED 0 0 0 mirror-0 DEGRADED 5 3 0...
  4. cpzengel

    Changes between 6.0-11 and 6.1-5 (ZFS-Disks "unavailable" --> resilvering in cycles)

    Please check this: https://github.com/zfsonlinux/zfs/issues/8552. Do you have autotrim enabled?
  5. cpzengel

    Autotrim crashes System on 5.3.13-1

    It seems to be this one: https://github.com/zfsonlinux/zfs/issues/8552
  6. cpzengel

    Autotrim crashes System on 5.3.13-1

    It also occurs on manual trimming.
  7. cpzengel

    Autotrim crashes System on 5.3.13-1

    pve-kernel-5.3.13-1-pve (5.3.13-1) with autotrim=on on RAID10 and RAIDZ: the system freezes. Fixed by importing the pool in the installer and setting trim back off. Used Samsung 860 Pro 2 TB!
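    The workaround in this post (import the pool from the installer, turn autotrim back off) could look roughly like this. A minimal sketch, assuming the pool is named rpool and you are in the Proxmox installer's debug/rescue shell; do not run this on a healthy, mounted system.

    ```shell
    # From the installer's debug shell: import the pool under an
    # alternate root so nothing on the live system is touched.
    zpool import -f -R /mnt rpool

    # Disable the pool property that triggers the freeze.
    zpool set autotrim=off rpool

    # Cleanly export the pool again, then reboot into the installed system.
    zpool export rpool
    reboot
    ```

    `zpool set autotrim=off` is a pool-level property change, so it persists across reboots without any further configuration.
    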
  8. cpzengel

    Change ZFS rpool HDDs to grow

    Hi guys, please have a look at my way to change PVE's bootable disks in a mirror: zpool set autoexpand=on rpool # enable autoexpand on rpool; cfdisk /dev/sdj # delete the 8 MB Solaris partition (it saves space in case a replacement drive is slightly smaller); cfdisk /dev/sdi # delete the 8 MB...
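    The snippet above walks through growing a mirrored rpool after swapping in larger disks. A sketch of those steps, assuming (as in the post) the mirror members are /dev/sdi and /dev/sdj; the partition numbers below are illustrative and must be verified against your own layout before deleting anything.

    ```shell
    # Let the pool grow into newly available space automatically.
    zpool set autoexpand=on rpool

    # On each disk: delete the 8 MB Solaris reserve partition and grow
    # the ZFS partition to use the freed/remaining space.
    cfdisk /dev/sdj
    cfdisk /dev/sdi

    # Re-read the partition tables without rebooting.
    partprobe

    # Expand each mirror member in place (partition 2 is an assumption).
    zpool online -e rpool /dev/sdi2
    zpool online -e rpool /dev/sdj2

    # SIZE should now report the new capacity.
    zpool list rpool
    ```

    With autoexpand=on, the pool grows as soon as both sides of the mirror expose the larger partition; without it, the extra space stays unused until the property is set.
    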
  9. cpzengel

    Qdevice

    After removing and re-adding the qdevice it's OK for the moment :)
  10. cpzengel

    Qdevice

    One node is fine, the other shows that mess. Any idea? Quorum information ------------------ Date: Wed Nov 6 23:22:41 2019 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 0x00000002 Ring ID: 1/5139916 Quorate: Yes Votequorum information...
  11. cpzengel

    Qdevice

    It's running on a non-cluster PVE 6, so I have to set up my own start script?
  12. cpzengel

    Qdevice

    So it's the autostart of the service that's failing; running it in the foreground works!
  13. cpzengel

    Qdevice

    -- The process' exit code is 'exited' and its exit status is 1. Nov 06 19:04:58 pve9 systemd[1]: corosync-qnetd.service: Failed with result 'exit-code'. -- Subject: Unit failed -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- The unit corosync-qnetd.service has...
  14. cpzengel

    Qdevice

    Nov 6 19:01:48 pve1 corosync-qdevice[58006]: Unhandled error when reading from server. Disconnecting from server
  15. cpzengel

    Qdevice

    https://forum.proxmox.com/threads/setting-up-qdevice-fails.56061/ Now starting, but still no vote.
  16. cpzengel

    Qdevice

    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote...
  17. cpzengel

    Qdevice

    I'd missed a package on the node. Is this OK so far? Quorum information ------------------ Date: Wed Nov 6 18:55:02 2019 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 0x00000001 Ring ID: 1.4e6dac Quorate: Yes Votequorum information...
  18. cpzengel

    Qdevice

    Hi, I'd like to add a QDevice to my two-machine cluster. I am failing to install it on another standalone PVE v6 because the binary names have changed. So I tried it in a Debian 9 container, but that is still Corosync v2. What's the best practice? Any documentation? Cheers, Chriz
  19. cpzengel

    Upgrade 5 to 6 mit Cluster

    That's exactly what I'm not sure about. That's why I'm asking. Those I have read.
  20. cpzengel

    Upgrade 5 to 6 mit Cluster

    Hi, we are on PVE 5.2-9 and would like to go to v6. Question: the docs say you need Corosync 3, and that PVE 5.2 would already be compatible with it. pveversion says corosync: 2.4.2-pve5. Do I have to do anything else before the upgrade, or can I just go ahead? Best regards, Chriz