Recent content by ubu

  1. PDM 1.0.1 Error "missing field `data` at line 1 column 157" on PBS Storage Access

    As in the title, I get the error "missing field `data` at line 1 column 157" when I try to view the content of the storage
  2. NVIDIA L4 GPU (AD104GL)

    Just done exactly that for a customer ;) Not tested yet, but nvidia-smi sees the card inside the container, so I think it will work. Additional lxc-config entries: lxc.cgroup2.devices.allow: c 195:* rwm lxc.cgroup2.devices.allow: c 234:* rwm lxc.cgroup2.devices.allow: c 509:* rwm...
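
    A minimal sketch of what such a container config can look like in /etc/pve/lxc/<CTID>.conf; the post above is truncated, and the bind-mount lines plus the 509 major number (nvidia-uvm, which can change between boots) are assumptions based on the usual passthrough pattern, not the original post:

      # /etc/pve/lxc/<CTID>.conf (sketch)
      lxc.cgroup2.devices.allow: c 195:* rwm
      lxc.cgroup2.devices.allow: c 234:* rwm
      lxc.cgroup2.devices.allow: c 509:* rwm
      lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
      lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
      lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file

    Inside the container, running nvidia-smi (with a matching driver version) is the quick check that the card is actually visible.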
  3. PVE upgrade 6.4 to 7, 8, 9...

    Unfortunately SSDs have gone up in price quite a bit, but I would suggest a simple ZFS mirror with enterprise SSDs, no need for additional boot drives. (Sorry for the confusion, I wanted to write mirror, not RaidZ1; must have been more tired than I realized.)
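
    For reference, a mirrored data pool from two SSDs can be created like this (device names are placeholders; the boot mirror itself is set up by the Proxmox installer):

      # placeholders - use the real /dev/disk/by-id/ names of your SSDs
      zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-SSD_SERIAL_1 \
        /dev/disk/by-id/ata-SSD_SERIAL_2
      zpool status tank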
  4. PVE upgrade 6.4 to 7, 8, 9...

    I think it should still work, maybe you need to use the Debian archive repository. But I think it will be easier to make backups of your virtual machines and containers, do a fresh Proxmox install and restore your backups. If you use a completely new server or at least new disks, you avoid any...
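
    A rough sketch of that backup/restore path, assuming a backup storage named backupstore and guest IDs 100 (VM) and 101 (container); names and paths are examples only:

      # on the old host
      vzdump 100 101 --storage backupstore --mode snapshot
      # on the freshly installed host, after configuring the same backup storage again
      qmrestore /mnt/pve/backupstore/dump/vzdump-qemu-100-....vma.zst 100
      pct restore 101 /mnt/pve/backupstore/dump/vzdump-lxc-101-....tar.zst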
  5. [SOLVED] Multiple DKIM

    When will we get support for multiple DKIM, i.e. DKIM per domain?
  6. I recommend between 2 solutions

    With Ceph, you need fast networking for your storage; 10 Gbit should be the absolute minimum, better 25 or 40 Gbit. Your data will be on all 3 nodes for redundancy: if one node fails you can still work, if 2 nodes fail your Ceph is no longer writeable. ZFS replication works great for 2 nodes...
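
    As an illustration of where the fast network goes, Ceph can keep OSD replication traffic on its own subnet via /etc/pve/ceph.conf (the subnets here are placeholders):

      [global]
          public_network  = 10.10.10.0/24   # client and monitor traffic
          cluster_network = 10.10.20.0/24   # OSD replication, put this on the fast link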
  7. Change zfs boot pool name

    zpool-rename.md: rpool is the original, zp_pve the new pool name
      # boot a rescue system
      zpool import rpool zp_pve -R /mnt -f
      for i in proc sys dev run; do mount -o bind /$i /mnt/$i ; done
      chroot /mnt
      rm /etc/default/grub.d/zfs.cfg
      sed -i s/quiet//g /etc/default/grub
      sed -i.bak...
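
    The preview above is cut off; on a GRUB-booted system the remaining steps would look roughly like the following. This is a sketch under assumptions (that the sed targets /etc/default/grub and that the pool names match the ones above), not the rest of the original post:

      # still inside the chroot
      sed -i.bak 's/rpool/zp_pve/g' /etc/default/grub   # point root=ZFS=... at the new pool name
      update-grub
      update-initramfs -u -k all
      exit                  # leave the chroot
      zpool export zp_pve   # then reboot into the renamed pool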
  8. Change zfs boot pool name

    You can change the pool name afterwards, but it is a bit of work
  9. Change zfs boot pool name

    You can of course do this, but unless you have a specific reason, it might not be such a good idea. When you want to build a cluster, the pools must have the same name for migration to work.
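
    The background is that /etc/pve/storage.cfg is shared across the whole cluster, so a ZFS storage entry references one pool name that has to exist on every node; roughly like this (names are just examples):

      zfspool: local-zfs
              pool rpool/data
              content images,rootdir
              sparse 1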
  10. Critical outage: simultaneous reboot of all nodes after iSCSI connection loss (Proxmox + HA + Softdog)

    forum.proxmox.com/threads/adding-a-secondary-corosync-network-for-failover-setting-the-priority-when-adding-a-2nd-corosync-network-to-upgrade-to-a-redundant-ring-network.99688/
  11. Critical outage: simultaneous reboot of all nodes after iSCSI connection loss (Proxmox + HA + Softdog)

    Have you configured multiple redundant rings in Corosync, in /etc/corosync/corosync.conf?
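
    For illustration, a redundant-ring (knet multi-link) setup looks roughly like this in /etc/corosync/corosync.conf; addresses, names and priorities are placeholders:

      nodelist {
        node {
          name: pve1
          nodeid: 1
          quorum_votes: 1
          ring0_addr: 10.0.0.11   # primary corosync network
          ring1_addr: 10.0.1.11   # second, independent network as fallback
        }
        # ...one block per node, each with ring0_addr and ring1_addr
      }
      totem {
        interface {
          linknumber: 0
          knet_link_priority: 20   # preferred link
        }
        interface {
          linknumber: 1
          knet_link_priority: 10
        }
      }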
  12. ZFS Disk Replication - "command 'zfs snapshot (...)' failed: got timeout" ONLY on weekends

    Same problem here, unfortunately not only on weekends ;) Could it be a ZFS scrub running on the weekend?
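
    A quick way to check that: zpool status shows a running or recently finished scrub, and on Debian/Proxmox the monthly scrub is normally triggered from a cron job (the path may differ between versions):

      zpool status                      # "scrub in progress" / "scrub repaired ... on <date>"
      cat /etc/cron.d/zfsutils-linux    # default: scrub on the second Sunday of the month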
  13. [TUTORIAL] ZFS: Replacing a Disk

    Tip: for the ZFS pools, use the /dev/disk/by-id/ names instead of sdX; find them with:
      ls -l /dev/disk/by-id | grep sde2
      zpool status
        NAME    STATE  READ WRITE CKSUM
        rpool   ONLINE...
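
    Applied to the actual swap, the replace step then looks roughly like this (device names and partition suffixes are placeholders for your by-id names):

      zpool replace rpool \
        /dev/disk/by-id/ata-OLD_DISK_SERIAL-part2 \
        /dev/disk/by-id/ata-NEW_DISK_SERIAL-part2
      zpool status rpool   # watch the resilver progress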
  14. Directory cannot be created - "All disks in use"

    @brunnenkresse the 14 of 30 GB are on your 80 GB disk; for Proxmox to be able to use the 500 GB disk, it has to be defined as a storage in Proxmox, as written above. As @news writes, a few details about your current setup would be very helpful.
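
    One way to do that from the shell, assuming the 500 GB disk shows up as /dev/sdb, it may be wiped, and a simple directory storage is enough (the GUI under Datacenter -> Storage or Node -> Disks -> Directory does the same):

      # create a partition and filesystem on the 500 GB disk (data on it will be lost)
      mkfs.ext4 /dev/sdb1
      mkdir -p /mnt/data500
      echo '/dev/sdb1 /mnt/data500 ext4 defaults 0 2' >> /etc/fstab
      mount /mnt/data500
      pvesm add dir data500 --path /mnt/data500 --content images,rootdir,backup,iso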