Search results

  1. ZFS performance regression with Proxmox

    This is the epic question... ...and what about the value of zfs_arc_max?
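    For context, a minimal sketch of where that value usually lives on a Proxmox host; the 8 GiB figure below is only an illustration, not taken from the post:

        # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 8 GiB (illustrative value)
        options zfs zfs_arc_max=8589934592
        # apply with: update-initramfs -u, then reboot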
  2. HA-Status in error state after backup

    I don't think this is useful. Another user, same problem, latest version: Is there a possibility to adjust timeout values for HA?
  3. HA-Status in error state after backup

    No suggestions? Even a hint where to start may be useful. Today we've got almost the same situation, but with fewer VMs in error state. There is also a thread with the same error from someone else in the German forum: https://forum.proxmox.com/threads/backup-ha-error.51543 Cheers Knuuut
  4. Backup HA Error

    It looks like we have the same problem. My post here has not been answered so far either. I'll also link this in the international forum. Cheers Knuuut
  5. Backup bandwidth

    /etc/vzdump.conf bwlimit: KBPS Cheers Knuuut
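    For reference, a minimal sketch of that setting; the number is only an example, not from the post:

        # /etc/vzdump.conf -- limit backup bandwidth to 51200 KiB/s (~50 MB/s)
        bwlimit: 51200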
  6. HA-Status in error state after backup

    Hello Community, after backing up (Proxmox backup function) about 120 VMs in a 4-node cluster via NFS, a few (~20) VMs show HA in an error state. The affected VMs are still running fine and there was no trouble at all while running the backup. All VMs are managed by HA. Here are the notifications...
  7. ZFS performance regression with Proxmox

    Just keep in mind that the value of recordsize is the maximum that zfs cares about. In other words, the recordsize is dynamic to the maximum of this value. If your workload will use less, this would be ok. If you're also using compression, which is recommended in general, this may give you...
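    A small sketch of how this is typically inspected and set; the dataset name is hypothetical:

        # recordsize is only an upper bound; smaller writes produce smaller blocks
        zfs set recordsize=128K tank/vmdata
        # enabling compression is generally recommended
        zfs set compression=lz4 tank/vmdata
        zfs get recordsize,compression tank/vmdata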
  8. [SOLVED] BIG DATA ZFS (~200TB)

    I know this guy (not personally) and I don't agree with him, not on everything, but mostly. I don't want to start a new discussion on this and I also don't want to reinvent the wheel... ;-) I can only tell you what is best practice and make some recommendations, and I think you're old enough to be...
  9. [SOLVED] BIG DATA ZFS (~200TB)

    Again, this IS a bad idea. Count on ZFS! ZFS with an HBA, or RAID with anything else; make your decision. You want to archive a lot of files over a long time, so BitRot IS an issue. Do it, but do it right. Cheers Knuuut
  10. ceph rbd overprovisioning

    rbd du -p <POOLNAME> Will show you the configured and used sizes of the images. Cheers Knuuut
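    A usage sketch with a hypothetical pool and image name:

        # provisioned vs. actually used size of every image in the pool
        rbd du -p rbd
        # or for a single image
        rbd du rbd/vm-101-disk-0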
  11. [SOLVED] RaidZ10 (Raid50) cannot be expanded

    As far as I know, yes! And for good reason. The typical mistake at this point is a "zpool add" with only 1 vdev instead of, for example, a "zpool attach". Otherwise the door would be wide open to all kinds of wild constructs. BTW, raidz1 with only 2 vdevs... It's not a bug... Cheers Knuuut
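    A sketch of the difference between the two commands; pool and device names are hypothetical:

        # attach: adds a mirror member to an existing vdev (no new top-level vdev)
        zpool attach tank sda sdb
        # add: creates a NEW top-level vdev; here a whole mirror, not a single disk
        zpool add tank mirror sdc sdd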
  12. What I would like to see in Proxmox

    I'd like to see incremental and faster backups, more flexible resource monitoring, a FreeNAS ZFS storage plugin, multi-cluster management, more flexible Ceph management (partitions as OSDs, OSD/pool grouping, multi Ceph cluster, replication, etc.), better virtual disk management per VM, free beer...
  13. What open source solutions are available to use "ZFS over iSCSI with Proxmox"?

    Take a look at the docs: https://www.ixsystems.com/documentation/freenas/11.2-legacy/network.html#lacp-mpio-nfs-and-esxi 2 Proxmox hosts -> 1 FreeNAS host via NFS: ~110 MB/s each when accessing at the same time. All hosts have 1Gb NICs; the FreeNAS host has 2 of them.
  14. [SOLVED] Performance problems / Ceph

    Host OS on an SD card? Swap too? dmesg? I would rather give it a small SSD... Something we haven't covered yet: how are your SSDs attached? HBA or RAID controller? Are the cables ok? SAS expanders & drive bays ok? ... tbc
  15. What open source solutions are available to use "ZFS over iSCSI with Proxmox"?

    I'm using rr (mode=0) with nfs and iSCSI. It works. Cheers Knuuut
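    A minimal ifupdown sketch of such a balance-rr (mode 0) bond; interface names and the address are assumptions:

        auto bond0
        iface bond0 inet static
            address 192.168.10.11/24
            bond-slaves eno1 eno2
            bond-mode balance-rr
            bond-miimon 100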
  16. [SOLVED] Performance problems / Ceph

    I can confirm that from my own experience. Nodes with only a few OSDs compared to the other nodes can slow a cluster down considerably. Check which OSDs rbd3 is distributed across and compare that with the other rbds that don't have the problem. rbd info...
  17. Replace a failed SSD in a ZFS root boot pool with a larger one?

    http://de.lmgtfy.com/?q=zpool+festplatte+austauschen Cheers Knuuut
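    What that search typically leads to, as a hedged sketch; device names are placeholders, and on a root/boot pool the partition layout and bootloader also have to be set up on the new disk first:

        zpool set autoexpand=on rpool      # so the pool can grow into the larger disk
        zpool replace rpool /dev/disk/by-id/OLD-SSD /dev/disk/by-id/NEW-SSD
        zpool status rpool                 # wait for the resilver to complete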
  18. Live migration broken for VMs doing "real" work

    @sseidel Do you have CPU and RAM hotplugging enabled? If so, try disabling it and/or configure your cdrom device as scsi. We've had bad experiences with this combination and getting rid of that ide-cdrom did the trick. Cheers Knuuut
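    A hedged sketch of those two changes with qm; the VM ID is hypothetical:

        # drop memory/cpu from the hotplug list
        qm set 101 --hotplug disk,network,usb
        # remove the ide cdrom and re-add it on the scsi bus
        qm set 101 --delete ide2
        qm set 101 --scsi2 none,media=cdrom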
  19. Practical question - 3-node cluster?

    from man qm: qm migrate <vmid> <target> [OPTIONS] Migrate virtual machine. Creates a new migration task. <vmid>: <integer> (1 - N) The (unique) ID of the VM. <target>: <string> Target node. --force <boolean> Allow to migrate...
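    A hypothetical invocation; the VM ID and node name are made up:

        # live-migrate VM 101 to node pve2
        qm migrate 101 pve2 --online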
  20. qm migrate -> deletes pve-zsync zfs snapshots

    --with-local-disks explains everything. This would be a live migration including moving the disks. This will not happen on shared ZFS storage.
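    For comparison, a hypothetical invocation of the variant being discussed, which also copies the local disks to the target node:

        qm migrate 101 pve2 --online --with-local-disks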