Search results

  1. Slow backups from NFS share

    Yes, it's a slow network: for now I'm backing up this data to a remote PBS server, so the uplink is 200 Mbit/s. I know this is not a good situation, but I'm wondering why PBS should transfer the whole data every time. This data is incremental and does not change much, so an...
  2. Slow backups from NFS share

    11 days and this is expected? Really?? If there is no way to speed up the backup, this means that Proxmox Backup Server is not usable for anything other than VMs :-(
  3. Slow backups from NFS share

    Hi, I'm using Proxmox Backup Client on a Debian virtual machine on Proxmox VE to back up a fairly large NFS share (~10 TB) mounted on the VM. The backup works, but it's very slow. The first backup took a very long time (~12 days), which is expected, but even subsequent backups take very long: at the...
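
    A minimal sketch of such a file-level backup with proxmox-backup-client, assuming a hypothetical repository string, mount point and archive name:

      # run on the Debian VM that mounts the share
      export PBS_REPOSITORY='backup@pbs@pbs.example.org:store1'   # hypothetical repository
      proxmox-backup-client backup share.pxar:/mnt/nfs-share      # file-level (pxar) archive of the mounted share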
  4. [SOLVED] "unexpected error on datastore traversal" error during garbage collection

    Apparently the file system was corrupted; I solved it by running fsck.
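
    For reference, a hedged sketch of that kind of repair (device name and mount point are hypothetical; the datastore must not be in use while it is checked):

      systemctl stop proxmox-backup-proxy proxmox-backup    # stop PBS before touching the datastore
      umount /mnt/datastore                                 # hypothetical mount point
      fsck -f /dev/sdb1                                     # hypothetical device; add -y to auto-repair
      mount /mnt/datastore && systemctl start proxmox-backup proxmox-backup-proxy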
  5. [SOLVED] "unexpected error on datastore traversal" error during garbage collection

    Hi, I'm running Proxmox Backup Server 1.1-2, with storage on iSCSI. Backups are running without problems, but I cannot prune any old backup because I get this error: 2021-04-28T07:33:53+02:00: starting garbage collection on store mydatastore 2021-04-28T07:33:53+02:00: Start GC phase1 (mark...
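
    Once the underlying file system is healthy again, garbage collection can be re-run from the CLI; a sketch using the datastore name from the quoted log:

      proxmox-backup-manager garbage-collection start mydatastore    # kick off GC manually
      proxmox-backup-manager garbage-collection status mydatastore   # check progress / last result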
  6. ZFS big overhead

    So do you expect more than 50% overhead (the volume holds about 15 TB and it's taking 25.5 TB of actual disk space)? That's 66% overhead! Even though the ZFS pool is thin provisioned in PVE? How can I get out of this bad situation? Thank you very much!
  7. ZFS big overhead

    Hi, I have a ZFS rpool with 4 x 10 TB drives on raidz1-0: root@pve:~# zpool status -v pool: rpool state: ONLINE scan: scrub repaired 0B in 1 days 04:24:17 with 0 errors on Mon Mar 15 04:48:19 2021 config: NAME STATE READ WRITE CKSUM rpool...
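
    One way to see where the space is going is to compare logical and allocated sizes per dataset; a sketch (the zvol name in the last command is hypothetical):

      zfs list -o name,used,refer,logicalused,volblocksize -r rpool   # per-dataset space breakdown
      zfs get volblocksize rpool/data/vm-100-disk-0                   # raidz parity/padding overhead is worse with small volblocksize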
  8. HTTP API Error 596

    Hi, I implemented the check_pve script from https://github.com/nbuchwitz/check_pve to have Proxmox monitored from my Icinga2 instance. Some checks run fine, while others receive HTTP error code 596: root@node01:~/check_pve-1.1.3# ./check_pve.py -e node1 -u icinga2@pve -p password -k -m...
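
    A hedged way to narrow down whether the node itself answers or only the proxied request fails (node name and user are simply the ones from the quoted command; the password is left out):

      pvesh get /nodes/node1/status                  # run on a cluster node: query the node status via the local API
      curl -k -d 'username=icinga2@pve' -d 'password=...' \
        https://node1:8006/api2/json/access/ticket   # from the monitoring host: test authentication over HTTPS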
  9. Backup to Proxmox Backup Server for one VM: dns error: failed to lookup address information: Name or service not known

    Yes, I was thinking about this workaround, but I want to understand why this happens. Is the VM freezing during the backup? Is that expected? I'm using snapshot mode, so I expect it not to freeze at all...
  10. Backup to Proxmox Backup Server for one VM: dns error: failed to lookup address information: Name or service not known

    Yes! So I'm answering myself: is that VM frozen at the beginning of the backup to PBS?
  11. Backup to Proxmox Backup Server for one VM: dns error: failed to lookup address information: Name or service not known

    Yes, it has been going on for 3 days, since I activated backups to Proxmox Backup Server, and only for that VM. I did not try to restart that VM because unfortunately it's the perimeter firewall and I can restart it only if really necessary. All other VMs do not have this problem and...
  12. Backup to Proxmox Backup Server for one VM: dns error: failed to lookup address information: Name or service not known

    Hi, I installed a Proxmox Backup Server and configured a new backup storage on Proxmox VE 6.3-3. Backups for all virtual machines work fine, except for one which returns the following error: INFO: include disk 'virtio0' 'local-zfs:vm-101-disk-0' 32G INFO: backup mode: snapshot INFO...
  13. High I/O delay during backups on ZFS

    Thanks for your reply! I know that I can trim my ZFS pool with zpool trim <poolname>, but what about trimming my VMs? What do you mean exactly? Thank you very much!
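
    Presumably "trimming the VMs" means discarding unused blocks inside the guests so ZFS can reclaim the space; a sketch, assuming the virtual disks have the discard option enabled:

      zpool trim rpool    # on the host: trim the pool itself
      fstrim -av          # inside each Linux guest: release unused blocks back to the zvol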
  14. High I/O delay during backups on ZFS

    Hi, I am encountering some problems making backups on many servers running ZFS. The PVE cluster is made of 4 nodes; every node has a ZFS mirror pool on 2 NVMe SSD drives, and a full backup is made once per week at 00:30, on a different day for each node. Every time a backup is started I see...
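
    If the backup job is what saturates the NVMe mirror, one commonly suggested mitigation is to cap vzdump's bandwidth and lower its I/O priority; a sketch of /etc/vzdump.conf settings (the values are arbitrary examples):

      # /etc/vzdump.conf
      bwlimit: 200000   # KiB/s read limit for backups
      ionice: 7         # lowest best-effort I/O priority for the vzdump workers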
  15. Ceph over multipath device or something else

    I was not aware of this. I will try and check, thank you!
  16. Ceph over multipath device or something else

    Because it's not shared storage from Proxmox's point of view; it's a local dir storage mounted on OCFS2. Proxmox sees that storage as a directory storage.
  17. Ceph over multipath device or something else

    So, summing up: your advice is to use LVM (not Thin!) on the multipath mapper volume on every node, configuring it as "shared". This way I would have HA but not snapshots. Right?
  18. Ceph over multipath device or something else

    OCFS2 is a shared file system, so basically every node sees the same storage in the same way, and they can share the same files without conflicting with each other. Yes, I know how Ceph works, but I was thinking that it would work on multipath block devices. It would work if I had local...
  19. Ceph over multipath device or something else

    Ok, but right now, using OCFS2 as a local directory storage, I can have snapshots if I use qcow2 images, but I cannot have HA. So I have to choose between snapshots and HA, right? I cannot use CIFS or NFS because that FC storage is directly attached to the Proxmox nodes. Ok, Ceph is not a solution, I...
  20. Ceph over multipath device or something else

    Ok, thanks! Yes, /dev/mapper/STORAGE-DATA is the block device from the multipath mapper, so I would be able to create an LVM physical volume on it. Just one question: should I create an LVM or LVM-Thin volume? Because LVM will not give me snapshots, but LVM-Thin will. So I don't need OCFS2, fine!
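
    A sketch of the plain-LVM route on the multipath device mentioned above (the volume group and storage ID are made-up names):

      pvcreate /dev/mapper/STORAGE-DATA           # create the PV on the multipath mapper device (run on one node)
      vgcreate vg_san /dev/mapper/STORAGE-DATA    # volume group living on the SAN LUN
      pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images   # register as shared LVM storage in PVE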