Search results

  1.

    after daily backup fails Node is not usable

    Sorry for the late response (testing is difficult, as already stated). After rebuilding the object map it seems to be working again, thanks! I really wonder why these kinds of errors are not shown in the Ceph monitor or somewhere else where they are actually visible.
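
    For context, rebuilding an invalid RBD object map is done with the rbd CLI. A minimal sketch, assuming the pool and image names (taken from elsewhere in this thread) are placeholders for your own:

        # Check whether the image's object map is flagged invalid
        # (look for "flags: object map invalid" in the output).
        rbd info ceph/vm-148-disk-0
        # Rebuild the object map for that image.
        rbd object-map rebuild ceph/vm-148-disk-0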
  2.

    after daily backup fails Node is not usable

    :~# cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

    rbd: ceph
        content rootdir,images
        krbd 1
        pool ceph

    cifs: backup_server
        path /mnt/pve/backup_server
        server 10.24.123.123
        share Proxmox...
  3.

    vzdump locking up to NFS share on 6.3-3

    Just want to add that the same symptom occurs with CIFS too: https://forum.proxmox.com/threads/after-daily-backup-fails-node-is-not-usable.77679/ I also found similar-sounding threads here in the forum, all starting with 6.2 up until today - most of them have zero replies, so no solution yet. That...
  4.

    after daily backup fails Node is not usable

    zstd was a red herring; the same thing happens at every compression level, even on the current latest Proxmox 6.3.x.
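
    For anyone trying to reproduce this, the compressor can be switched per run with vzdump's --compress flag, which makes it easy to rule the compression level in or out. A sketch; the VM ID and storage name are examples from this thread, not prescriptions:

        # Back up the same container with different compressors and
        # compare whether the hang still occurs.
        vzdump 148 --compress zstd --storage backup_server
        vzdump 148 --compress gzip --storage backup_server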
  5.

    after daily backup fails Node is not usable

    After switching to a different backup server with a completely different config/hw... and forcing SMB3 only, it still happens. Backups for several LXC/VM containers had already finished before it hung:
    INFO: Starting Backup of VM 148 (lxc)
    INFO: Backup started at 2020-11-04 01:19:54
    INFO: status...
  6.

    after daily backup fails Node is not usable

    Backup failed again today; the last entries in dmesg (10.24.12.34 is our Synology backup NAS):
    [Oct20 01:16] CIFS VFS: \\10.24.12.34 Cancelling wait for mid 3907741 cmd: 5
    [ +0.000008] CIFS VFS: \\10.24.12.34 Cancelling wait for mid 3907742 cmd: 16
    [ +2.378669] CIFS VFS: \\10.24.12.34 Cancelling wait...
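
    A quick way to watch for these messages live while a backup runs; a sketch, where the grep pattern simply matches the kernel messages quoted above:

        # Follow the kernel log and show only CIFS client messages.
        dmesg --follow | grep 'CIFS VFS'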
  7.

    after daily backup fails Node is not usable

    Hi, we have a 4-node cluster with Ceph configured and running for years. Recently, starting around 6.2-6, we got problems with our SMB backups regularly failing (ZSTD/snapshot). Same situation with 6.2-12. It is likely due to a network failure, but we have no idea why this happens - nothing obvious was...
  8.

    [SOLVED] No space left on device

    That problem affects every Proxmox installation that uses smb/samba on the Proxmox host. Open bugs for it: https://bugzilla.proxmox.com/show_bug.cgi?id=2333 https://bugzilla.samba.org/show_bug.cgi?id=12435 A one-time cleanup is not a solution; you need to clean up frequently. We already had the...
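
    Since the post recommends recurring cleanup rather than a one-off, a minimal cron sketch. The directory is an assumption (similar reports point at Samba's messaging files under /run/samba/msg.lock); verify the actual path from the linked bug reports before using anything like this:

        #!/bin/sh
        # Hypothetical /etc/cron.daily/samba-msg-cleanup:
        # delete stale Samba messaging files older than one day.
        # NOTE: the path below is an assumption, not confirmed by the thread.
        find /run/samba/msg.lock -type f -mtime +1 -delete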
  9.

    [SOLVED] LXC move disk to another container at Ceph

    Thanks for the pointer. Just for the record, this worked for me:
    rbd -p my_pool_name mv vm-100-disk-3 vm-200-disk-3
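
    Putting the whole move together, a sketch of the steps around that rename; the rbd command is the one from this post, while the mount-point line and mount path are illustrative examples, not taken from the thread:

        # Stop both containers so the image is not in use.
        pct stop 100
        pct stop 200
        # Rename the RBD image to match the target container's ID.
        rbd -p my_pool_name mv vm-100-disk-3 vm-200-disk-3
        # Move the mount-point entry between the container configs, e.g.
        # remove from /etc/pve/lxc/100.conf:
        #   mp0: ceph:vm-100-disk-3,mp=/mnt/data,size=8G
        # and add to /etc/pve/lxc/200.conf (with the new image name):
        #   mp0: ceph:vm-200-disk-3,mp=/mnt/data,size=8G
        pct start 100
        pct start 200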
  10.

    [SOLVED] LXC move disk to another container at Ceph

    Never done such things at the Ceph level before, so forgive the stupid questions :) Is this the way to go? ceph fs mv old.disk new.disk
  11.

    [SOLVED] LXC move disk to another container at Ceph

    Hello, I have an LXC container that includes several disks (mount points) from different Ceph pools. Now we want to move a disk to another LXC container (the disk contains just files from an SMB share). So basically LXC (100) vm-100-disk-3 -> LXC (200) vm-200-disk-3. Backup/restore is not really a...