Search results

  1.

    [SOLVED] Cannot read remotely mounted backup datastore

    Oh, I see. Can we put a note in the documentation somewhere to that effect, please? It is essentially on the same bare-metal machine on which the LXC with PBS is running. It's still a test at this stage, though I hear you in terms of traffic. Thanks for the response.
  2.

    [SOLVED] Cannot read remotely mounted backup datastore

    I have installed PBS 2.0 on an LXC to test and play around with. All is good, except the following: to have some decent storage to write backups to, I have mounted a location with SFTP as follows: sshfs backup@192.168.121.34:/mnt/pve/cephfs /mnt/cephfs (I have 'hijacked' the backup account...
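    A minimal sketch of the mount described in that snippet, assuming the same account, host and paths as the post; the allow_other and reconnect options are additions often needed so a service running as another user (such as PBS) can read the mount, not something the poster confirmed:

      # mount the remote CephFS path over SFTP; allow_other lets processes other
      # than the mounting user read the mount, reconnect survives dropped sessions
      sshfs -o allow_other,reconnect \
          backup@192.168.121.34:/mnt/pve/cephfs /mnt/cephfs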
  3.

    Support External RBD And CephFS with Erasure Coded Data Pool

    It's been a while since this has been revisited. Is the Proxmox team still adamant that Erasure Coded pools will not be supported? It's becoming a game changer out there, so why would a 2018 decision still hold? Ceph and Erasure Coding have significant benefits for hosting: More efficient use...
  4.

    Ceph Erasure Coding support

    Thanks for the response. We are planning major expansions and of course EC would be a better use of storage resources, since there's only about 40% overhead for full redundancy, while replication has at least a 200% overhead. It would really be good if this could be given some priority.
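    As a quick check on the percentages quoted above (illustrative values, not taken from the post): an erasure-coded pool with k data chunks and m coding chunks stores (k+m)/k times the raw data, so the space overhead is m/k.

      # k=5, m=2  ->  overhead = 2/5 = 40%, pool still tolerates 2 lost OSDs
      # 3-way replication -> 2 extra full copies = 200% overhead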
  5.

    Ceph rbd pool or Erasure coding Pool

    Is this still the case? EC seems to be a better choice and is used widely.
  6.

    Ceph Erasure Coding support

    It's been a while and I see no one responded to this. I would also like to use Ceph EC. Is there any update on this? Can I set up Ceph manually and just tell Proxmox to use it?
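    For the manual route asked about above, a rough sketch of creating an erasure-coded data pool on the Ceph side; the profile name, k/m values, pool names and image name are illustrative assumptions, not taken from the thread:

      # create an EC profile and an EC pool to hold the data objects
      ceph osd erasure-code-profile set ec-profile k=4 m=2
      ceph osd pool create ecpool 32 32 erasure ec-profile
      ceph osd pool set ecpool allow_ec_overwrites true
      # RBD metadata still lives in a replicated pool; images place their
      # data objects in the EC pool via --data-pool
      ceph osd pool create rbdpool 32 32 replicated
      rbd create --size 10G --data-pool ecpool rbdpool/vm-disk-test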
  7.

    Error getting osd journal information

    I will try the Ceph community for this; it's probably more relevant there...
  8.

    Error getting osd journal information

    Proxmox 6.4. When I issue this command I get an error and no result:
      # ceph-osd --get-journal-fsid --osd-journal=/dev/sde -i 23
      2021-09-29 22:14:58.319 7f59ee497c80 -1 asok(0x55eb542d8000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to...
  9.

    Backup of some LXC's failing (Permission Denied)

    The problem is described here: https://unix.stackexchange.com/questions/563942/debian-change-owner-of-nobodynogroup The files listed as not accessible all belong to nobody:nogroup. Fix that and the backup will run fine. Or back up from the command line with vzdump and use --exclude-path to...
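    A minimal sketch of that command-line workaround; the container ID, storage name and excluded path are placeholders, not values from the thread:

      # back up the container but skip the directory whose files
      # are owned by nobody:nogroup
      vzdump 148 --storage cephfs --mode snapshot --compress zstd \
          --exclude-path /var/spool/problematic-dir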
  10.

    Backup of some LXC's failing (Permission Denied)

    More particularly, this seems to have to do with the LXC file system. I have:
      # ls -la /var/spool/
      total 20
      drwxr-xr-x  5 root root 4096 Jan 24  2019 .
      drwxr-xr-x 11 root root 4096 Jan 24  2019 ..
      drwxr-xr-x  3 root root 4096 Jan 24  2019 cron
      lrwxrwxrwx  1 root root    7...
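    A possible fix along the lines of the linked answer is to hand the orphaned files back to a valid owner from inside the container; the root:root target below is an assumption, so check what the files should actually belong to first:

      # inside the container: find everything owned by nobody/nogroup on this
      # filesystem and reassign it to root
      find / -xdev \( -user nobody -o -group nogroup \) -exec chown root:root {} +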
  11.

    Backup of some LXC's failing (Permission Denied)

    Indeed, how can it be that this happens to multiple people and no one knows why? Surely someone at Proxmox knows? It happens with a particular container on my side as well.
  12.

    SATA vs SAS for ceph

    Thanks for this. We are using the spindle drives for backups and important bulk storage more than for transactional processes. We also use NVMe SSDs for the high performance pool.
  13.

    SATA vs SAS for ceph

    Hi all. In your practical experience, when choosing new hardware for a cluster, is there any noticeable difference between using SATA and SAS drives? I know SAS drives can have a 12Gb/s interface and I think SATA can only do 6Gb/s, but in my experience the drives themselves can't write at 12Gb/s...
  14.

    vzdump trouble again

    No, the problem is still there. It's the only container I can't back up, regardless of whether it's running or not.
  15.

    vzdump trouble again

    I have one LXC that I cannot back up. Here's the log:
      # cat /mnt/pve/cephfs/dump/vzdump-lxc-148-2021_07_25-12_32_35.log
      2021-07-25 12:32:35 INFO: Starting Backup of VM 148 (lxc)
      2021-07-25 12:32:35 INFO: status = stopped
      2021-07-25 12:32:35 INFO: backup mode: stop
      2021-07-25 12:32:35 INFO...
  16.

    [SOLVED] LXC backup has started failing, was working before

    Finally! Mystery solved! The LXC was running as privileged (by mistake), and mounting an NFS mountpoint from it causes the backup to stall without any error message. I converted the container to run unprivileged and now the backup runs. I figured this out by comparing, step by step, a container...
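    The conversion mentioned there is typically done by restoring a backup of the container as unprivileged; the container ID, backup file and storage name below are placeholders, not values from the thread:

      # no 'unprivileged: 1' line in the output means the container is privileged
      pct config 105 | grep unprivileged
      # restore an existing backup of the container as unprivileged
      pct restore 105 /mnt/pve/cephfs/dump/vzdump-lxc-105-backup.tar.zst \
          --storage local-lvm --unprivileged 1 --force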
  17.

    [SOLVED] LXC backup has started failing, was working before

    Update: When I unmount the NFS mount at /mnt/backup, the backup completes properly. I have another machine with the exact same mount that backs up just fine. I'm doing some more tests.
  18.

    [SOLVED] LXC backup has started failing, was working before

    When machine 105 is off, I can make a backup successfully. After all, it's basically a copy of the machine's storage volume. However, when the machine is running, this is what happens:
      INFO: starting new backup job: vzdump 105 --remove 0 --compress zstd --storage cephfs --mode snapshot --node...
  19.

    [SOLVED] LXC backup has started failing, was working before

    INFO: starting new backup job: vzdump 133 --node FT1-NodeD --storage cephfs --mode snapshot --remove 0 --compress zstd
      INFO: filesystem type on dumpdir is 'ceph' -using /var/tmp/vzdumptmp1546713_133 for temporary files
      INFO: Starting Backup of VM 133 (lxc)
      INFO: Backup started at 2021-05-07...
  20.

    [SOLVED] LXC backup has started failing, was working before

    Some more digging done... In /mnt/pve/cephfs/dump I find the backup:
      -rw-r--r-- 1 root root 3.4G May 7 18:38 /mnt/pve/cephfs/dump/vzdump-lxc-133-2021_05_07-18_22_24.tar.dat
      It's actually being written, but there is no progress indicator or any other indication like there normally is when a backup...