Search results

  1. [SOLVED] lxc uid mapping woes

    Thanks, it's sorted now.
  2. [SOLVED] Cannot read remotely mounted backup datastore

    I have resolved the problem in a really simple way. Put another way: I was being really stupid. :oops: The lxc is already using ceph as storage, so if I simply increase the size of the lxc disk resource and use a directory as a datastore, I can provision enough storage for all my backups...
  3. [SOLVED] lxc uid mapping woes

    I'm struggling to understand from the example here what the complete logic is behind the uid/gid mapping. I have a container (100). I want to map uid 1034 (which I created for this purpose) to uid 1034 inside the lxc. So, following the example in the above documentation, I added to... (see the idmap sketch after this list)
  4. [SOLVED] Cannot read remotely mounted backup datastore

    I have changed my approach somewhat now by mounting an actual directory of the host OS, which works fine, except that I now have to map the users properly, which I'll do next.
  5. [SOLVED] Cannot read remotely mounted backup datastore

    On this point, let me ask this related question then. I have a proxmox cluster of older machines with about 20 spinning disks on 7 nodes which we use for development work & testing. Nothing to write home about, but due to the use of ceph as virtualised storage a dependable and stable...
  6. [SOLVED] Cannot read remotely mounted backup datastore

    Oh, I see. Can we put a note in the documentation somewhere to that effect please? It is essentially on the same metal machine on which the lxc with pbs is running. It's still a test at this stage, though I hear you in terms of traffic. Thanks for the response.
  7. [SOLVED] Cannot read remotely mounted backup datastore

    I have installed PBS 2.0 on an LXC to test and play around with. All is good, except the following: To have some decent storage to write backups to I have mounted a location with sftp as follows: sshfs backup@192.168.121.34:/mnt/pve/cephfs /mnt/cephfs (I have 'hijacked' the backup account...
  8. Support External RBD And CephFS with Erasure Coded Data Pool

    It's been a while since this has been revisited. Is the proxmox team still adamant that Erasure Coded pools will not be supported? It's becoming a game changer out there, so why would a 2018 decision still hold? Ceph and Erasure Coding have significant benefits for hosting: More efficient use...
  9. Ceph Erasure Coding support

    Thanks for the response. We are planning major expansions and of course EC would be a better use of storage resources, since there's only about 40% overhead for full redundancy, while replication has at least a 200% overhead. It would really be good if this could be given some priority.
  10. Ceph rbd pool or Erasure coding Pool

    Is this still the case? EC seems to be a better choice and is used widely.
  11. Ceph Erasure Coding support

    It's been a while and I see no-one responded to this. I would also like to use ceph EC. Is there any update on this? Can I set up ceph manually and just tell Proxmox to use it? (A rough sketch of the manual setup follows after this list.)
  12. Error getting osd journal information

    I will try the ceph community for this, it's probably more relevant there...
  13. Error getting osd journal information

    Proxmox 6.4. When I issue this command I get an error and no result:
    # ceph-osd --get-journal-fsid --osd-journal=/dev/sde -i 23
    2021-09-29 22:14:58.319 7f59ee497c80 -1 asok(0x55eb542d8000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to...
  14. Backup of some LXC's failing (Permission Denied)

    The problem is described here: https://unix.stackexchange.com/questions/563942/debian-change-owner-of-nobodynogroup The files listed as not accessible all belong to nobody:nogroup. Fix that and the backup will run fine. Or back up from the command line with vzdump and use --exclude-path to... (see the vzdump sketch after this list)
  15. Backup of some LXC's failing (Permission Denied)

    More particularly, this seems to have to do with the LXC file system. I have:
    # ls -la /var/spool/
    total 20
    drwxr-xr-x 5 root root 4096 Jan 24 2019 .
    drwxr-xr-x 11 root root 4096 Jan 24 2019 ..
    drwxr-xr-x 3 root root 4096 Jan 24 2019 cron
    lrwxrwxrwx 1 root root 7...
  16. Backup of some LXC's failing (Permission Denied)

    Indeed, how can it be that this happens to multiple people and no-one knows why? Surely someone at Proxmox knows? It happens with a particular container on my side as well.
  17. SATA vs SAS for ceph

    Thanks for this. We are using the spindle drives for backups and important bulk storage more than for transactional processes. We also use NVMe SSDs for the high performance pool.
  18. SATA vs SAS for ceph

    Hi all, In your practical experience, when I choose new hardware for a cluster, is there any noticeable difference between using SATA or SAS drives? I know SAS drives can have a 12Gb/s interface and I think SATA can only do 6Gb/s, but in my experience the drives themselves can't write at 12Gb/s...
  19. vzdump trouble again

    No, the problem is still there. It's the only container I can't back up, regardless of whether it's running or not.
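
For the uid/gid mapping question in result 3, here is a minimal sketch of what the mapping could look like for passing host uid/gid 1034 straight through to the same id inside container 100. The container id and the 1034 value come from the post; the file paths and the surrounding ranges are assumptions based on the usual unprivileged-container defaults:

    # /etc/pve/lxc/100.conf -- map ids 0-1033 and 1035-65535 to the normal high range,
    # and pass id 1034 through unchanged (both uid and gid)
    lxc.idmap: u 0 100000 1034
    lxc.idmap: g 0 100000 1034
    lxc.idmap: u 1034 1034 1
    lxc.idmap: g 1034 1034 1
    lxc.idmap: u 1035 101035 64501
    lxc.idmap: g 1035 101035 64501

    # /etc/subuid and /etc/subgid -- root on the host must also be allowed to map id 1034
    root:1034:1

The three ranges have to cover 0-65535 without gaps or overlaps, which is where the counts 1034 and 64501 come from.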
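
For the erasure-coding threads (results 8-11), a rough sketch of how an EC pool could be created by hand on the ceph side and then used for RBD. The k=5, m=2 profile is only an example, chosen because it matches the roughly 40% overhead mentioned in result 9 (2 coding chunks per 5 data chunks, versus 200% for 3-way replication); pool names and PG counts are placeholders:

    # example EC profile: 5 data chunks + 2 coding chunks, spread across hosts
    ceph osd erasure-code-profile set ec-5-2 k=5 m=2 crush-failure-domain=host
    # EC data pool (RBD needs overwrites enabled on EC pools)
    ceph osd pool create ec-data 128 128 erasure ec-5-2
    ceph osd pool set ec-data allow_ec_overwrites true
    ceph osd pool application enable ec-data rbd
    # replicated pool to hold the RBD metadata/omap objects
    ceph osd pool create rbd-meta 64 64 replicated
    ceph osd pool application enable rbd-meta rbd

Images would then be created with the replicated pool as the image pool and the EC pool as the data pool, e.g. rbd create --pool rbd-meta --data-pool ec-data --size 10G vm-disk-test; whether a given Proxmox version will manage such images itself is exactly the open question in these threads.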
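
For the permission-denied backups in results 14-16, a short sketch of the two workarounds referenced in result 14. The container id 108, the /var/spool/postfix path and the root:root owner are purely illustrative; the actual offending files are whatever vzdump reports as owned by nobody:nogroup:

    # option 1: inside the container, give the nobody:nogroup files a real owner
    chown -R root:root /var/spool/postfix

    # option 2: on the host, back up from the command line and skip the problem path
    vzdump 108 --exclude-path '/var/spool/postfix/*' --storage local

Fixing the ownership is the cleaner route, since --exclude-path simply leaves those files out of the backup.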