Search results

  1. DynFi User

    [SOLVED] Weird disk usage stats and data

Yes: root@dc1-pbs01:~# du -shx / 1.2G /
  2. DynFi User

    [SOLVED] Weird disk usage stats and data

Launched a scrub to figure out what might be the issue: 0 errors. pool: rpool state: ONLINE scan: scrub repaired 0B in 00:01:54 with 0 errors on Wed Jun 23 10:32:42 2021 config: NAME STATE READ WRITE CKSUM rpool...
  3. DynFi User

    [SOLVED] Weird disk usage stats and data

I am pushing my tests with PBS and am now encountering a weird situation. I'll try to briefly describe the situation. I have two pools in my system: one for the system and one for the backups / data. root@dc1-pbs01:/mnt/datastore/backup/.chunks# zfs list NAME USED AVAIL...
  4. DynFi User

    Turning on ZFS compression on pool

Just a little question: my PBS is configured using ZFS, and compression has been left at the default, which is "on" and "local" (i.e. "lz4"). Should this be left at the default "on" value? Is there any benefit to using compression with PBS (= isn't PBS using its own compression - in...
  5. DynFi User

    [SOLVED] ZFS pool not mounted at startup

I already have this in the file: root=ZFS=rpool/ROOT/pbs-1 boot=zfs So I guess the new version should look like this: root=ZFS=rpool/ROOT/pbs-1 boot=zfs rootdelay=10 Can you please confirm this? And the fact that I should trigger proxmox-boot-tool refresh once to generate the right files...
  6. DynFi User

    [SOLVED] ZFS pool not mounted at startup

Here is the compressed file. Please use tar -xvzf to expand it. Thanks, and sorry for the mistake.
  7. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    Here is the output of journalctl -b
  8. DynFi User

    File level backup for 100.000.000 files

We have a large volume that we need to back up, which contains 100.000.000 files, with a ∆ / day of about 50.000 files (400 GB). For the time being this file system is mounted directly in PBS using the fuse kernel driver with mount -t ceph ip.srv.1,ip.srv.2,ip.srv.3,ip.srv.4:/ /mnt/mycephfs -o...
  9. DynFi User

    [SOLVED] ZFS pool not mounted at startup

We first installed the system on two M.2 NVMe drives and afterwards used the PBS GUI to configure the second pool with the 3.5" HDDs. Is there a way to get a follow-up on this one? Because every time we boot we have to manually mount the pool, which is really not OK.
  10. DynFi User

    [SOLVED] ZFS pool not mounted at startup

It is 18,000 lines long… I think I created it through the GUI.
  11. DynFi User

    [SOLVED] ZFS pool not mounted at startup

When we installed the system we configured the root pool. But since we didn't have the 3.5" disks at hand, we had to wait until all disks were received to configure / set up the second pool (backup pool). After boot and successful install of all updates and disks, the "backup" pool with...
  12. DynFi User

    [SOLVED] ZFS pool not mounted at startup

Any info on this one? I found this thread about the same issue with PVE, but the mount point config is not handled the same way in PVE and in Proxmox Backup Server. https://forum.proxmox.com/threads/zfs-pool-does-not-mount-at-boot-time.55732/
  13. DynFi User

    [SOLVED] ZFS pool not mounted at startup

We have a large PBS install with two pools: a system pool with 2x NVMe (2x 256 GB) and a backup pool with 13x HDD (13x 14 TB). The system pool always mounts correctly. But unfortunately, not the backup pool! Upon reboot the backup pool is always left unmounted. We have to manually mount it...
  14. DynFi User

    PBS to backup CephFS

I was thinking of mounting the CephFS directly in PBS using the kernel CephFS client. This would surely speed up the backup. My tests have shown 350 MB/s, which would be far better than the speed we currently get. What do you think of the idea?
  15. DynFi User

    PBS to backup CephFS

I am getting back to these threads: https://forum.proxmox.com/threads/cephfs-content-backed-up-in-proxmox-backup-server.84681/ https://forum.proxmox.com/threads/backup-ceph-fs.85040/ Because none of them has really been answered properly from my point of view. And none of them has been...
  16. DynFi User

    Best way to access CephFS from within VM (high perf)

@ph0x Thanks for your info. I am fighting a bit with FUSE access from the VM into CephFS. The command referenced in the documentation didn't seem to go through. I am having a hard time finding Proxmox documentation for these FUSE mounts. It seems like a taboo subject or a function...
  17. DynFi User

    Best way to access CephFS from within VM (high perf)

Thanks for your feedback. This is very appreciated. I'll dig further in this direction and run some tests. I have almost finished the 'NFS-Ganesha' setup, which is up and running. The nice thing about this is that it has no access to the Ceph public network from outside the hypervisor (which is much...
  18. DynFi User

    Best way to access CephFS from within VM (high perf)

We have a large 4-node cluster with about 419 TB split into two main pools, one for NVMe-based disks and another for SSDs. We plan to use the NVMe RBD to store our VMs and the other pool to store shared data. The shared data will be very voluminous, with over 100 million files. Beside...
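Several of the results above concern a ZFS backup pool that is left unmounted after reboot. A minimal troubleshooting sketch for that situation follows; the pool name `backup` is a hypothetical stand-in, and the commands assume a systemd-based install (as on PBS) where the `zfs-import-cache` service imports pools listed in the zpool cache file at boot:

```shell
# Check whether the pool was imported and its dataset mounted
zpool status backup                      # 'backup' is a hypothetical pool name
zfs get mounted,mountpoint backup

# A common cause is a stale or missing zpool cache file; pointing the pool
# at the standard cache file lets zfs-import-cache import it at boot
zpool set cachefile=/etc/zfs/zpool.cache backup

# Make sure the import and mount services run at boot
systemctl enable zfs-import-cache.service zfs-mount.service
```

This is only a sketch of one common cause, not a definitive fix; if the pool sits on disks that appear late (e.g. behind a slow HBA), a boot delay such as the `rootdelay=10` kernel parameter discussed in result 5 may be needed as well.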
