Search results

  1. DynFi User

    [SOLVED] Weird disk usage stats and data

    I have done what you suggested and I have found the data! You were right. In this thread I described my problem with ZFS not mounting my second pool (backup). So the initial datastore that I created had been set up on rpool and not on backup. Thanks to your command the...
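    As a side note, one way to double-check where a datastore actually lives is to compare the configured datastore path with the ZFS mountpoints. A minimal sketch, assuming a datastore and a pool both named "backup" (the real names come from your own setup):

        # List configured PBS datastores and their backing paths
        proxmox-backup-manager datastore list

        # Show where the ZFS dataset is mounted and whether it is mounted at all
        zfs get mountpoint,mounted backup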
  2. DynFi User

    [SOLVED] Weird disk usage stats and data

    Here it is:
    root@dc1-pbs01:~# zfs get all rpool/ROOT/pbs-1
    NAME              PROPERTY  VALUE                  SOURCE
    rpool/ROOT/pbs-1  type      filesystem             -
    rpool/ROOT/pbs-1  creation  Tue Jan 28 16:04 2020  -
    rpool/ROOT/pbs-1  used...
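    When chasing unexpected usage like this, a narrower query than zfs get all can show where the space goes. A minimal sketch, using the dataset from the post above:

        # Break down how the space of the root dataset is consumed
        zfs get used,referenced,usedbysnapshots,usedbychildren,usedbydataset rpool/ROOT/pbs-1

        # Per-dataset space accounting across the whole pool
        zfs list -o space -r rpool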
  3. DynFi User

    [SOLVED] Weird disk usage stats and data

    rpool history is also very clean:
    History for 'rpool':
    2020-01-28.16:04:55 zpool create -f -o cachefile=none -o ashift=12 rpool mirror /dev/disk/by-id/nvme-eui.00000000000000018ce38e010009f4e7-part3 /dev/disk/by-id/nvme-eui.00000000000000018ce38e010009f4e6-part3
    2020-01-28.16:04:55 zfs create...
  4. DynFi User

    [SOLVED] Weird disk usage stats and data

    Yes:
    root@dc1-pbs01:~# du -shx /
    1.2G    /
  5. DynFi User

    [SOLVED] Weird disk usage stats and data

    Launched a scrub to figure out what might be the issue: 0 errors.
      pool: rpool
     state: ONLINE
      scan: scrub repaired 0B in 00:01:54 with 0 errors on Wed Jun 23 10:32:42 2021
    config:
        NAME        STATE     READ WRITE CKSUM
        rpool...
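    For reference, the status above corresponds to a scrub kicked off and checked roughly like this (sketch, assuming the pool is rpool):

        zpool scrub rpool      # start the scrub in the background
        zpool status rpool     # watch progress and the READ/WRITE/CKSUM counters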
  6. DynFi User

    [SOLVED] Weird disk usage stats and data

    I am pushing my tests with PBS and am now encountering a weird situation. I'll try to describe the situation briefly. I have two pools in my system, one for the system and one for the backups / data.
    root@dc1-pbs01:/mnt/datastore/backup/.chunks# zfs list
    NAME   USED  AVAIL...
  7. DynFi User

    Turning on ZFS compression on pool

    Just a little question: my PBS is configured using ZFS and compression has been left at its default, which is "on" (source "local") and stands for "lz4". Should this be left at the default "on" value? Is there any benefit to using compression with PBS (= isn't PBS using its own compression - in...
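    For inspecting or pinning the property, a minimal sketch (assuming the pool is named rpool, as in the other posts):

        # Current compression setting and where it was set
        zfs get compression rpool

        # How much compression is actually achieving on existing data
        zfs get compressratio rpool

        # Pin lz4 explicitly instead of the generic "on" value, if desired
        zfs set compression=lz4 rpool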
  8. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    I already have this in the file:
    root=ZFS=rpool/ROOT/pbs-1 boot=zfs
    So I guess the new version should look like this:
    root=ZFS=rpool/ROOT/pbs-1 boot=zfs rootdelay=10
    Can you please confirm this? And that I should run proxmox-boot-tool refresh once to generate the right files...
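    As a sketch of the workflow being confirmed here, assuming the system boots through proxmox-boot-tool and the command line lives in /etc/kernel/cmdline:

        # /etc/kernel/cmdline (single line)
        root=ZFS=rpool/ROOT/pbs-1 boot=zfs rootdelay=10

        # Regenerate the boot entries so the new command line is picked up
        proxmox-boot-tool refresh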
  9. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    Here is the compressed file. Please use tar -xvzf to expand it. Thanks, and sorry for the mistake.
  10. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    Here is the output of journalctl -b
  11. DynFi User

    File level backup for 100.000.000 files

    We have a large volume that we need to back up which contains 100.000.000 files, with a ∆ / day of about 50.000 files (400 GB). For the time being this file system is mounted directly in PBS using the CephFS kernel driver with mount -t ceph ip.srv.1,ip.srv.2,ip.srv.3,ip.srv.4:/ /mnt/mycephfs -o...
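    The full kernel-client mount typically takes a form like the following sketch; the cephx user name and secret file below are placeholders, not values from the post:

        # Kernel CephFS mount with an explicit cephx user and keyring secret
        mount -t ceph ip.srv.1,ip.srv.2,ip.srv.3,ip.srv.4:/ /mnt/mycephfs \
            -o name=pbs-backup,secretfile=/etc/ceph/pbs-backup.secret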
  12. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    We first installed the system on two M.2 NVMe drives and then used the PBS GUI to configure the second pool with the 3.5" HDDs. Is there a way to get a follow-up on this one? Because every time we boot we have to manually mount the pool, which is really not OK.
  13. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    It is 18000 lines long… I think I created it through the GUI.
  14. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    When we installed the system we configured the root pool. But since we didn't have the 3.5" disks at hand, we had to wait until all the disks were received to configure / set up the second pool (backup pool). After boot and successful installation of all updates and disks, the "backup" pool with...
  15. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    Any info about this one? I have found this thread about the same issue with PVE, but the mount point config is not handled the same way in PVE and in Proxmox Backup Server. https://forum.proxmox.com/threads/zfs-pool-does-not-mount-at-boot-time.55732/
  16. DynFi User

    [SOLVED] ZFS pool not mounted at startup

    We have a large PBS install with two pools:
    - system pool with 2x NVMe (2x 256 GB)
    - backup pool with 13x HDD (13x 14 TB)
    The system pool is always mounted correctly. But unfortunately, not the backup pool! Upon reboot the backup pool is always left unmounted. We have to manually mount it...
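    A common way to narrow this down is to check the pool's cachefile and the ZFS import units that run at boot. A sketch, assuming the pool is named "backup" (whether zfs-import-cache or zfs-import-scan applies depends on the setup):

        # Is the pool recorded in the cachefile used at boot?
        zpool get cachefile backup
        zpool set cachefile=/etc/zfs/zpool.cache backup

        # Which import and mount units ran during boot?
        systemctl status zfs-import-cache.service zfs-import-scan.service zfs-mount.service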
  17. DynFi User

    PBS to backup CephFS

    I was thinking of mounting the CephFS directly in PBS using the kernel CephFS client. This would surely speed up the backup. My tests have shown 350 MB/s, which would be way better than the speed we currently get. What do you think of the idea?
  18. DynFi User

    PBS to backup CephFS

    I am getting back to these threads: https://forum.proxmox.com/threads/cephfs-content-backed-up-in-proxmox-backup-server.84681/ https://forum.proxmox.com/threads/backup-ceph-fs.85040/ Because none of them has really been answered correctly from my point of view. And none of them has been...
  19. DynFi User

    Best way to access CephFS from within VM (high perf)

    @ph0x Thanks for your info. I am fighting a bit with FUSE access from the VM into CephFS. The command referenced in the documentation didn't seem to work. I am having a hard time finding Proxmox documentation for these FUSE mounts. It seems like a taboo subject or a function...