Launched a scrub to figure out what might be the issue: 0 errors.
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:54 with 0 errors on Wed Jun 23 10:32:42 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool...
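For reference, a minimal sketch of the commands used to launch the scrub and check its result (pool name taken from the output above):

# Start a scrub on the root pool, then check its progress / result
zpool scrub rpool
zpool status rpool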
I am pushing further with my PBS tests and am now encountering a weird situation.
I'll try to describe it briefly.
I have two pools in my system: one for the system and one for the backups / data.
root@dc1-pbs01:/mnt/datastore/backup/.chunks# zfs list
NAME USED AVAIL...
Just a little question: my PBS is configured on ZFS, and compression has been left at its default, which is "on" (source "local"), i.e. "lz4".
Should this be left at the default "on" value?
Is there any interest in using compression with PBS (= isn't PBS using its own compression - in...
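If it helps, a hedged way to check what is actually in effect; the dataset names below are only examples, adjust them to your layout:

# Show the effective compression algorithm, its source, and the achieved ratio
zfs get compression,compressratio rpool/ROOT/pbs-1
# If ZFS-level compression turns out to be redundant with PBS's own zstd
# compression of chunks, it could be disabled on the datastore dataset only
zfs set compression=off backup/pbs-datastore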
I already have this in the file:
root=ZFS=rpool/ROOT/pbs-1 boot=zfs
So I guess the new version should look like this:
root=ZFS=rpool/ROOT/pbs-1 boot=zfs rootdelay=10
Can you please confirm this?
And can you confirm that I should then run proxmox-boot-tool refresh once to generate the right files...
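For completeness, a minimal sketch of the change, assuming the system boots via proxmox-boot-tool and the kernel command line lives in /etc/kernel/cmdline (please confirm the file path for your setup):

# /etc/kernel/cmdline must stay a single line; append rootdelay=10 at the end
echo "root=ZFS=rpool/ROOT/pbs-1 boot=zfs rootdelay=10" > /etc/kernel/cmdline
# Regenerate the boot entries on all configured ESPs
proxmox-boot-tool refresh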
We have a large volume that we need to back up, containing about 100,000,000 files, with a daily delta of about 50,000 files (400 GB).
For the time being this file system is mounted directly in PBS using the CephFS kernel driver with mount -t ceph ip.srv.1,ip.srv.2,ip.srv.3,ip.srv.4:/ /mnt/mycephfs -o...
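Since the mount options above are truncated, here is a hedged, self-contained example of such a kernel CephFS mount; the monitor addresses, client name and secret file are placeholders:

# Kernel CephFS mount with explicit credentials (all values are examples)
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/mycephfs \
    -o name=pbs,secretfile=/etc/ceph/pbs.secret,noatime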
We first installed the system on two M.2 NVMe drives and afterwards used the PBS GUI to configure the second pool with the 3.5" HDDs.
Is there a way to get a follow-up on this one?
Because every time we boot we have to mount the pool manually, which is really not OK.
When we installed the system we configured the root pool. But since we didn't have the 3.5" disks at hand, we had to wait until all disks were received to set up the second pool (backup pool).
After boot and successful installation of all updates and disks, the "backup" pool with...
Any info on this one?
I have found this thread about the same issue with PVE, but mount-point configuration is not handled the same way in PVE and in Proxmox Backup.
https://forum.proxmox.com/threads/zfs-pool-does-not-mount-at-boot-time.55732/
We have a large PBS install with two pools:
system pool with 2x NVMe (2x 256 GB)
backup pool with 13x HDD (13x 14 TB)
The system pool always mounts fine.
But unfortunately, not the backup pool!
Upon reboot the backup pool is always left unmounted.
We have to mount it manually...
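A hedged sketch of the usual workaround for a data pool that is not imported at boot; the pool name "backup" is an assumption taken from this post:

# Record the pool in the ZFS cache file so zfs-import-cache picks it up at boot
zpool set cachefile=/etc/zfs/zpool.cache backup
# Make sure the import and mount units are enabled
systemctl enable zfs-import-cache.service zfs-mount.service
# Refresh the initramfs so the updated cache file is included
update-initramfs -u -k all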
I was thinking of mounting CephFS directly in PBS using the kernel CephFS client.
This would surely speed up the backup. My tests showed about 350 MB/s, which would be way better than the speed we get at the moment.
What do you think of the idea?
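If the kernel client route is taken, a hedged example of making the mount persistent via /etc/fstab; the monitor addresses, client name and secret file are placeholders:

# /etc/fstab - CephFS kernel mount, deferred until the network is up
10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/mycephfs  ceph  name=pbs,secretfile=/etc/ceph/pbs.secret,noatime,_netdev  0  0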
I am getting back to these threads:
https://forum.proxmox.com/threads/cephfs-content-backed-up-in-proxmox-backup-server.84681/
https://forum.proxmox.com/threads/backup-ceph-fs.85040/
Because none of them has really been answered properly, from my point of view.
And none of them has been...
@ph0x Thanks for your info.
I am fighting a bit with FUSE access to CephFS from a VM.
The command referenced in the documentation didn't seem to work.
I am having a hard time finding Proxmox documentation on these FUSE mounts.
It seems like a taboo subject or a function...
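In case it is useful, a hedged sketch of a ceph-fuse mount from inside a VM; it assumes the VM can reach the Ceph public network and has a keyring for the client, and the client id, monitor address and mount point are placeholders:

# Install the FUSE client and mount CephFS as client.pbs (example id)
apt install ceph-fuse
ceph-fuse --id pbs -m 10.0.0.1:6789 /mnt/mycephfs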
Thanks for your feedback. It is very much appreciated.
I'll dig further in this direction and run some tests.
I have almost finished the NFS-Ganesha setup, which is up and running.
The nice thing about this is that it has no access to the Ceph public network from outside the hypervisor (which is much...
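For reference, a hedged sketch of what a minimal CephFS-backed export in ganesha.conf could look like; the export ID, pseudo path, Ceph user and client network are placeholders:

# /etc/ganesha/ganesha.conf - minimal CephFS export (values are examples)
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "pbs";
    }
    CLIENT {
        Clients = 192.168.10.0/24;
    }
}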
We have a large 4-node cluster with about 419 TB split into two main pools, one for NVMe-based disks and another one for SSDs.
We are planning to use the NVMe RBD pool to store our VMs and the other pool to store shared data.
The shared data will be very voluminous, with 100+ million files.
Beside...