We just moved our biggest Windows fileserver from VMware to Proxmox ..
Before moving 20+ TB we discovered that we should not use ReFS, as it would cause problems restoring files from it.
So we built an NTFS volume for it with a 16 KB cluster size, to get past the standard 16 TB volume limit that applies at the 4 KB cluster size.
There are many postings about this in the Google search results, but no clear solution.
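Those two figures follow from NTFS addressing roughly 2^32 clusters per volume, so the maximum volume size scales with the cluster size. A quick sanity check in shell (the 2^32 cluster count is the standard NTFS limit; the exact maxima are a few clusters less):

```shell
# Max NTFS volume size ~= cluster_size_bytes * 2^32 clusters.
clusters=$((1 << 32))
tib=$((1 << 40))
echo "4K clusters:  $((4096  * clusters / tib)) TiB"   # 16 TiB
echo "16K clusters: $((16384 * clusters / tib)) TiB"   # 64 TiB
```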
It happens on most agents. I don't know whether it happens after the virtual migration(s).
systemctl restart qemu-guest-agent fixes it, and now I'm thinking of getting this scheduled on all our VMs whenever the agent is "dead".
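A minimal sketch of such a scheduled check, assuming the watchdog runs inside each VM from cron (the function name, script path, and interval below are my own invention, not anything Proxmox ships):

```shell
#!/bin/sh
# restart_if_dead: restart a systemd service when systemd reports it inactive.
restart_if_dead() {
    svc="$1"
    if ! systemctl is-active --quiet "$svc"; then
        systemctl restart "$svc"
    fi
}

# Example use, e.g. from a hypothetical /etc/cron.d entry:
# */5 * * * * root /usr/local/sbin/qga-watchdog.sh
# where the script simply calls:  restart_if_dead qemu-guest-agent
```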
What could be the...
Just powered on one of our standby machines to verify the controller.
I've been looking earlier for this feature.
Now the onboard p440ar acts as an HBA and all devices are passed through to the OS (like an old SCSI controller would have done).
/dev/sda, /dev/sdb, etc., without any further creation of...
No NFS, iSCSI... alright... I'm still old-fashioned, I guess...
By native RBD, do you mean creating RBD volumes using the cephadm CLI and mounting them through a client on the various k8s VMs?
In that case, can I also create an RGW S3 configuration manually on 2 physical hosts? Or do you think it would be wise to make...
So if I get it right: using the Ceph dashboard, however tempting it may seem, can break the existing Ceph environment?
So let's say I want to use the same hardware for storing S3 data through RGW, or for NFS-Ganesha; am I better off doing this manually?
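For reference, the "native RBD" route in the manual sense might look like the sketch below, run from a client VM with Ceph credentials already installed. The pool and image names are hypothetical, and RUN defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry-run by default; set RUN="" to actually execute the commands.
RUN="${RUN:-echo}"

# Create a 10 GiB RBD image in a (hypothetical) pool named k8s.
$RUN rbd create k8s/pv-example --size 10G

# Map it on the client; rbd map prints the block device it creates,
# typically /dev/rbd0 on a fresh client.
$RUN rbd map k8s/pv-example

# Put a filesystem on it and mount it (device path assumes first mapping).
$RUN mkfs.ext4 /dev/rbd0
$RUN mount /dev/rbd0 /mnt/pv-example
```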
Does someone have a few pointers about this setup?
I want to install an HA object store and eventually some NFS volumes, using our Ceph installation, for use by our internal k8s cluster.
Is it better to virtualize this, to avoid upgrade issues on the physical hosts?
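With a cephadm-managed cluster, the orchestrator can deploy both pieces; a hedged sketch (the service IDs and placement counts below are made up, and RUN again defaults to printing the commands rather than executing them):

```shell
#!/bin/sh
# Dry-run by default; set RUN="" to execute against a real cluster.
RUN="${RUN:-echo}"

# Deploy an RGW (S3) service on two hosts for HA.
$RUN ceph orch apply rgw s3main --placement=2

# Create a Ganesha-based NFS cluster with two daemons.
$RUN ceph nfs cluster create k8snfs 2
```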
Hmm... actually, after a resilver (with the same disk on the same port) the RAIDZ2 does seem to function for a couple of weeks.
Like now: it has worked again (flawlessly) for a week, but I'm sure it will turn up again.
I'm running PBS 2.2-5 ...
root@pbsu01:~# cat /sys/module/zfs/version