@janos: No RAID card, as Ceph is recommended to be configured only with HBA controllers.
If you are not using an HP HBA (or a RAID card in HBA mode), the LED indicators will not work, and the mentioned tools will also not be able to detect hardware issues.
iSCSI is block storage; for storing backups you can only use file-level storage, for example NFS or CIFS/Samba: https://pve.proxmox.com/wiki/Storage#_storage_types
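As a rough sketch, an NFS share can be added as backup storage with pvesm; the storage name, server address and export path below are just made-up examples:

# add an NFS share as file-level backup storage (names/paths are examples)
pvesm add nfs backup-nfs --server 192.168.1.50 --export /srv/backups --content backup
# check that the new storage is active
pvesm status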
Hi,
We have a VMware cluster, and this technology (at least with VMware) works well over a 10 Gbit or 1 Gbit (optical) network. Yes, of course, a 40 Gbit or 100 Gbit network would be better, but better equipment only gives better performance; it is not a requirement.
Or, as an alternative, you can save Ceph RBD images directly from the Ceph cluster to your backup storage. You can find many scripts for this purpose, for example: https://github.com/magusnebula/ceph_backup_script
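The basic idea behind those scripts is snapshot-and-export; a minimal sketch with the rbd tool (pool, image name and target path are made-up examples):

# take a point-in-time snapshot of the image
rbd snap create rbd/vm-100-disk-0@backup-20240101
# export the snapshot to the file-level backup storage
rbd export rbd/vm-100-disk-0@backup-20240101 /mnt/backup/vm-100-disk-0-20240101.img
# remove the snapshot afterwards
rbd snap rm rbd/vm-100-disk-0@backup-20240101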
It is not the 10 Gbit that matters; the media matters.
10 or 1 Gbit with SFP over optical cable has much lower latency than a copper-based link, even if that copper link is 10 Gbit. (It is not because of the media itself; the difference is in how SFP and regular RJ45 interfaces work. An SFP DAC cable is also better than regular...
Hi,
You can speed up recovery, but your normal I/O will be slow while it runs. If you want faster recovery, increase the number of concurrent recovery operations:
ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
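Remember to set these back to their defaults once recovery has finished; on recent Ceph releases the defaults are typically 1 and 3, but check your own version before assuming that:

# restore the (assumed) default values after recovery completes
ceph tell 'osd.*' injectargs '--osd-max-backfills 1'
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 3'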
Now I have updated one node to the latest version, tried to migrate from an older node to the upgraded one, and got the same error. But I do not see the reason, nor any solution or even a workaround.