Hi,
First things first: I know it is not recommended to use RAID 0 disks underneath Ceph, but that's what I did on 4 Dell R430 servers with PERC 730 controllers (each with 6 x 15k SAS drives). I get pretty decent performance with them and have had absolutely no issues for the last 2 years. With full-SSD nodes I don't use RAID 0, of course, but that's not the point.
I just built a 3-node v5.4 cluster (in VirtualBox with nested virtualization, thanks to AMD) to check whether switching from 5.4 to 6, and then from Luminous to Nautilus, works with roughly what we usually run in production (namely Ceph, NFS, ZFS pools, etc.). The result is OK.
Now with v6, the GUI OSD tab explicitly says to avoid RAID controllers. I'm wondering whether there is something new or different in how Nautilus handles OSDs, or whether I can keep using my RAID 0 OSDs after the migration.
Thanks in advance!
Antoine