Ceph OSD disconnecting

vitor costa

My Ceph cluster on Proxmox 5.2 shows very frequent errors in the logs.
I upgraded to 5.3 and the same still happens.

The logs show entries like these:

2018-12-22 06:55:01.142920 osd.1 osd.1 192.168.0.200:6804/2869 427 : cluster [ERR] 1.52 shard 1: soid 1:4a2819a2:::rbd_data.55156b8b4567.0000000000015ac0:head candidate had a read error
2018-12-22 06:56:34.470814 osd.1 osd.1 192.168.0.200:6804/2869 428 : cluster [ERR] 1.52 deep-scrub 0 missing, 1 inconsistent objects
2018-12-22 06:56:34.470817 osd.1 osd.1 192.168.0.200:6804/2869 429 : cluster [ERR] 1.52 deep-scrub 1 errors
2018-12-22 06:56:38.394385 mon.pm1-leader mon.0 192.168.0.200:6789/0 47408 : cluster [ERR] Health check failed: 1 scrub errors (OSD_SCRUB_ERRORS)
2018-12-22 06:56:38.394420 mon.pm1-leader mon.0 192.168.0.200:6789/0 47409 : cluster [ERR] Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2018-12-22 07:00:00.000107 mon.pm1-leader mon.0 192.168.0.200:6789/0 47471 : cluster [ERR] overall HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
2018-12-22 07:00:05.476409 mon.pm1-leader mon.0 192.168.0.200:6789/0 47474 : cluster [ERR] Health check update: Possible data damage: 1 pg inconsistent, 1 pg repair (PG_DAMAGED)
2018-12-22 07:01:33.306234 mon.pm1-leader mon.0 192.168.0.200:6789/0 47504 : cluster [INF] Health check cleared: OSD_SCRUB_ERRORS (was: 1 scrub errors)
2018-12-22 07:01:33.306270 mon.pm1-leader mon.0 192.168.0.200:6789/0 47505 : cluster [INF] Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent, 1 pg repair)
2018-12-22 07:01:33.306279 mon.pm1-leader mon.0 192.168.0.200:6789/0 47506 : cluster [INF] Cluster is now healthy
2018-12-22 08:00:00.000115 mon.pm1-leader mon.0 192.168.0.200:6789/0 48617 : cluster [INF] overall HEALTH_OK
2018-12-22 09:00:00.000119 mon.pm1-leader mon.0 192.168.0.200:6789/0 49735 : cluster [INF] overall HEALTH_OK
2018-12-22 10:00:00.000085 mon.pm1-leader mon.0 192.168.0.200:6789/0 50855 : cluster [INF] overall HEALTH_OK
2018-12-22 10:50:31.580844 mon.pm1-leader mon.0 192.168.0.200:6789/0 51734 : cluster [INF] osd.2 marked itself down
2018-12-22 10:50:31.594609 mon.pm1-leader mon.0 192.168.0.200:6789/0 51735 : cluster [INF] osd.3 marked itself down
2018-12-22 10:50:32.408743 mon.pm1-leader mon.0 192.168.0.200:6789/0 51736 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)
2018-12-22 10:50:32.415037 mon.pm1-leader mon.0 192.168.0.200:6789/0 51737 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-12-22 10:51:04.098090 mon.pm1-leader mon.0 192.168.0.200:6789/0 51744 : cluster [INF] Manager daemon pm2-leader is unresponsive, replacing it with standby daemon pm1-leader
2018-12-22 10:51:06.279761 mon.pm1-leader mon.0 192.168.0.200:6789/0 51749 : cluster [INF] Manager daemon pm1-leader is now available
2018-12-22 10:51:07.901994 mon.pm1-leader mon.0 192.168.0.200:6789/0 51751 : cluster [WRN] Health check failed: Degraded data redundancy: 188467/376934 objects degraded (50.000%), 192 pgs degraded (PG_DEGRADED)
2018-12-22 10:51:35.451328 mon.pm1-leader mon.0 192.168.0.200:6789/0 51755 : cluster [WRN] Health check update: Degraded data redundancy: 188467/376934 objects degraded (50.000%), 192 pgs degraded, 148 pgs undersized (PG_DEGRADED)
2018-12-22 10:51:44.160460 mon.pm1-leader mon.0 192.168.0.200:6789/0 51757 : cluster [WRN] Health check update: Degraded data redundancy: 188467/376934 objects degraded (50.000%), 192 pgs degraded, 192 pgs undersized (PG_DEGRADED)
2018-12-22 10:55:00.260740 mon.pm2-leader mon.1 192.168.0.201:6789/0 1 : cluster [INF] mon.pm2-leader calling monitor election
2018-12-22 10:55:00.761941 mon.pm1-leader mon.0 192.168.0.200:6789/0 51760 : cluster [INF] mon.pm1-leader calling monitor election
2018-12-22 10:55:00.828629 mon.pm1-leader mon.0 192.168.0.200:6789/0 51761 : cluster [INF] mon.pm1-leader is new leader, mons pm1-leader,pm2-leader in quorum (ranks 0,1)
2018-12-22 10:55:00.977828 mon.pm2-leader mon.1 192.168.0.201:6789/0 2 : cluster [WRN] message from mon.0 was stamped 0.058040s in the future, clocks not synchronized
2018-12-22 10:55:01.036188 mon.pm1-leader mon.0 192.168.0.200:6789/0 51766 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 188467/376934 objects degraded (50.000%), 192 pgs degraded, 192 pgs undersized
2018-12-22 10:55:06.274157 mon.pm2-leader mon.1 192.168.0.201:6789/0 3 : cluster [WRN] message from mon.0 was stamped 0.055061s in the future, clocks not synchronized
2018-12-22 10:55:11.191735 mon.pm1-leader mon.0 192.168.0.200:6789/0 51774 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-12-22 10:55:11.191766 mon.pm1-leader mon.0 192.168.0.200:6789/0 51775 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-12-22 10:55:11.330820 mon.pm1-leader mon.0 192.168.0.200:6789/0 51776 : cluster [INF] osd.2 192.168.0.201:6804/3344 boot
2018-12-22 10:55:12.500533 mon.pm1-leader mon.0 192.168.0.200:6789/0 51779 : cluster [WRN] Health check failed: Reduced data availability: 17 pgs peering (PG_AVAILABILITY)
2018-12-22 10:55:12.500566 mon.pm1-leader mon.0 192.168.0.200:6789/0 51780 : cluster [WRN] Health check update: Degraded data redundancy: 170903/376934 objects degraded (45.340%), 175 pgs degraded, 175 pgs undersized (PG_DEGRADED)
2018-12-22 10:55:18.354301 mon.pm1-leader mon.0 192.168.0.200:6789/0 51782 : cluster [WRN] Health check update: Degraded data redundancy: 148132/376934 objects degraded (39.299%), 155 pgs degraded, 151 pgs undersized (PG_DEGRADED)
2018-12-22 10:55:18.354334 mon.pm1-leader mon.0 192.168.0.200:6789/0 51783 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 17 pgs peering)
2018-12-22 10:55:28.455628 mon.pm1-leader mon.0 192.168.0.200:6789/0 51786 : cluster [WRN] Health check update: Degraded data redundancy: 148127/376934 objects degraded (39.298%), 152 pgs degraded, 151 pgs undersized (PG_DEGRADED)
2018-12-22 10:55:30.773574 mon.pm1-leader mon.0 192.168.0.200:6789/0 51789 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-12-22 10:55:30.981764 mon.pm1-leader mon.0 192.168.0.200:6789/0 51790 : cluster [INF] osd.3 192.168.0.201:6800/3063 boot
2018-12-22 10:55:31.879698 mon.pm2-leader mon.1 192.168.0.201:6789/0 9 : cluster [WRN] message from mon.0 was stamped 0.060782s in the future, clocks not synchronized
2018-12-22 10:55:34.167898 mon.pm1-leader mon.0 192.168.0.200:6789/0 51795 : cluster [WRN] Health check update: Degraded data redundancy: 67197/376934 objects degraded (17.827%), 69 pgs degraded, 69 pgs undersized (PG_DEGRADED)
2018-12-22 10:55:36.604743 mon.pm1-leader mon.0 192.168.0.200:6789/0 51796 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 67197/376934 objects degraded (17.827%), 69 pgs degraded, 69 pgs undersized)
2018-12-22 10:55:36.604781 mon.pm1-leader mon.0 192.168.0.200:6789/0 51797 : cluster [INF] Cluster is now healthy
2018-12-22 10:55:38.624808 mon.pm1-leader mon.0 192.168.0.200:6789/0 51798 : cluster [WRN] Health check failed: Degraded data redundancy: 9/376934 objects degraded (0.002%), 8 pgs degraded (PG_DEGRADED)
2018-12-22 10:55:43.335234 mon.pm1-leader mon.0 192.168.0.200:6789/0 51800 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 9/376934 objects degraded (0.002%), 8 pgs degraded)
2018-12-22 10:55:43.335254 mon.pm1-leader mon.0 192.168.0.200:6789/0 51801 : cluster [INF] Cluster is now healthy
2018-12-22 11:00:00.000108 mon.pm1-leader mon.0 192.168.0.200:6789/0 51825 : cluster [INF] overall HEALTH_OK
2018-12-22 11:10:36.355235 mon.pm1-leader mon.0 192.168.0.200:6789/0 51889 : cluster [INF] osd.0 marked itself down
2018-12-22 11:10:36.356578 mon.pm1-leader mon.0 192.168.0.200:6789/0 51890 : cluster [INF] osd.1 marked itself down
2018-12-22 11:10:36.597274 mon.pm1-leader mon.0 192.168.0.200:6789/0 51891 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)
2018-12-22 11:10:36.597296 mon.pm1-leader mon.0 192.168.0.200:6789/0 51892 : cluster [WRN] Health check failed: 1 host (2 osds) down (OSD_HOST_DOWN)
2018-12-22 11:11:06.333719 mon.pm1-leader mon.0 192.168.0.200:6789/0 51895 : cluster [INF] Manager daemon pm1-leader is unresponsive, replacing it with standby daemon pm2-leader
2018-12-22 11:11:06.690209 mon.pm1-leader mon.0 192.168.0.200:6789/0 51901 : cluster [INF] Manager daemon pm2-leader is now available
2018-12-22 11:11:08.795402 mon.pm1-leader mon.0 192.168.0.200:6789/0 51903 : cluster [WRN] Health check failed: Degraded data redundancy: 188468/376936 objects degraded (50.000%), 192 pgs degraded (PG_DEGRADED)
2018-12-22 11:11:38.717293 mon.pm1-leader mon.0 192.168.0.200:6789/0 51904 : cluster [WRN] Health check update: Degraded data redundancy: 188468/376936 objects degraded (50.000%), 192 pgs degraded, 192 pgs undersized (PG_DEGRADED)
2018-12-22 11:14:16.610151 mon.pm1-leader mon.0 192.168.0.200:6789/0 1 : cluster [INF] mon.pm1-leader calling monitor election
2018-12-22 11:14:21.874912 mon.pm1-leader mon.0 192.168.0.200:6789/0 2 : cluster [INF] mon.pm1-leader is new leader, mons pm1-leader,pm2-leader in quorum (ranks 0,1)
2018-12-22 11:14:17.112863 mon.pm2-leader mon.1 192.168.0.201:6789/0 106 : cluster [INF] mon.pm2-leader calling monitor election
2018-12-22 11:14:21.919424 mon.pm1-leader mon.0 192.168.0.200:6789/0 7 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; Degraded data redundancy: 188468/376936 objects degraded (50.000%), 192 pgs degraded, 192 pgs undersized
2018-12-22 11:14:21.981933 mon.pm1-leader mon.0 192.168.0.200:6789/0 8 : cluster [WRN] mon.1 192.168.0.201:6789/0 clock skew 0.16522s > max 0.05s
2018-12-22 11:14:21.982009 mon.pm1-leader mon.0 192.168.0.200:6789/0 9 : cluster [WRN] message from mon.1 was stamped 0.247833s in the future, clocks not synchronized
2018-12-22 11:14:26.664989 mon.pm1-leader mon.0 192.168.0.200:6789/0 15 : cluster [INF] Manager daemon pm2-leader is unresponsive. No standby daemons available.
2018-12-22 11:14:26.665047 mon.pm1-leader mon.0 192.168.0.200:6789/0 16 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)
2018-12-22 11:14:26.665168 mon.pm1-leader mon.0 192.168.0.200:6789/0 17 : cluster [WRN] Health check failed: clock skew detected on mon.pm2-leader (MON_CLOCK_SKEW)
2018-12-22 11:14:27.073264 mon.pm1-leader mon.0 192.168.0.200:6789/0 19 : cluster [WRN] message from mon.1 was stamped 0.310403s in the future, clocks not synchronized
2018-12-22 11:14:28.077544 mon.pm1-leader mon.0 192.168.0.200:6789/0 22 : cluster [WRN] Health check update: 1 osds down (OSD_DOWN)
2018-12-22 11:14:28.077582 mon.pm1-leader mon.0 192.168.0.200:6789/0 23 : cluster [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (2 osds) down)
2018-12-22 11:14:28.219188 mon.pm1-leader mon.0 192.168.0.200:6789/0 24 : cluster [INF] Activating manager daemon pm1-leader
2018-12-22 11:14:28.277414 mon.pm1-leader mon.0 192.168.0.200:6789/0 25 : cluster [INF] osd.0 192.168.0.200:6800/2995 boot
2018-12-22 11:14:29.177449 mon.pm1-leader mon.0 192.168.0.200:6789/0 28 : cluster [INF] Health check cleared: MGR_DOWN (was: no active mgr)
2018-12-22 11:14:29.383001 mon.pm1-leader mon.0 192.168.0.200:6789/0 34 : cluster [INF] Manager daemon pm1-leader is now available
2018-12-22 11:14:31.453229 mon.pm1-leader mon.0 192.168.0.200:6789/0 38 : cluster [WRN] Health check update: Degraded data redundancy: 141865/376936 objects degraded (37.636%), 175 pgs degraded, 148 pgs undersized (PG_DEGRADED)
2018-12-22 11:14:36.554639 mon.pm1-leader mon.0 192.168.0.200:6789/0 41 : cluster [WRN] Health check update: Degraded data redundancy: 141853/376936 objects degraded (37.633%), 166 pgs degraded, 148 pgs undersized (PG_DEGRADED)
2018-12-22 11:14:43.165163 mon.pm1-leader mon.0 192.168.0.200:6789/0 45 : cluster [WRN] Health check update: Degraded data redundancy: 141843/376936 objects degraded (37.631%), 161 pgs degraded, 148 pgs undersized (PG_DEGRADED)
2018-12-22 11:14:46.708835 mon.pm1-leader mon.0 192.168.0.200:6789/0 51 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-12-22 11:14:46.866861 mon.pm1-leader mon.0 192.168.0.200:6789/0 52 : cluster [INF] osd.1 192.168.0.200:6804/3112 boot
2018-12-22 11:14:51.666014 mon.pm1-leader mon.0 192.168.0.200:6789/0 56 : cluster [WRN] Health check update: Degraded data redundancy: 141833/376936 objects degraded (37.628%), 155 pgs degraded, 148 pgs undersized (PG_DEGRADED)
2018-12-22 11:14:51.982674 mon.pm1-leader mon.0 192.168.0.200:6789/0 57 : cluster [WRN] mon.1 192.168.0.201:6789/0 clock skew 0.309674s > max 0.05s
2018-12-22 11:14:52.945950 mon.pm1-leader mon.0 192.168.0.200:6789/0 58 : cluster [WRN] message from mon.1 was stamped 0.310236s in the future, clocks not synchronized
2018-12-22 11:14:56.666211 mon.pm1-leader mon.0 192.168.0.200:6789/0 61 : cluster [WRN] Health check update: Degraded data redundancy: 137/376936 objects degraded (0.036%), 51 pgs degraded (PG_DEGRADED)
2018-12-22 11:15:01.682818 mon.pm1-leader mon.0 192.168.0.200:6789/0 62 : cluster [WRN] Health check update: Degraded data redundancy: 84/376936 objects degraded (0.022%), 58 pgs degraded (PG_DEGRADED)
2018-12-22 11:15:06.683056 mon.pm1-leader mon.0 192.168.0.200:6789/0 65 : cluster [WRN] Health check update: Degraded data redundancy: 70/376936 objects degraded (0.019%), 50 pgs degraded (PG_DEGRADED)
2018-12-22 11:15:11.683267 mon.pm1-leader mon.0 192.168.0.200:6789/0 69 : cluster [WRN] Health check update: Degraded data redundancy: 31/376936 objects degraded (0.008%), 20 pgs degraded (PG_DEGRADED)
2018-12-22 11:15:16.683495 mon.pm1-leader mon.0 192.168.0.200:6789/0 73 : cluster [WRN] Health check update: Degraded data redundancy: 19/376936 objects degraded (0.005%), 12 pgs degraded (PG_DEGRADED)
2018-12-22 11:15:20.749728 mon.pm1-leader mon.0 192.168.0.200:6789/0 75 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/376936 objects degraded (0.000%), 2 pgs degraded)
2018-12-22 11:15:56.685046 mon.pm1-leader mon.0 192.168.0.200:6789/0 88 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pm2-leader)
2018-12-22 11:15:56.685086 mon.pm1-leader mon.0 192.168.0.200:6789/0 89 : cluster [INF] Cluster is now healthy
2018-12-22 12:00:00.000124 mon.pm1-leader mon.0 192.168.0.200:6789/0 359 : cluster [INF] overall HEALTH_OK
2018-12-22 13:00:00.000122 mon.pm1-leader mon.0 192.168.0.200:6789/0 726 : cluster [INF] overall HEALTH_OK
2018-12-22 14:00:00.000120 mon.pm1-leader mon.0 192.168.0.200:6789/0 1091 : cluster [INF] overall HEALTH_OK
2018-12-22 15:00:00.000091 mon.pm1-leader mon.0 192.168.0.200:6789/0 1467 : cluster [INF] overall HEALTH_OK
2018-12-22 16:00:00.000121 mon.pm1-leader mon.0 192.168.0.200:6789/0 1840 : cluster [INF] overall HEALTH_OK
2018-12-22 17:00:00.000103 mon.pm1-leader mon.0 192.168.0.200:6789/0 2203 : cluster [INF] overall HEALTH_OK
2018-12-22 18:00:00.000116 mon.pm1-leader mon.0 192.168.0.200:6789/0 2571 : cluster [INF] overall HEALTH_OK
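
As a side note, an inconsistent PG like the 1.52 in the log above can normally be inspected and repaired with the standard Ceph commands (the PG id is taken from the log; the device name is only a placeholder):

# show which PGs are flagged inconsistent
ceph health detail
# list the inconsistent objects inside that PG
rados list-inconsistent-obj 1.52 --format=json-pretty
# trigger a repair from the healthy replica
ceph pg repair 1.52
# a read error during deep-scrub often points to a failing disk, so the SMART data is worth a look
smartctl -a /dev/sdX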
 
Hey,

could you give us more information? You have only posted some log entries, but no information about your hardware, software versions, configs, etc.
 
They are 2 Dell R230 servers with R430 RAID controllers and RAID 1 SATA disks.

pveversion -v shows:

proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
pve-manager: 5.3-6 (running version: 5.3-6/37b3c8df)
pve-kernel-4.15: 5.2-12
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.10-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-34
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1

Cluster communication uses the onboard Broadcom NetXtreme BCM5720 Gigabit Ethernet PCIe NICs.

The switch is a 16-port 1 Gb TP-Link smart switch (we replaced it a few days ago because we believed the problem could be the old 1 Gb switch, but it seems that is not the case).
 
I think you have several problems here. First, it seems you are using a RAID controller for your OSDs; second, you use a RAID 1 volume as an OSD (as far as I can see); third, you only have two nodes.

Please correct me if I am wrong.

From my point of view, you will have to change quite a few things to resolve such problems. Please read the Ceph wiki article from Proxmox.
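
To verify that, a few read-only commands are usually enough (the pool name "rbd" below is only an example, use whatever pool your VMs are on):

# cluster state and OSD layout per host
ceph -s
ceph osd tree
# replica settings of the pool
ceph osd pool get rbd size
ceph osd pool get rbd min_size
# how the disks are presented to the OS (RAID volumes vs. plain disks)
lsblk -o NAME,SIZE,MODEL,ROTA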
 
For sure the RAID and the two nodes are a problem. But why do the OSDs disconnect and reconnect all the time? The RAID should mostly affect read/write performance; here the OSD daemons just stop responding for a few moments and then come back again...
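
To narrow down why the daemons stop responding, the OSD and kernel logs around those timestamps are probably the place to look (the OSD id and time range below are just taken from the log above as an example):

# systemd journal of the affected OSD around the flap
journalctl -u ceph-osd@2 --since "2018-12-22 10:45" --until "2018-12-22 11:00"
# Ceph's own OSD log, searching for heartbeat / slow request messages
grep -iE "heartbeat|slow request|suicide" /var/log/ceph/ceph-osd.2.log
# kernel messages for disk or controller resets and link flaps
dmesg -T | grep -iE "reset|error|link"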
 
Ceph isn't designed to run behind a RAID controller; Ceph provides its own redundancy. You lose performance and add an additional layer in between, which increases latency.
I think you are using the H330 and not the H430; the H330 has a queue depth of 256, which is not as low as some other Dell controllers, but still not very high.

To solve such problems, the first step is to change the setup to meet Ceph's basic requirements. Then you will see whether a correct setup solves your problem or not.
 
You are correct about the H330 (it is the RAID controller for the R230).
OK, let's try to remove the RAID layer... The next question is: how do I do that?
- All disks are connected to the H330; to make them visible to the OS, a VD has to be created (I suppose one disk per VD using RAID 0)
- After that I will end up with 4 VDs: 2 VDs of 1 TB and 2 VDs of 3 TB (one of each per server)
- After adding all VDs to Ceph, Ceph will see nearly 8 TB of space (4 TB per server). How will Ceph create 2 copies of the data per server (to replicate the RAID 1 redundancy)?
 
First you need to flash the controller from IR to IT mode, then you will have an HBA. But you will need a different device (SD card, USB stick, PXE boot, etc.) for the operating system. In IT mode all disks are directly exposed to the OS, so you don't need to create a VD (which would not really be different from a RAID anyway, because the additional layer would still exist).

To create the redundancy, you have to adjust the CRUSH map, which decides where each replica is stored. If you have a pool with replica 3 and the CRUSH map is configured not to store two replicas of the same PG on the same server, then with only two servers Ceph cannot reach a healthy state. For this setup you need a replica count of 2. Ceph will then distribute the data across both servers, and every server holds a copy of all the data. But at that point you cannot store more than around 40% per server: if an OSD fails, Ceph needs to rebalance its data onto the remaining OSDs to get back to a healthy state.

Normally it is better to use more servers with more bays. 3 - 6 OSDs per server and a minimum of 3 servers should be good for a start.
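
A minimal sketch of that on the command line, assuming the default replicated_rule (which already separates replicas by host) and a pool called "rbd":

# keep 2 copies of every object, one per host
ceph osd pool set rbd size 2
# confirm the CRUSH rule distributes over hosts and not over single OSDs
ceph osd crush rule dump replicated_rule
# watch the rebalance until the cluster is healthy again
ceph -w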
 
Almost everything is clear (stick for the OS... CRUSH map and so on...). But I'm not clear about "flash the Controller from IR to IT Mode": do you mean I need to change the controller firmware, or is it just a setting on the controller?
 
do you mean I need to change the controller firmware
Exactly. Most Dell controllers are just an LSI controller/chip with a different layout or feature set. You can reflash such a controller to a "plain" LSI controller, and there you can switch between IR (Integrated RAID) and IT (Initiator Target) mode via the firmware. You will not be able to use the Dell firmware afterwards until you reflash the controller back to the Dell brand, so you can always flash it back to Dell, no worries :)

In IT mode the controller no longer gives you the ability to create a RAID 1 or anything else. Normally you do not need to flash the controller BIOS at all; it only costs boot time, and an HBA is basically a dumb controller that just connects the hard drives from the backplane to the OS.
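
Just as a rough pointer (the exact flashing procedure depends on the chip generation, so treat this as an assumption to verify): once the controller runs the LSI IT firmware, the disks show up as plain devices and can be added as OSDs directly.

# identify the LSI/Avago chip behind the H330
lspci | grep -iE "lsi|avago|megaraid|sas"
# with IT firmware every disk appears as a plain /dev/sdX device
lsblk -o NAME,SIZE,MODEL
# then create the OSDs on the raw disks (Proxmox VE 5.x syntax)
pveceph createosd /dev/sdb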
 
