Yes, I have.

That card should be fine. Are you certain that you've selected the card properly? And that the boot devices are definitely not attached to that device?
Try booting the system from a live Debian or Ubuntu USB drive.
I looked at those 16e cards, but thought these 8e cards would be enough. What are your system specs? This Dell R720XD does my plotting when I need it & is the main farmer. I'm thinking of building a dedicated plotter that would run Ubuntu Server. I'm going to get another identical R720XD & make that a harvester for the current farmer. I could try this controller card in the dedicated plotter & try another one for this farmer/harvester. I wonder if the 16e would not have these issues for me...

The 9207-8E is only a single controller card, with two ports exposing 4 lanes of SAS on each connector. The 9206-16e (which I have in my system at the moment) is basically two 9207-8E cards on the same board behind a PCIe switch.
That's a good setup, it sounds fast...

My point was, you only have to pass through one device. If you're passing through two, then you are probably also passing through the controller your boot drives are connected to.
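To make that concrete, here's a rough sketch of how you might check which controller the boot disk actually hangs off before passing anything through. The PCI address, disk name and VM ID below are just placeholders, not values from your system:

    # List SAS/RAID controllers and note their PCI addresses
    lspci | grep -i sas

    # See which PCI device the boot disk (assumed here to be sda) sits behind;
    # the printed sysfs path contains the controller's PCI address
    readlink -f /sys/block/sda/device

    # Pass through only the HBA, not the controller hosting the boot drives
    # (VM ID 100 and address 0000:03:00.0 are placeholders)
    qm set 100 -hostpci0 0000:03:00.0

If the address from the second command matches the card you were about to pass through, that controller also owns your boot drives.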
My system is a Supermicro X10DRI-F board, with:
2x E5-2680 v4 CPUs
128GB of RAM
Mellanox CX3 dual-port card
the aforementioned 9206-16e card
RX 5700 XT as the GPU for my workstation VM
NEC/Renesas USB card for the workstation VM
quad NVMe bifurcation card with 4x Micron 2300 512GB drives for an all-NVMe raidz2 zpool
2x 320GB drives in a ZFS mirror for boot
2x 480GB Micron 5100 MAX for Ceph DB devices

Attached to the 9206-16e, split evenly across two Supermicro expander backplanes:
6x 500GB SATA drives for Ceph OSDs
8x 8TB 7200rpm SAS drives for a large raidz2 zpool
It was that card... Something was not right with it. When I took it out, my system started working again. I wonder if my problem is that this card is in IT mode:
LSI00300 IT Mode LSI 9207-8E 6Gb/s External PCI-E 3.0x8 Host Controller Card
I've started getting controller cards in IT mode from "Art of Server" on eBay. He's got a YouTube channel too. He flashes cards to IT mode & tests them in systems before sending them out. I've gotten two non-working cards from other sellers; the one from him works perfectly in the 1st R720XD I set up for Chia farming with Proxmox. I am building a 2nd R720XD now & getting another Dell H710 card in IT mode from him. He might have something that could help you. He's been good about getting back to me when I ask questions.

Just pinging this thread to ask for a status update (approximately two weeks later, I think?).
I am asking because I'm trying to sort out a fresh install of the latest Proxmox on a Dell R815 with a PERC H700, which also uses the mpt3sas driver, and I cannot get Proxmox to work with the device. Even after adding the kernel boot flag "mpt3sas.max_queue_depth=8000" at the start of the Proxmox installer, there is still some problem. When I boot the same hardware with an ancient SysRescue boot media image (i.e., an approximately 10-year-old USB key), the system boots perfectly, the RAID card initializes fine with the old driver, and I can see the mirror/RAID1 volume from Linux just fine. So, rather frustrating. Hoping to chase down how to make this thing work.
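For what it's worth, once you do get an install on disk, the usual Debian-style ways to make that module parameter stick are a modprobe.d options file or the kernel command line in GRUB. A rough sketch, using the same 8000 value you already tried:

    # Option 1: set the parameter via modprobe.d, then rebuild the initramfs
    echo "options mpt3sas max_queue_depth=8000" > /etc/modprobe.d/mpt3sas.conf
    update-initramfs -u

    # Option 2: append it to the kernel command line in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet mpt3sas.max_queue_depth=8000"
    # then regenerate the GRUB config
    update-grub

Neither helps with the installer itself, of course; that still needs the flag typed at the boot prompt.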
Put the card in a different PCIe slot. I have seen this in another HPE server where "PCI-E SLOT-1" could not be used for a RAID/HBA card (the server stops at BIOS POST with an error message).
Hi, thanks for the added thread content.
I just double-checked to confirm:
Dell Perc H700 is an LSI 2108 chipset,
as per the Dell tech ref > https://i.dell.com/sites/csdocument...ets_Documents/en/perc-technical-guidebook.pdf
and
Dell Perc H200 is based on the LSISAS2008/62114B1, I think. Dell's tech docs don't come out and say this as clearly, but a few online hits suggest this is the chipset. Dell claims it is hardware RAID. I do know this one has much weaker performance than the Perc H700 (the H700 supports RAID5, not just mirrors, and has on-card RAM cache plus a BBU module connector; pretty sure the H200 has no RAM, at least not obviously, and if there is cache RAM it is soldered on board 'in disguise'). Definitely the H200 is the 'cheaper junior RAID option' of these two cards.
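If you can get any Linux environment booted on the box, a quick sanity check (just a sketch; the exact output will differ per system) is to ask lspci which chip and driver are actually in play:

    # List storage controllers with vendor/device IDs and the kernel driver bound to them;
    # the "Kernel driver in use:" line will show e.g. megaraid_sas or mpt3sas
    lspci -nnk | grep -i -A 3 raid

That takes the guesswork out of which chipset a given PERC really presents.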
Lots of fun when old chipsets/parts get de-supported. Arguably it is part of the deal with 'everything'.
Tim
There is no point to the "cache and BBU" solution, and there never was. Every vendor has silently dropped it, or is dropping it, having realised it is a SPOF (single point of failure) in the storage system. (Example: every HDD/SSD already has 256/512/1024MB of internal onboard cache.)
Hardware RAID is basically dead on Linux in a world where ZFS and mdadm exist; they are many times more reliable and faster than hardware RAID, which is why almost nobody running Proxmox is going to be using a RAID card with a battery-backed cache.

You forget that most people here use consumer SSDs without PLP that can't cache sync writes without a HW RAID card with cache + BBU... but OK, if you're not willing to spend the additional money on a proper SSD with PLP, you probably also won't buy a HW RAID card with cache and BBU.
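As an aside, if anyone wants to see for themselves what sync writes cost on a device without PLP, a rough fio run like this (file name and sizes are arbitrary placeholders) forces an fsync after every write, so the drive can't hide behind its volatile cache:

    # 4k sequential writes with an fsync after every write
    fio --name=syncwrite --filename=/tmp/fio-test.bin --rw=write --bs=4k \
        --size=1G --ioengine=psync --fsync=1

Comparing that against the same run without --fsync=1 usually makes the PLP argument obvious.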
Yes, I'm not a fan of hardware RAID at all anymore since ZFS, but did you know about this one?
https://bugzilla.kernel.org/show_bug.cgi?id=99171
AFAIK, the Proxmox folks don't recommend mdraid because of this and because of complicated troubleshooting.
For a long time I thought that mdraid was mature and proven and works without problems, but with this ticket still open, and with the pointers from Proxmox support that it may also be hard to troubleshoot or fix in problematic situations (which is the reason they will not add mdraid to their installer), my trust in mdraid has suffered.