Yeah, clicking Apply in the HA SDN tab just says there are errors on the host in question, but absolutely nothing appears in /var/log/syslog or in dmesg -w. Additionally, the ip link show output for the two SDN devices shows no differences.
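In case it helps anyone reproduce this, these are roughly the checks I've been running. This is just a sketch that assumes the nodes use ifupdown2 (which Proxmox SDN requires); the exact units and device names will differ per setup:

```shell
# Re-run the network/SDN reload by hand so any errors print to the
# console instead of being swallowed by the GUI; -a reloads all
# interfaces, -d enables debug output.
ifreload -a -d

# Follow the Proxmox daemon logs while clicking Apply in the GUI.
journalctl -f -u pvedaemon -u pveproxy

# Compare SDN device details between a working and a failing node;
# -d prints bridge/VLAN/VXLAN attributes that plain "ip link show" omits.
ip -d link show
```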
I've got 3 nodes in a cluster. As far as I'm aware, all three of them have all the needed packages installed. I wasn't able to see anything in the logs when I'd expect to, and now they're so noisy I have no idea where to look. I can't just reboot this host; it has the PCIe passthrough devices and...
So I'm still very much in the lab stage and pre-production here. I'm still growing storage, and I have a maintenance window every month where I have to shut down everything except the two VMs that provide the storage (the SAN and NAS nodes). Obviously, when the SAN goes down for updates...
I found that issue from way back in 2016; I'd be very surprised if it isn't already patched in TrueNAS Core, pfSense, and upstream FreeBSD, especially since the pfSense download I have is from 2022-02-14.
I unchecked PCI-Express, and I even tried changing the machine type back to i440fx, but I get the exact same error message. No change, no success.
So I'm trying to evaluate SR-IOV on a test server (Dell R620, dual E5-2670 v2, with SR-IOV, VT-d, etc. all enabled). Everything works on a Linux guest, but I'm getting the following error whenever I attempt to boot a BSD appliance guest.
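For reference, the host-side sanity checks I ran before blaming the guest looked roughly like this; the PF name eno1 and the VF count are placeholders for the actual X520 port and config:

```shell
# How many VFs the PF supports, and how many are currently enabled.
cat /sys/class/net/eno1/device/sriov_totalvfs
cat /sys/class/net/eno1/device/sriov_numvfs

# Enable 4 VFs; the kernel requires writing 0 first if a different
# nonzero count is already active.
echo 0 > /sys/class/net/eno1/device/sriov_numvfs
echo 4 > /sys/class/net/eno1/device/sriov_numvfs

# The VFs should now appear as their own PCI functions.
lspci -nn | grep -i "virtual function"
```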
The NIC is an Intel X520-based Dell rNDC, and AFAIK is...
This happens with the installer ISO as downloaded on Friday 2022-05-13. I don't have version numbers handy. Install -> Update sounds like a viable workaround to me. Thank you m.frank.
I'm trying to evaluate some Solaris-based systems (OmniOS and NexentaStor) that are built on current Solaris kernels. The long-term goal is to pass through an HBA to these systems in PCIe mode, so I want the Q35 machine type. However, when configuring these VMs with a Q35 machine type, both...
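For completeness, here's a sketch of how I'm configuring the test VM with Proxmox's qm tool; the VMID 100 and the PCI address are placeholders for the actual VM and HBA:

```shell
# Switch the VM to the Q35 machine type (needed for PCIe passthrough).
qm set 100 --machine q35

# Pass the HBA through as a PCIe device; pcie=1 only works on Q35.
qm set 100 --hostpci0 0000:03:00.0,pcie=1
```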
This is not related to the SAS controller. This is a bug in the iSCSI target and/or QEMU's iSCSI initiator implementation. I've opened a bug with Proxmox to resolve this. https://bugzilla.proxmox.com/show_bug.cgi?id=4046
Dude, Fibre Channel is awesome! I wish they would extend ZFS over iSCSI to ZFS over FC with NPIV portability. I don't care whether the NPIV port is attached to the VM SR-IOV style or to the host via QEMU. Either way, it would truly be awesome and would enable truly enterprise-level SAN support in...
Went to try it out today. Project looks dead. D. E. D. Dead. The repo is down, and the project owner isn't responding to tickets or inquiries as to when it'll be back.
Meaning no offense, but replacing a common general-purpose server OS with an obscure general-purpose OS, while probably very stable and suitable at the enterprise level, is hardly a step up for the SMB/SOHO environment that's looking for a set-it-and-forget-it appliance. It's the sort of thing...