Yeah, clicking Apply in the HA SDN tab just says there are errors on the host in question, but absolutely nothing appears in /var/log/syslog or dmesg -w. Additionally, the output of ip link show for the two SDN devices shows no differences.
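For reference, here's roughly what I've been poking at by hand to compare a working node against the broken one. I'm assuming the SDN config gets generated into /etc/network/interfaces.d/sdn and applied via ifupdown2, and the pvesh paths are just what I believe the API exposes, so treat this as a sketch:

    # what the Apply step should have written out on this node
    cat /etc/network/interfaces.d/sdn

    # re-apply the generated config with debug output to see where it chokes
    ifreload -a -d

    # ask the API what it thinks the SDN zones/vnets look like
    pvesh get /cluster/sdn/zones
    pvesh get /cluster/sdn/vnets

    # low-level link details for the SDN devices
    ip -d link show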
I've got 3 nodes in a cluster. As far as I'm aware, all three of them have all the needed packages installed. I wasn't able to look at the logs when I'd have expected to see anything, and now they're so noisy I have no idea where to look. I can't just reboot this host; it has the PCIe passthrough devices and...
So I'm still very much in the lab stage and pre-production here. I'm still growing storage, and I have a maintenance window every month where I have to shut down everything except the two VMs that provide the storage (the SAN and NAS nodes). Obviously, when the SAN goes down for updates...
I found that from way back in 2016; I'd be very surprised if it isn't already patched in TrueNAS Core, pfSense, and upstream FreeBSD, especially since the pfSense download I have is from 2022-02-14.
I unchecked PCI-Express, and I even tried changing the machine type back to i440fx, and I get the exact same error message. No change, no success.
So I'm trying to evaluate SR-IOV on a test server (Dell R620, dual E5-2670 v2, SR-IOV, VT-d, etc. all enabled) and everything works on a Linux guest, but I'm getting the following error whenever I attempt to boot up a BSD appliance guest.
The NIC is an Intel X520-based Dell rNDC, and AFAIK is...
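For context, this is roughly how I'm creating the VFs on the host and handing one to the guest. The interface name, VF count, PCI address, and VMID are placeholders from my lab, so adjust to taste:

    # create 4 VFs on the X520 port (eno1 is a placeholder for my 10G interface)
    echo 4 > /sys/class/net/eno1/device/sriov_numvfs

    # confirm the VFs showed up
    lspci | grep -i "virtual function"

    # pass one VF to the guest (0000:41:10.0 is just an example address)
    qm set 101 --hostpci0 0000:41:10.0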
This happens with the installer ISO as downloaded on Friday, 2022-05-13. I don't have version numbers handy. Install -> Update sounds like a viable workaround to me. Thank you, m.frank.
I'm trying to evaluate some Solaris-based systems (OmniOS and NexentaStor) that are based on current Solaris kernels. The long-term goal is to pass through an HBA to these systems in PCIe mode, so I would want the q35 machine type. However, when configuring these VMs with a q35 machine type, both...
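For the record, the relevant bits of the VM config look roughly like this (the VMID and PCI address are placeholders). My understanding is that the pcie=1 flag only works with the q35 machine type, which is why I can't just fall back to i440fx:

    # pass the HBA through as a PCIe device, which (as far as I know) requires q35
    qm set 105 --machine q35
    qm set 105 --hostpci0 0000:03:00.0,pcie=1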
This is not related to the SAS controller. This is a bug in the iSCSI target and/or QEMU's iSCSI initiator implementation. I've opened a bug with Proxmox to resolve this. https://bugzilla.proxmox.com/show_bug.cgi?id=4046
Dude, Fibre Channel is awesome! I wish they would extend ZFS over iSCSI to be ZFS over FC with NPIV portability. I don't care if the NPIV port is attached to the VM SR-IOV style or to the host QEMU. Either way, it would be awesome and would enable truly enterprise-level SAN support in...
Went to try it out today. Project looks dead. D. E. D. Dead. Repo is down, and the project owner isn't responding to tickets or inquiries as to when it'll be back.
Meaning no offense, but replacing a common general-purpose server OS with an obscure general-purpose OS, while probably very stable and suitable at the enterprise level, is hardly a step up for the SMB/SOHO environment that's looking for a set-it-and-forget-it appliance. It's the sort of thing...
I didn't follow any documentation or guide. I read the included manual to make sure I was pasting the correct values in the correct boxes, but that's it. Targetcli is pretty basic, really, once you've used it for a few years.
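For anyone curious, the gist of what I set up is below. The zvol path, IQNs, and names are examples from my lab, not something to copy verbatim:

    # block backstore on top of an existing zvol
    targetcli /backstores/block create name=vm-100-disk-0 dev=/dev/zvol/tank/vm-100-disk-0

    # iSCSI target, LUN, and an ACL for the Proxmox node's initiator
    targetcli /iscsi create iqn.2003-01.org.linux-iscsi.san.x8664:sn.deadbeef0001
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.san.x8664:sn.deadbeef0001/tpg1/luns create /backstores/block/vm-100-disk-0
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.san.x8664:sn.deadbeef0001/tpg1/acls create iqn.1993-08.org.debian:01:pvenode1

    # persist the config across reboots
    targetcli saveconfig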
So I'm looking into using ZFS over iSCSI, and I'm trying to find a good target. I've tried both TrueNAS Core and TrueNAS Scale, but both fail on a missing CLI utility. Since they're appliances, I don't trust them to continue functioning through an update if I forcibly install those CLI utilities...
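For reference, the Proxmox side of what I'm trying to end up with is a storage.cfg entry along these lines (the names, IP, and IQN are placeholders, and I'm going from memory on the exact keys, so double-check against the storage docs):

    zfs: san-zfs
            iscsiprovider LIO
            portal 192.168.1.10
            target iqn.2003-01.org.linux-iscsi.san.x8664:sn.deadbeef0001
            pool tank
            lio_tpg tpg1
            blocksize 4k
            sparse 1
            content images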
So some of my VMs take longer to start than others, typically if I either pass through a PCIe device or allocate a lot of memory, but most frequently when I do both. Example output is below, but the key error is: failed: got timeout.
If I run the command manually, it always works and loads...
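The workaround I've been leaning on is just giving the start task a longer timeout; the VMID and the 600-second value are examples, and qm showcmd is also how I grab the command line to run it by hand:

    # print the full kvm command line that the start task would run
    qm showcmd 110 --pretty

    # start the VM with an explicitly longer timeout than the default
    qm start 110 --timeout 600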
So I'm pretty sure I know the answer. I'm building a very small cluster for now. Two nodes. One hosts the NAS and SAN and a few other 'infrastructure-y' things. The other hosts most of the VMs that I want to boot *from* the SAN. I can manually migrate them from the service node to the infra...
Is it possible at this time to configure a Proxmox cluster such that a host machine can boot over the SAN (iSCSI or NFS) using a genericized, read-only image, then, once it boots, it gets its identity information via DHCP or TFTP or similar to learn its hostname and any other essentials, then it...