Great! We are using the backup tool (vzdump files) but have only tried to restore to the same/existing VM. Good to know it has what is needed within. I'll try restoring one of these to a different PVE server to see what it looks like. Thanks!
Assuming these hypothetical circumstances:
- Standalone PVE server (no HA/cluster)
- rpool crashed, server won't boot, PVE can't launch.
- Must build new standalone PVE from scratch
- Have recent full backups of each VM hosted on the failed PVE
I'm assuming one will be missing (and required)...
So for anyone who finds this later on...
CHELSIO T420-CR (and I assume entire T4, T5, T6 series associated w/this driver set) seems to work OOB with PVE 6.x
Once I assigned the NIC to a new vmbr# in Proxmox and rebooted the system, I was able to get link.
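For anyone wanting the specifics, the bridge setup is just the standard PVE Linux-bridge config; a minimal sketch of /etc/network/interfaces, assuming the Chelsio port shows up as enp21s0f4 (the interface name here is hypothetical, check `ip link` on your own box):

```text
auto enp21s0f4
iface enp21s0f4 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp21s0f4
        bridge-stp off
        bridge-fd 0
```

A reboot (or `ifreload -a` on newer PVE with ifupdown2) applies it.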
OK, looking in dmesg I found this:
[ 3.789865] cxgb4 0000:15:00.4: Direct firmware load for cxgb4/t4-config.txt failed with error -2
[ 4.541838] cxgb4 0000:15:00.4: Successfully configured using Firmware Configuration File "Firmware Default", version 0x0, computed checksum 0x0
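For what it's worth, error -2 is -ENOENT: the driver looked for an optional tuning file, didn't find it, and fell back to the firmware's built-in defaults, which is why the second line still reports success with "Firmware Default". A sketch of silencing the message, assuming you have a t4-config.txt from Chelsio's driver/Unified Wire package (the source path is hypothetical):

```shell
# Put the optional config file where cxgb4 looks for it
mkdir -p /lib/firmware/cxgb4
cp ./t4-config.txt /lib/firmware/cxgb4/t4-config.txt   # hypothetical source location
# If cxgb4 is loaded from the initramfs, rebuild it so the file is found at boot
update-initramfs -u
```

The card works without it, so this is cosmetic unless you need the non-default tuning.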
I'm using a direct fiber optic cable between PVE and switch port, but not getting link:
TX -> RX
RX -> TX
I've tried swapping the cable TX/RX around just in case, but no change.
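Some link-state checks worth running before blaming the optics (standard iproute2/ethtool commands; the interface name is hypothetical):

```shell
ip -br link show enp21s0f4    # does the kernel see the port at all?
ethtool enp21s0f4             # "Link detected:"? supported/advertised speeds?
ethtool -m enp21s0f4          # dump the SFP+ module EEPROM, if it exposes one
dmesg | grep -i cxgb4         # any driver/firmware complaints
```

If `ethtool -m` errors out, the NIC may not be talking to the module at all, which points at module compatibility rather than the fiber.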
Reminds me of the "allow_unsupported_sfp" issue with Intel 10GbE and "unsupported" SFP+ modules, but I am not...
**WARNING** to future low-budget 10GbE NIC purchasers... the Chelsio T420-CR does not work OOB here with PVE 6.x. If you're finding these old threads, be sure to research well before you buy.
This was incorrect. The Chelsio T420-CR (and likely T5xx and T6xx from the same driver series) did work...
Any ideas here?
We bought a few of these cards because they were low cost and could be used OOB with FreeBSD, but apparently with Proxmox they may not be OOB-ready.
This Chelsio CXGBE seems like it should be supported:
I am seeing this message at startup:
This would lead you to believe it is updating the firmware on the card, right?
Unfortunately no other messages for cxgb4 and I do not see them as interfaces in the system.
Based on your reply, I dug around a little more.
I don't think shareiscsi is built into the ZFS used by Proxmox, unfortunately. Properties that let you set 'sharenfs' and 'sharesmb' do exist, though.
I've checked in both PVE 5.4-3 and 6.x.
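For anyone else landing here, the ZFS-native share properties work like this (a sketch; `tank/share` is a hypothetical dataset, `sharenfs` needs an NFS server installed on the host, and `sharesmb` needs a working Samba setup):

```shell
# NFS export via the ZFS property
zfs set sharenfs=on tank/share
# or with export options instead of plain "on":
zfs set sharenfs='rw=@10.0.0.0/24' tank/share

# SMB share via the ZFS property
zfs set sharesmb=on tank/share

# Verify what's set
zfs get sharenfs,sharesmb tank/share
```

There is no equivalent `shareiscsi` in ZFS on Linux; iSCSI would have to go through a separate target daemon (e.g. LIO/targetcli) pointed at a zvol.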
I'll try to work through the German doc...
I have an environment like this...
It hosts several Linux VMs. These VMs have their storage on a mirrored pair of SSDs. There is also a DAS shelf connected to this server (PVE-Standalone1) that I would like to set up as ZFS and share the pools or zvols with...
It seems like the options in Datacenter > Storage are for adding storage FROM other locations to Proxmox (for Proxmox to use).
But I have a big array of disks (ZFS!) and I would like to share some of it with a Windows server that is on the network (10Gbps links).
Is it somehow possible to...
When browsing the fedora virtio iso disc, one guide said to dig through \NETKVM\... on the disc.
According to PVE docs:
First time installing a Windows guest in PVE. Found a couple of different guides that point to using the Fedora VirtIO drivers during custom installation.
- PVE 6.0-5
- Windows Server 2k19 (using the Win10/2016 mode in PVE)
- SCSI controller =...
We have a couple of older Windows 7 Pro based application servers that cannot be virtualized, but they rely on a considerable amount of storage.
We've started using PVE for a few other things already and have started to get used to using ZFS within the PVE environment. In the interest of...