Several things you can try:
1. Update the firmware and/or BIOS; Dell lists the X710 firmware at 22.5.7 and the BIOS at 2.21.2
2. Turn off SR-IOV and/or PXE everywhere in the BIOS, and confirm the NIC is not disabled in the BIOS
3. Boot up SystemRescueCD and confirm /var/log/messages and/or 'dmesg' sees the NIC...
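For step 3, a rough first check from the live environment might look like this (the X710 uses Intel's i40e driver; this is a generic sketch, adjust as needed):
# Is the card visible on the PCIe bus, did the driver bind, and do the ports exist?
lspci | grep -i ethernet
dmesg | grep -iE 'i40e|x710'
ip link show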
Just updated to the latest firmware on 13th-gen Dells and did a factory reset.
This is going to be a 5-node Ceph cluster which was previously running VMware.
I've noticed that SR-IOV is enabled in the BIOS.
I do know that under ESXi, it's highly recommended to turn off SR-IOV.
Should it also...
I use this guide https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/
You need both the .vmdk and -flat.vmdk file.
The actual command is "qm importdisk <id> <file>.vmdk zfs -format raw". It won't work if you point it at the -flat.vmdk file.
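A minimal sketch of the import step, assuming VM ID 100, a ZFS storage named 'zfs', and a descriptor file copied over as disk.vmdk (all placeholder names, not from the original post):
# Copy both the .vmdk descriptor and the -flat.vmdk data file to the PVE host,
# then import using the descriptor:
qm importdisk 100 /mnt/migration/disk.vmdk zfs -format raw
# The imported volume shows up as an unused disk on VM 100; attach it via the
# Hardware tab (or 'qm set') before booting.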
I was able to stand up a PBS 3 instance, and it was able to back up PVE 7 hosts.
Same result using an existing PBS 2 instance to back up PVE 8 hosts.
So, no issues using PBS 2/3 with PVE 7/8.
I'm guessing the Samsung PM897 is an enterprise SSD with PLP (power-loss protection); consumer SSDs without PLP will burn out.
May want to look into a full-mesh broadcast setup for Ceph.
You'll be better off flashing the 12th-gen Dell PERCs to IT-mode via https://fohdeesha.com/docs/perc.html
I already have a 5-node R720 Ceph cluster running without issues.
I use two small drives for mirroring Proxmox using ZFS RAID-1.
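Regarding the full-mesh broadcast suggestion above, a minimal sketch of one node's /etc/network/interfaces fragment, assuming a three-node cluster with two spare ports per node cabled directly to the other two nodes (interface names and addresses are made up):
# Broadcast bond over the two direct links; give each node a unique address
auto bond1
iface bond1 inet static
        address 10.15.15.1/24
        bond-slaves ens1f0 ens1f1
        bond-mode broadcast
        bond-miimon 100
# Then point the Ceph cluster network at 10.15.15.0/24 so OSD traffic stays on the mesh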
I asked the same question at https://forum.proxmox.com/threads/pbs-2-x-vm-restore-to-pve-8.142262/#post-638181
So, it should work.
As an extra precaution, I plan on backing up the VMs to external HD.
Currently have a PBS 2.x bare-metal server backing up VMs on PVE 7 servers.
The plan is to clean install the PVE 7 servers to PVE 8.
Can PVE 8 read PBS 2.x VM backups?
If that is the case, then I can clean install PBS 3.x on the existing PBS 2.x bare-metal server.
For new, I like Supermicro with their embedded (Atom/Xeon-D) motherboards. You can optionally get 10GbE and a SAS controller. ASRock Rack also offers embedded-CPU motherboards.
For used, you can't beat 13th-gen Dells. For an even cheaper option (not recommended), 12th-gen Dells. They come in either 1U or...
Per https://pve.proxmox.com/pve-docs/images/screenshot/gui-create-vm-memory.png, I just enter the Memory the VM will use (using powers of 2, i.e., 2048, 4096, 8192, etc.). I don't set a Minimum Memory limit. By default, the Ballooning Device is enabled. All the Linux VMs do have a swap partition of...
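For reference, the rough CLI equivalent (VM ID and size are placeholders; leaving --balloon unset keeps the ballooning device at its default):
# Give the VM 8 GiB, no separate minimum, default ballooning device
qm set 100 --memory 8192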
I'm in the process of migrating production Linux VMs (I don't run any Windows VMs) from ESXi to Proxmox. The Linux VMs were always running the latest version of open-vm-tools.
I never turned off ballooning under ESXi because I never had issues. Now the Linux VMs are running under Proxmox with...
I use the following in production. YMMV. Then, in the VM's Network VLAN Tag field, put in either 20, 30, or 40, and use Bridge 'vmbr1'.
# Configure Dell rNDC X540/I350 4P NIC card with 10GbE active and 1GbE as backup
# VLAN 10 = Management network traffic
# VLAN 20 30 40 = VM network...
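The full file isn't shown above, but a rough sketch of the shape it could take (interface names, bond mode, and VLAN handling are my assumptions, not the original config):
# 10GbE port (active)
auto eno1
iface eno1 inet manual

# 1GbE port (backup)
auto eno3
iface eno3 inet manual

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno3
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 20 30 40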
I never had good luck with mixed-mode on disk controllers. Either choose HW RAID or IT/HBA-mode.
I just stick with what Proxmox officially supports. If using HW RAID, then it's either EXT4 or XFS (I use XFS since it's a native 64-bit filesystem).
If using IT/HBA-mode, I use 2 x small drives...
This may help https://kylegrablander.com/2019/04/26/kvm-qemu-cache-performance/
I use 'writeback' for the VM cache. With 'writeback', server RAM acts like a big disk cache for the VM.
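Setting that per disk from the CLI could look like this (VM ID, bus/slot, and volume name are placeholders):
# Re-specify the disk with cache=writeback
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback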
With the influx of organizations migrating from VMware to Proxmox, it's worth noting that ESXi already offers this option during installation. Offering it would be one less friction point.
Search online for P2V (physical-to-virtual) converters. The most common ones convert to VMware's vmdk format, but 'qemu-img convert' can convert vmdk to qcow2 or raw.
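For example, once the VMware disk files are copied over (file names are placeholders; point qemu-img at the descriptor .vmdk and it will pull in the -flat data file):
# vmdk -> raw (for ZFS/LVM storage) or qcow2 (for file-based storage)
qemu-img convert -f vmdk -O raw server1.vmdk server1.raw
qemu-img convert -f vmdk -O qcow2 server1.vmdk server1.qcow2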
If going to use either ZFS or Ceph, you need an IT-mode disk controller. 12th-gen Dell PERC controllers can be flashed to IT-mode with this guide at https://fohdeesha.com/docs/perc.html
I've converted a fleet of 12th-gen Dells to IT-mode and all running either Ceph or ZFS.
Dell BOSS-S1 cards are technically not supported on 13th-gen Dells, but they do work. This server was previously an ESXi backup host, and ESXi recognized the card during install. It also shows up as an install target during the Proxmox installation.
The server it is installed in is a 2U LFF...
Since Dell BOSS cards are used to mirror the OS, I can confirm that you can install Proxmox on a BOSS-S1 in a 13th-gen Dell. I used the CLI to configure the mirror. It shows up as an install target during the Proxmox install. I used XFS as the file system.
Before I did that, I...