Search results

  1. J

    Proxmox on Dell PowerEdge T140

    Yes, that will work. I don't deal with tower servers, but on rack-mounted 4-drive servers I use RAID-10 for both Proxmox and the VMs. Not considered best practice, but I wanted the IOPS. The VMs are backed up to a separate bare-metal Proxmox Backup Server.
  2. J

    Proxmox on Dell PowerEdge T140

    May want to configure that S140 for AHCI mode. Proxmox can use SATA/SAS ports just fine.
  3. J

    Enterprise Hypervisors Architecture

    Seeing that you have non-identical hardware, using the HPE as a SAN should work. May want to use ESOS [Enterprise Storage OS] (esos-project.com) for the SAN software. Then set up the IBMs as a PVE cluster. If the hardware was identical, I would recommend Ceph. You really, really want identical...
  4. J

    [SOLVED] Migrate PVE to a new drive

    It should. Linux supports at least READ access to lots of filesystems.
  5. J

    [SOLVED] Debian 11 not booting with "VirtIO SCSI Single" but works with "VMware PVSCSI"

    From vCenter, you edit the VM settings and remove networking. Or use the ESXi host UI and do it from there.
  6. J

    [SOLVED] Debian 11 not booting with "VirtIO SCSI Single" but works with "VMware PVSCSI"

    Just migrated a half-dozen RHEL-clone Linux VMs from ESXi 7.x to Proxmox 8.1.x. The steps are: 1) Remove open-vm-tools from the ESXi VM 2) Install qemu-guest-agent on the ESXi VM 3) Remove ESXi networking from the ESXi Linux VM 4) Remove the ESXi Linux VM's networking config file 5) Run as root 'dracut -fv -N...
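
    A rough shell sketch of the guest-side steps above; the package manager, config path, and interface name are assumptions for a RHEL-family guest, so adjust them to your distro:

    ```
    # Run as root inside the Linux VM while it is still on ESXi.
    dnf remove -y open-vm-tools          # 1) drop the VMware guest tools
    dnf install -y qemu-guest-agent      # 2) add the QEMU guest agent for Proxmox
    # 3) the ESXi virtual NIC itself is removed in vCenter / the ESXi host UI, not in the guest
    rm -f /etc/sysconfig/network-scripts/ifcfg-ens192   # 4) stale ESXi NIC config (example name)
    dracut -fv -N                        # 5) rebuild the initramfs without host-only drivers
    ```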
  7. J

    Intel X710-DA4 setup help

    May have to do a full system reset. Go into the Lifecycle Controller via F10 and go to Hardware Configuration -> Re-purpose or Retire. This may "unstick" the rNDC. After this, boot with Arch Linux and see what /var/log/messages and dmesg say. Also check to see what the iDRAC web interface...
  8. J

    Intel X710-DA4 setup help

    Several things you can try: 1. Update the firmware and/or BIOS; Dell lists the firmware for the X710 at 22.5.7 and the BIOS at 2.21.2. 2. Turn off SR-IOV and/or PXE everywhere in the BIOS, and confirm the NIC is not disabled in the BIOS. 3. Boot up SystemRescueCD and confirm /var/log/messages and/or 'dmesg' sees the NIC...
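
    For step 3, a couple of example checks from a SystemRescueCD shell (the grep patterns are only illustrations; i40e is the usual kernel driver for the X710 family):

    ```
    # Does the kernel see the card on the PCI bus at all?
    lspci | grep -i ethernet
    # Did the i40e driver bind to it, or log an error?
    dmesg | grep -iE 'i40e|x710'
    ```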
  9. J

    SR-IOV enabled for live migration good idea?

    Just updated to the latest firmware on 13th-gen Dells and did a factory reset. This is going to be a 5-node Ceph cluster which was previously running VMware. I've noticed that SR-IOV is enabled in the BIOS. I do know that under ESXi, it's highly recommended to turn off SR-IOV. Should it also...
  10. J

    VMware .vmdk to Proxmox VE 8 VMs

    I use this guide: https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/ You need both the .vmdk and the -flat.vmdk files. The actual command is "qm importdisk <id> .vmdk zfs -format raw". It won't work if you point it at the -flat.vmdk file.
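
    As a hedged illustration, assuming VM ID 100, a descriptor file named disk1.vmdk (with its matching disk1-flat.vmdk alongside it), and a storage named 'zfs' as in the command above:

    ```
    # Copy BOTH files from the ESXi datastore, then point the import at the small
    # descriptor .vmdk; the -flat.vmdk data file is read through it automatically.
    qm importdisk 100 disk1.vmdk zfs --format raw
    ```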
  11. J

    Backup Server compatibility between versions.

    Was able to stand up a PBS 3 instance and it was able to back up PVE 7 hosts. It was the same using an existing PBS 2 instance to back up PVE 8 hosts. So, no issues using PBS 2/3 with PVE 7/8.
  12. J

    Backup Server compatibility between versions.

    Please post to confirm that PBS 3 can back up PVE 7 hosts. If that is the case, I'll go that route.
  13. J

    hardware renewal for three node PVE/Ceph cluster

    I'm guessing the Samsung PM897 is an enterprise SSD with PLP (power-loss protection); otherwise, consumer SSDs will burn out. May want to look into a full-mesh broadcast setup for Ceph.
  14. J

    Install Ceph on dell PowerEdge 720 with perc

    You'll be better off flashing the 12th-gen Dell PERCs to IT mode (see https://fohdeesha.com/docs/perc.html). I already have a 5-node R720 Ceph cluster running without issues. I use two small drives for mirroring Proxmox using ZFS RAID-1.
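
    A hedged sanity check of that end state (not from the original post): after the IT-mode flash the controller should present as a plain LSI/Broadcom SAS HBA, and the Proxmox installer's ZFS RAID-1 option leaves a mirrored rpool you can verify.

    ```
    # HBA should now show up as an LSI/Broadcom SAS controller, not a PERC RAID card
    lspci | grep -iE 'lsi|sas'
    # The two small boot drives should appear as a ZFS mirror
    zpool status rpool
    ```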
  15. J

    Backup Server compatibility between versions.

    I asked the same question at https://forum.proxmox.com/threads/pbs-2-x-vm-restore-to-pve-8.142262/#post-638181 So, it should work. As an extra precaution, I plan on backing up the VMs to an external HD.
  16. J

    PBS 2.x VM restore to PVE 8?

    Currently have a PBS 2.x bare-metal server backing up VMs on PVE 7 servers. The plan is to clean install the PVE 7 servers to PVE 8. Can PVE 8 read PBS 2.x VM backups? If that is the case, then I can clean install PBS 3.x on the existing PBS 2.x bare-metal server.
  17. J

    New Proxmox server rack

    For new, I like Supermicro with their embedded (Atom/Xeon-D) motherboards. You can optionally get 10GbE and a SAS controller. ASRock Rack also has embedded-CPU motherboards. For used, you can't beat 13th-gen Dells. For an even cheaper option (not recommended), 12th-gen Dells. They come in either 1U or...
  18. J

    Use of memory ballooning for production environment

    Per https://pve.proxmox.com/pve-docs/images/screenshot/gui-create-vm-memory.png, I just enter the Memory the VM will use (using a power of 2, i.e. 2048, 4096, 8192, etc.). I don't set a Minimum Memory limit. By default, the Ballooning Device is enabled. All the Linux VMs do have a swap partition of...
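
    A hedged CLI equivalent of that GUI step, assuming VM ID 101 and 8 GiB of RAM:

    ```
    # Power-of-2 memory size; with no separate minimum ('balloon') configured,
    # the ballooning device stays enabled but the VM keeps its full allocation.
    qm set 101 --memory 8192
    ```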
  19. J

    Use of memory ballooning for production environment

    I'm in the process of migrating production Linux VMs (I don't run any Windows VMs) from ESXi to Proxmox. The Linux VMs were always running the latest version of open-vm-tools. I never did turn off ballooning under ESXi because I never had issues. Now the Linux VMs are running under Proxmox with...
  20. J

    [SOLVED] VLAN configuration

    I use the following in production. YMMV. Then, in the VM's Network VLAN Tag field, put in either 20, 30, or 40, using Bridge 'vmbr1'.
    # Configure Dell rNDC X540/I350 4P NIC card with 10GbE active and 1GbE as backup
    # VLAN 10 = Management network traffic
    # VLAN 20 30 40 = VM network...
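
    For illustration only, a minimal sketch of what such a setup could look like in /etc/network/interfaces, assuming an active-backup bond of a 10GbE and a 1GbE port under a VLAN-aware bridge 'vmbr1' carrying VLANs 20/30/40; interface names and details are assumptions, not the poster's truncated config:

    ```
    # /etc/network/interfaces (sketch)
    auto eno1
    iface eno1 inet manual          # 10GbE port (active)

    auto eno2
    iface eno2 inet manual          # 1GbE port (backup)

    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-primary eno1
            bond-miimon 100

    auto vmbr1
    iface vmbr1 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 20 30 40    # tags entered per-VM in the Network tab's VLAN Tag field
    ```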