Search results

  1. J

    New to Proxmox, need a new server

    Proxmox runs on top of Debian, so if it's supported on Debian, it should work on Proxmox. Proxmox will definitely burn out any flash storage (that includes SD cards and consumer SSDs) if it's not enterprise grade. I use 2 x SAS HDDs for the Proxmox OS itself, mirrored with ZFS RAID-1. Then use...
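
    For what it's worth, once the installer has set that up you can sanity-check the mirror like this; 'rpool' is just the installer's default pool name.

      # Confirm the Proxmox OS pool is a healthy two-disk ZFS mirror.
      zpool status rpool
      # Show per-device capacity and usage within the pool.
      zpool list -v rpool
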
  2. J

    1 server, 3 GPU's - is it possible?

    This post is for Intel iGPU. Maybe it can give you hints for AMD https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/
  3. J

    We want to buy new hardware. What's best ?

    Since Proxmox runs on top of Debian and not a proprietary kernel, I'm partial to Supermicro blade servers. Can't get more generic than that. More info at https://www.supermicro.com/en/products/blade You may also want to check out their Twin series of servers, which supports multiple nodes per chassis.
  4. J

    [SOLVED] Imported ESXi CentOS VM does not boot

    For RHEL and derivatives, on the original VM run the following: # dracut --force --verbose --no-hostonly
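
    A slightly expanded sketch of that step, if useful; lsinitrd ships with the dracut package.

      # Rebuild the initramfs with all drivers included, not just the ones
      # the current (ESXi) hardware needs, so it still boots on the KVM controllers.
      dracut --force --verbose --no-hostonly
      # Optional: confirm the virtio modules made it into the new image.
      lsinitrd | grep -i virtio
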
  5. J

    1 server, 3 GPU's - is it possible?

    This may help but then again the GPUs are discrete https://www.youtube.com/watch?v=pIdCV1H1_88
  6. J

    [SOLVED] Kernel panic installing rocky or almalinux

    Yes, it's safe to change it. I believe you have to shut down the VM, change its CPU type, and power it back on.
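
    From the CLI it would be roughly this, with 101 as a placeholder VMID:

      # Shut the guest down, change its virtual CPU type, then boot it again.
      qm shutdown 101
      qm set 101 --cpu host
      qm start 101
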
  7. J

    [SOLVED] Kernel panic installing rocky or almalinux

    If the nodes in your cluster have the same CPU family, live migration should work with the VM CPU type set to 'host'. For example, I can live migrate between an R720 and an R820 because they both have Sandy Bridge CPUs.
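
    For reference, an online migration between such nodes can be started like this; the VMID and target node name are placeholders.

      # Live-migrate VM 101 to the node 'pve-r820' without powering it off.
      qm migrate 101 pve-r820 --online
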
  8. J

    Can I move a CEPH disk between nodes?

    Can you tolerate downtime? It would be best to back up the VMs (preferably with PBS) and the data, then re-install PVE with Ceph Quincy.
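
    If it helps, the backup step could look something like this from the node's shell; 'pbs' is an assumed PBS storage name and the VMID is a placeholder.

      # Back up one VM to the PBS storage using a snapshot-mode backup.
      vzdump 101 --storage pbs --mode snapshot
      # Or back up every guest on the node before the re-install.
      vzdump --all --storage pbs --mode snapshot
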
  9. J

    Proxmox7 supported raid controller

    It should, since the H330 uses an LSI 3008 SAS chip.
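
    One way to confirm which chip the controller actually exposes, if you want to double-check:

      # List storage controllers with vendor/device IDs; look for the LSI/Broadcom SAS3008.
      lspci -nn | grep -i -e sas -e raid
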
  10. J

    KVM to Proxmox convert initramfs failed boot centos7

    For RHEL and derivatives, on the original VM run the following: # dracut --force --verbose --no-hostonly The above was required to migrate from ESXi to KVM. Should work for KVM to KVM.
  11. J

    [SOLVED] Kernel panic installing rocky or almalinux

    The root cause of this issue is that RHEL 9 and its derivatives are compiled for the x86-64-v2 instruction set https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level#background_of_the_x86_64_microarchitecture_levels More...
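
    If you want to check which microarchitecture levels the CPU presented to the guest actually meets, the glibc dynamic loader can report it; this relies on glibc 2.33 or newer, so treat it as one option among several.

      # The dynamic loader lists the x86-64 levels and marks the supported ones.
      /lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[234]'
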
  12. J

    What's Next?

    +1. ESXi already does this, and I think XCP-NG does too.
  13. J

    disk IO slow down proxmox system

    This is what I use to optimize IOPS: set write cache enable (WCE) to 1 on SAS drives; set VM cache to none; set the VM to use the VirtIO SCSI single controller with the IO thread and discard options enabled; set the VM CPU type to 'host'; set VM VirtIO Multiqueue to the number of cores/vCPUs. If using Linux: Set Linux...
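
    Roughly what the drive and VM pieces of that look like from the CLI; the device path, VMID, and storage/volume names are placeholders, and sdparm has to be installed separately.

      # Enable the SAS drive's write cache (WCE = write cache enable).
      sdparm --set WCE=1 /dev/sdX
      # VM side: VirtIO SCSI single controller, IO thread + discard + no cache
      # on the disk, and CPU type 'host'.
      qm set 101 --scsihw virtio-scsi-single
      qm set 101 --scsi0 local-zfs:vm-101-disk-0,iothread=1,discard=on,cache=none
      qm set 101 --cpu host
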
  14. J

    Why are my VMs so slow?

    This is what I use to optimize IOPS: set write cache enable (WCE) to 1 on SAS drives; set VM cache to none; set the VM to use the VirtIO SCSI single controller with the IO thread and discard options enabled; set the VM CPU type to 'host'; set VM VirtIO Multiqueue to the number of cores/vCPUs. If using Linux: Set Linux...
  15. J

    Will a Dell PowerEdge R720 be up to the task?

    Per https://www.dell.com/support/manuals/en-us/poweredge-r720/720720xdom/system-memory?guid=guid-7550b0f0-b658-4f09-a3f8-668d9ced36ae&lang=en-us, LRDIMMs are not supported on R720s with 3.5" drives. I do know it works fine with regular RDIMMs.
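
    If you want to see what modules a given box already has, the DMI tables are one place to look; whether the registered vs. load-reduced distinction shows up in the output depends on the BIOS.

      # List installed memory modules and their reported type details.
      dmidecode --type memory | grep -E 'Size|Type Detail'
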
  16. J

    [SOLVED] 10GbE IOPS different on host vs VM

    Thanks. That fixed the network IOPS issue.
  17. J

    [SOLVED] 10GbE IOPS different on host vs VM

    Yes, I'm using VirtIO for VM networking. What value should I use for the VM's VirtIO Multiqueue? I think I read it should match the number of cores of the CPU?
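
    Rough sketch of setting and then verifying it, if useful; the VMID, bridge, and guest NIC name are placeholders.

      # Host side: give the VM's VirtIO NIC 4 queues to match 4 vCPUs.
      # (Include macaddr=... if you need to keep the existing MAC address.)
      qm set 101 --net0 virtio,bridge=vmbr0,queues=4
      # Guest side (Linux): verify the channel count the NIC negotiated.
      ethtool -l eth0
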
  18. J

    Advice on Ceph for Homelab

    I have a 3-node Ceph cluster running on 13-year-old server hardware. It's using full-mesh broadcast bonded 1GbE networking and working just fine. That's right, the Ceph public, private, and Corosync traffic all run over 1GbE networking just fine. There is even a 45Drives blog on using 1GbE networking in...
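
    For anyone wanting to replicate it, a broadcast-mode bond on one node of the mesh looks roughly like the snippet below in /etc/network/interfaces; the NIC names and the 10.15.15.0/24 subnet are assumptions on my part.

      # One node's mesh bond: both mesh-facing 1GbE ports in broadcast mode,
      # with a single address in the shared Ceph/Corosync subnet.
      auto bond0
      iface bond0 inet static
              address 10.15.15.1/24
              bond-slaves eno1 eno2
              bond-miimon 100
              bond-mode broadcast
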
  19. J

    [SOLVED] 10GbE IOPS different on host vs VM

    I have a standalone Dell R620 ZFS host with 10GbE networking. I get the full network IOPS when doing a wget of a file on the R620. However, when I run the same wget inside a VM, I do NOT get the same network IOPS. Maybe I get about 60-75% network IOPS. Anything on the host and/or VM settings...
  20. J

    10 Gb card : Broadcom 57810 vs Intel X520

    I use the Dell X540/I350 rNDC (rack network daughter card) in a 5-node Ceph cluster without issues. Since the R320 doesn't have an rNDC slot, you need to get a PCIe card instead. I do know that the Mellanox ConnectX-3 is well supported. I stick with Intel NICs, except for the X710. Stay away from those.
