Search results

  1.

    Use of memory ballooning for production environment

    I'm in the process of migrating production Linux VMs (I don't run any Windows VMs) from ESXi to Proxmox. The Linux VMs were always running the latest version of open-vm-tools. I never turned off ballooning under ESXi because I never had issues. Now the Linux VMs are running under Proxmox with...
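
    For context, ballooning in Proxmox is a per-VM setting. A minimal sketch only (the VM ID 100 and the memory sizes are placeholders):
      # give the VM up to 8 GiB, and let the balloon shrink it toward 4 GiB under host memory pressure
      qm set 100 --memory 8192 --balloon 4096
      # or disable the balloon device entirely
      qm set 100 --balloon 0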
  2.

    [SOLVED] VLAN configuration

    I use the following in production. YMMV. Then, in the VM's Network VLAN Tag field, put in 20, 30, or 40 and use Bridge 'vmbr1'.
      # Configure Dell rNDC X540/I350 4P NIC card with 10GbE active and 1GbE as backup
      # VLAN 10 = Management network traffic
      # VLAN 20 30 40 = VM network...
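
    The quoted config is cut off, so as a rough sketch only (not the poster's exact file, and the uplink name 'eno2' is assumed), a VLAN-aware 'vmbr1' that passes the VM tags 20/30/40 in /etc/network/interfaces could look like:
      auto eno2
      iface eno2 inet manual

      auto vmbr1
      iface vmbr1 inet manual
          bridge-ports eno2
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 20 30 40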
  3.

    Installation Proxmox (Dell r750 PowerEdge 24 disks) "Only shows my RAID"

    I never had good luck with mixed-mode on disk controllers. Either choose HW RAID or IT/HBA-mode. I just stick with what Proxmox officially supports. If using HW RAID, then it's either EXT4 or XFS (I use XFS since it's a native 64-bit filesystem). If using IT/HBA-mode, I use 2 x small drives...
  4.

    Question regarding Disk Write Cache

    This may help: https://kylegrablander.com/2019/04/26/kvm-qemu-cache-performance/ I use 'writeback' for the VM cache. With 'writeback', server RAM acts like one big disk cache for the VM.
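
    The cache mode is set per virtual disk. A minimal sketch only (VM ID 100, the 'scsi0' disk, and the 'local-lvm' storage are placeholders):
      # re-specify the existing volume with the new cache option
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback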
  5.

    How to use vlanID during installation of proxmox

    With the influx of organizations migrating from VMware to Proxmox, it's worth noting that ESXi already offers this option during installation. Offering it here as well would be one less friction point.
  6.

    Migrating a bare metal server to a VM on proxmox

    Search online for P2V (physical-to-virtual) converters. The most common ones convert to an ESXi disk (VMDK), but 'qemu-img convert' can convert VMDK to qcow2 or raw format.
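
    Illustrative only; the file names are placeholders:
      # convert an ESXi/VMware disk to qcow2 (use -O raw for raw output); -p shows progress
      qemu-img convert -p -f vmdk -O qcow2 source-disk.vmdk vm-disk.qcow2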
  7.

    Proxmox installation - hard disks and PowerEdge R620

    If you're going to use either ZFS or Ceph, you need an IT-mode disk controller. 12th-gen Dell PERC controllers can be flashed to IT-mode with the guide at https://fohdeesha.com/docs/perc.html I've converted a fleet of 12th-gen Dells to IT-mode, and they're all running either Ceph or ZFS.
  8.

    Proxmox installation - hard disks and PowerEdge R620

    Dell BOSS-S1 cards are technically not supported on 13th-gen Dells, but they do work. This server was previously an ESXi host used for backups, and ESXi recognized the card during install. It also shows up as an install target during the Proxmox installation. The server it is installed in is a 2U LFF...
  9.

    [SOLVED] Install Proxmox 8.1 on Boss-N1 and using Dell PERC H965i Controller

    Since Dell BOSS cards are used to mirror the OS, I can confirm that you can install Proxmox on a BOSS-S1 in a 13th-gen Dell. I used the card's CLI to configure the mirror. It shows up as an install target during the Proxmox install. I used XFS as the file system. Before I did that, I...
  10.

    Proxmox installation - hard disks and PowerEdge R620

    It's considered best practice to mirror the OS onto small drives, but with an 8-bay R620, yeah, you lose 2 drive bays. I previously used ZFS RAID-6 (RAIDZ2: you still give up 2 drives' worth of capacity, but any two drives can fail before data is lost). I went ahead and mirrored the OS and set up ZFS RAID-50 (striped RAIDZ1) on the rest. One last option...
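
    In ZFS terms, a RAID-50-style layout is a pool of striped RAIDZ1 vdevs. A sketch only, with hypothetical disk and pool names (the OS mirror itself is created by the Proxmox installer):
      # two RAIDZ1 vdevs striped together ("RAID-50"), with 4K-sector alignment
      zpool create -o ashift=12 tank raidz1 sda sdb sdc raidz1 sdd sde sdf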
  11.

    New Server - which CPU?

    2-node clusters (or any cluster with an even number of hosts) will need a QDevice. The QDevice provides a vote to break a tie, since each host will vote for itself. Ideally, you want an odd number of servers for quorum, e.g., 3, 5, 7, etc. The QDevice can be any device. For example, I use a bare-metal...
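
    A sketch of the usual QDevice setup, not quoted from the post; the address 192.0.2.10 is an example:
      # on the external QDevice host (any small, always-on Debian box)
      apt install corosync-qnetd
      # on the cluster nodes
      apt install corosync-qdevice
      # then, from one cluster node, register the QDevice
      pvecm qdevice setup 192.0.2.10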
  12.

    Advice on Server Purchase and implementation of system

    Just make sure the solid-state storage has power-loss protection (PLP) [i.e., enterprise-grade]; otherwise you're going to get really bad IOPS.
  13.

    [SOLVED] Windows VM Poor Performance on Ceph

    I don't run any production Ceph clusters on solid-state drives. Yup, it's all SAS 15K HDDs. I don't run Windows either, just Linux VMs. If you're going to use solid-state drives, you want enterprise solid-state storage with power-loss protection (PLP). With that being said, I use the following VM...
  14.

    [SOLVED] Network best practice?

    I use this config in production. YMMV.
      # Configure Dell rNDC X540/I350 Quad NIC card with 10GbE active and 1GbE as backup
      # VLAN 10 = Management network traffic
      # VLAN 20 30 40 = VM network traffic
      #
      auto lo
      iface lo inet loopback
      iface eno1 inet manual...
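
    The snippet is truncated, so as a rough sketch only (not the original file; names and addresses are examples), a "10GbE active / 1GbE backup" uplink is typically an active-backup bond under the bridge:
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno3
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.11/24
          gateway 192.0.2.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0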
  15.

    Recommendation for Datacenter grade switch for running Ceph

    For new gear, Arista. For a less expensive option, Mikrotik. Pick your speed, media (copper and/or optics), and number of ports. I use both without issues at 10GbE. Both vendors also make higher-speed switches.
  16.

    Migrate VMware VMs to Proxmox

    I used these guides: https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve/ https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/ You will need both the .vmdk descriptor (metadata) file and the -flat.vmdk data file for the 'qemu-img convert' command.
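
    Illustrative only; the VM ID, file name, and storage name are placeholders:
      # point the tools at the small descriptor .vmdk; it references the -flat.vmdk data file
      qm importdisk 100 myvm.vmdk local-lvm
      # or convert by hand and attach the result afterwards
      qemu-img convert -p -f vmdk -O qcow2 myvm.vmdk vm-100-disk-0.qcow2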
  17.

    2U 4 nodes 24 drives suggestions

    Depends on the use case. I don't use HW RAID, so it's either ZFS for stand-alone or Ceph for a cluster (3-node minimum, but I use 5-node). In either case, I use 2 small storage devices to mirror Proxmox via ZFS RAID-1. The rest of the storage is for data/VMs.
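
    A sketch of the "rest of the storage for data/VMs" part only, with hypothetical disk and pool names (the OS mirror is done in the installer by picking ZFS RAID1 on the two small devices):
      # mirrored data pool from the remaining disks, then register it with Proxmox
      zpool create -o ashift=12 vmdata mirror sdc sdd mirror sde sdf
      pvesm add zfspool vmdata --pool vmdata --content images,rootdir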
  18.

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    That's good info to know. I've changed the VM's disk cache to writeback and will monitor performance. I also have standalone ZFS servers and changed the cache to writeback there as well. Agree. You want enterprise solid-state storage with power-loss protection (PLP).
  19.

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    My optimizations are for a SAS HDD environment. I don't use any SATA/SAS SSDs in the Proxmox Ceph clusters I manage. I also don't run any Windows VMs. All the VMs are Linux, a mix of CentOS 7 (EOL this year), Rocky Linux (RL) 8 & 9, and AlmaLinux (AL) 8 & 9. For the RL/AL VMs, I set the...
  20.

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    I run a 3-node full-mesh broadcast (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup) Ceph cluster without issues. I flashed the PERCs to IT-mode. Corosync, Ceph public & private, and migration network traffic all go through the full-mesh broadcast network. IOPS for...
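
    A sketch along the lines of the linked wiki's broadcast setup, not quoted from the post; the NIC names and addresses are examples:
      # on each node: bond the two mesh NICs in broadcast mode (the other nodes use .2 and .3)
      auto bond0
      iface bond0 inet static
          address 10.15.15.1/24
          bond-slaves ens18 ens19
          bond-miimon 100
          bond-mode broadcast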