Search results

  1. J

    H310 - IT mode needed?

    I run H310 flashed to IT-mode in production. Zero issues. And yes, queue depth does matter.
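
    If you want to see what queue depth a drive actually gets behind the controller, a quick check (the /dev/sda name is just an example):

        cat /sys/block/sda/device/queue_depth    # per-device queue depth as seen by the kernel
        lspci | grep -i 'sas\|raid'              # confirm which controller the kernel sees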
  2. J

    Advice on which HBA Card to buy

    Migrated a production 5-node 16-drive bay R730 VMware cluster over to Proxmox Ceph. Swapped out the PERC H730 for HBA330 for "true" HBA mode. Updated HBA330 to latest firmware. First two drives are Intel DC S3710 SATA boot drives using ZFS RAID-1. Rest of drives are OSDs. Workloads range from...
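
    A quick sanity check after a swap like that, to confirm the HBA330 is presenting the drives raw (device names are just examples):

        lsblk -o NAME,MODEL,SERIAL,SIZE      # every bay should show up as its own disk
        smartctl -i /dev/sda                 # SMART info straight from the drive, no RAID virtual disk in between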
  3. J

    Migrated VMware ESXi of Linux VM to Proxmox 8.2.2. After Migrated -Booting Linux VMs into Rescue mode

    My reply also works for 8.2.x https://forum.proxmox.com/threads/d...t-works-with-vmware-pvscsi.144806/post-652170
  4. J

    Enterprise HW Config w/ Ceph

    You'll want to see my reply here https://forum.proxmox.com/threads/best-practices-for-setting-up-ceph-in-a-proxmox-environment.148790/post-673329 Since migrating off of VMware to Proxmox, no issues besides the typical drive dying and needing replacing. Will be migrating a 15-host VMware...
  5. J

    Hard Drive Best Practices

    I believe 64GB is the minimum; I don't think 32GB is enough. I do have a server using 100GB SSDs to ZFS RAID-1 Proxmox itself. So, 128GB is plenty. Since ZFS is a volume manager just like LVM, I would skip the LVM and use ZFS. You can JBOD with ZFS, i.e., ZFS RAID-0. Of course, back it up.
  6. J

    Hard Drive Best Practices

    Considered best practice to mirror the OS using RAID-1. In my case, I use two small drives to ZFS RAID-1 Proxmox itself. As for the rest of the storage, it depends on the use case. If it's a single drive, I do use ZFS RAID-0 (stripe), so I can use snapshots, rollbacks, compression, error checking...
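
    Rough sketch of the features I mean, even on a single-drive pool (dataset names are just examples):

        zfs set compression=lz4 rpool/data                 # transparent compression
        zfs snapshot rpool/data/vm-100-disk-0@pre-change   # cheap point-in-time snapshot
        zfs rollback rpool/data/vm-100-disk-0@pre-change   # roll back if the change goes bad
        zpool scrub rpool                                  # checksum/error checking of all data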
  7. J

    Issues Booting Proxmox 8.2.1 ISO in UEFI Mode on Dell R720

    I found installing Proxmox in UEFI mode on 12th-gen Dells is unpredictable. Luckily, the VMs are already in BIOS mode, so I just installed Proxmox using BIOS mode. However, on 13th-gen Dells, Proxmox UEFI install with Secure Boot enabled works fine. Go figure.
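
    Easy way to confirm which mode an install actually booted in:

        [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"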
  8. J

    Reducing Ceph Write Latency

    I use SAS HDDs in production. Since they are meant to be used on HW RAID controllers with a BBU, they ship with their write cache turned off. You may want to check if the HDDs have their cache enabled. VMs range from databases to DHCP/PXE servers. Not hurting for IOPS. I use the following optimizations learned...
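
    To check (and, only if you have the power protection to justify it, change) the drive cache, something like this works; /dev/sdX is a placeholder:

        sdparm --get=WCE /dev/sdX       # SAS: WCE 0 = write cache off, 1 = on
        hdparm -W /dev/sdX              # SATA equivalent
        # sdparm --set=WCE=1 /dev/sdX   # enables it -- data is at risk on power loss without UPS/PLP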
  9. J

    A Few Storage Questions from a PVE Newbie

    Migrated off TrueNAS SCALE to Proxmox because it didn't have full CLI functionality. Used the LXC *Arr scripts from here: https://tteck.github.io/Proxmox I am using privileged containers because I didn't want to configure UID/GID remapping. Using Homarr as the jumping point to other *Arr LXCs. Boot...
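
    For reference, the remapping I was avoiding looks roughly like this in the container config (container ID and UID/GID 1000 are just examples, and /etc/subuid and /etc/subgid need matching entries):

        # /etc/pve/lxc/101.conf -- map container uid/gid 1000 to host 1000, everything else to the high range
        lxc.idmap: u 0 100000 1000
        lxc.idmap: g 0 100000 1000
        lxc.idmap: u 1000 1000 1
        lxc.idmap: g 1000 1000 1
        lxc.idmap: u 1001 101001 64535
        lxc.idmap: g 1001 101001 64535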
  10. J

    CEPH Offline Installation

    I use POM (Proxmox Offline Mirror) in production on stand-alone bare-metal servers (which also double as PBS servers) and in a Debian 12 VM. It works. Just make sure to change the /etc/apt repo files to point to the POM server.
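
    Rough sketch of what I mean; the hostname and mirror paths are placeholders for your own POM server, and remember to comment out the stock enterprise repo entries:

        # /etc/apt/sources.list.d/pom.list
        deb http://pom.example.lan/mirrors/debian bookworm main contrib
        deb http://pom.example.lan/mirrors/pve bookworm pve-no-subscription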
  11. J

    Hardware for Proxmox Cluster with CEPH Storage

    When I migrated 13th-gen Dells to Proxmox Ceph, I swapped out the PERC for a Dell HBA330 (which is a pure IT-mode controller). Then I used ZFS RAID-1 to mirror Proxmox. The rest of the drives are OSDs. I don't think the HW RAID PERC can be configured in both HW RAID mode and pass-through mode at the...
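
    Once the drives are behind the HBA330, turning them into OSDs is roughly this (destructive, and /dev/sdX is a placeholder):

        ceph-volume lvm zap /dev/sdX --destroy   # wipe any leftover RAID/ZFS metadata (destroys data!)
        pveceph osd create /dev/sdX              # hand the raw disk to Ceph as an OSD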
  12. J

    From ESXI 6.5 to Proxmox - HomeServer - I have questions.

    I've used the Proxmox ESXi migration tool on production VMs with no issues. You'll need to disable session timeouts on the ESXi host(s). Make sure the VMs are powered off and have NO snapshots. ZFS works fine on SATA drives. ZFS provides snapshots, rollbacks, compression, and error checking of data and...
  13. J

    Guidance for Setting Up Proxmox on Dell PowerEdge T550 Server

    It should show up in the Proxmox installer as a boot target. It did for me on a BOSS-S1 and showed up as DELLBOSS-VD.
  14. J

    ESXi Import Timeout

    You need to disable maxSessionCount and sessionTimeout on the ESXi host per https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Automatic_ESXi_Import:_Step_by_Step
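
    From memory those settings live in the ESXi host's hostd config; treat the file path and values as assumptions and follow the wiki page above for the specifics:

        # on the ESXi host
        vi /etc/vmware/hostd/config.xml   # adjust maxSessionCount and sessionTimeout here
        /etc/init.d/hostd restart         # restart hostd so the change takes effect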
  15. J

    They asked me for a CEPH deployment plan!

    Yeah, really. By default, SAS HDDs ship with write cache disabled. The reason being that these drives were meant to be used on a HW RAID controller with BBU. The HW RAID will do the caching on behalf of the drives. So, when I converted the Dells over to Proxmox Ceph and replaced the HW RAID...
  16. J

    Import of Centos from Vmware fails

    See my reply at https://forum.proxmox.com/threads/debian-11-not-booting-with-virtio-scsi-single-but-works-with-vmware-pvscsi.144806/post-652170 It's still valid for Proxmox 8.2.x
  17. J

    They asked me for a CEPH deployment plan!

    For a proof-of-concept, 3 nodes will suffice. For production, you really, really want a minimum of 5 nodes. That way, 2 nodes can fail and you still have 3 nodes for quorum. I converted a fleet of 13th-gen Dells which used to run VMware vSphere over to Proxmox Ceph. Made sure all the nodes had the...
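
    The quorum math is simple: 5 votes means 3 are still a majority after losing 2 nodes. To verify on a running cluster:

        pvecm status                              # corosync view -- look for "Quorate: Yes"
        ceph quorum_status --format json-pretty   # Ceph monitor quorum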
  18. J

    CPU recommendation for proxmox server

    Since Proxmox is Debian with a custom Ubuntu-based kernel, it pretty much runs on any 64-bit CPU with Intel VT-x/AMD-V hardware virtualization. I have it running on Intel Sandy Bridge, Haswell, and Broadwell CPU generations with high core counts with no issues.
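
    Quick check that the CPU (and BIOS) actually expose hardware virtualization:

        grep -Ec '(vmx|svm)' /proc/cpuinfo   # >0 means Intel VT-x or AMD-V is available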
  19. J

    New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

    Supposedly someone over at r/Proxmox got it working with ESXi 6.0 and 5.5. You can always manually copy the .vmdk descriptor and -flat.vmdk files over and do a 'qemu-img convert'.
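
    Rough sketch of the manual route (VM ID, file names, and storage name are just examples):

        qemu-img convert -p -f vmdk -O qcow2 myvm.vmdk myvm.qcow2   # descriptor and -flat file must sit together
        qm importdisk 100 myvm.vmdk local-lvm                       # or import straight into a PVE storage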
  20. J

    Best Practices for Setting Up Ceph in a Proxmox Environment

    I manage several production 5- and 7-node Proxmox Ceph clusters. Why 5 or 7 nodes? Well, with 3 nodes, you can only tolerate a single node failure, and I believe that without quorum, no writing of data will occur. Strongly suggest 5 nodes at minimum; that way one can tolerate 2 node...
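
    On the data side, the replicated pool settings are what enforce that behavior; the pool name below is just an example:

        ceph osd pool get vm-pool size       # replicas per object (typically 3)
        ceph osd pool get vm-pool min_size   # I/O pauses when fewer than this many replicas are up (typically 2)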