Search results

  1. Esxi migration to proxmox

    While it's true 3 nodes is the minimum for a Ceph cluster, you can only lose 1 node before losing quorum. You'll really want 5 nodes. Ceph is a scale-out solution: more nodes/OSDs = more IOPS. With 5 nodes you can lose 2 and still have quorum. While converting the PERC to HBA mode does work, I've had...
  2. Storage for production cluster

    You really need to confirm that write cache is enabled via 'dmesg -t' output for each drive. If the write/read cache is disabled, it really kills IOPS. While technically 3 nodes is indeed the bare minimum for Ceph, I don't consider it production worthy due to the fact that if you lose 1...
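The cache check described above can be sketched as a one-liner (a sketch: the log wording shown is typical for sd-class devices, and the device name is a placeholder):

```shell
# List each disk's cache state as logged by the kernel at attach time.
# sd-class devices log a line like:
#   sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, ...
dmesg -t 2>/dev/null | grep -i 'write cache'

# A drive showing "Write cache: disabled" can usually be switched on with
# sdparm (SAS) or hdparm (SATA); /dev/sda below is a placeholder:
#   sdparm --set WCE /dev/sda
#   hdparm -W1 /dev/sda
```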
  3. Proxmox VE 9.1 Installation on Dell Poweredge R630

    On 13G Dell PVE hosts running the 6.17.2-2 kernel, I have UEFI, Secure Boot, X2APIC, and I/OAT DMA enabled. I do NOT have SR-IOV enabled.
  4. Storage for production cluster

    Been migrating 13G Dell VMware vSphere clusters over to Proxmox Ceph using SAS drives and 10GbE networking on isolated switches. Ceph is a scale-out solution, so more nodes = more IOPS. Not hurting for IOPS on 5-, 7-, 9-, or 11-node clusters. Just like with vSAN, have homogeneous hardware (same CPU...
  5. Shared Storage across 10GB Fiber for a ProxMox Cluster?

    Plenty of past posts on NOT running Proxmox on SD cards. It will cause instability and crashes.
  6. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Actually had to pin the 6.14.11-4 kernel on PBS instances. 6.17.2-1 was giving intermittent issues on BOSS-S1.
  7. Shared Storage across 10GB Fiber for a ProxMox Cluster?

    Disclaimer: I do NOT use ZFS shared storage in production. I use Ceph for shared storage. I do use ZFS on standalone servers: I use ZFS RAID-1 to mirror Proxmox on small 76GB SAS drives. With that being said, your best bet is ZFS over iSCSI per https://pve.proxmox.com/wiki/Storage Plenty of blog...
  8. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    I actually had to pin the 6.17.2-1 kernel on an R530 BOSS-S1 PBS instance. It locks up with the 6.17.2-2 kernel. Something obviously changed in 6.17.2-2.
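Pinning a kernel on PVE/PBS, as described in the two posts above, can be done with proxmox-boot-tool (a sketch; the version string is just this thread's example, and these commands only make sense on a Proxmox host):

```shell
# Show installed kernels and which one is currently selected or pinned
proxmox-boot-tool kernel list

# Pin the known-good kernel so updates don't switch the default away from it
proxmox-boot-tool kernel pin 6.17.2-1-pve

# Later, once a fixed kernel lands, remove the pin:
#   proxmox-boot-tool kernel unpin
```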
  9. Fileserver recommendations

    I use the Proxmox VE Helper-Scripts (LXC) to provide NFS/CIFS/Samba file sharing: https://community-scripts.github.io/ProxmoxVE No issues sharing ZFS pools. There are other scripts to manage media and other services.
  10. [SOLVED] Dell PowerEdge R630 reboots immediately on kernel 6.17.2-2-pve (PWR2262 / CPU0704 / UEFI0078)

    Running the latest kernel (6.17.2-2) without issues on 13G Dells. Also running fine on 12G, 11G, and 10G Dells. These 13G Dells do have the latest firmware and BIOS. UEFI, Secure Boot, X2APIC, and I/OAT DMA are enabled. I do NOT have SR-IOV enabled.
  11. The New Guy to Homelabs based on linux

    A long time ago, in a galaxy far, far away, I used to run a media server under Windows, but the constant patching kept breaking my instance. Migrated over to Arch Linux, mounted the NTFS drive read-only under Linux, and copied the media over to an ext4 filesystem. This obviously took a while...
  12. [SOLVED] Debian 11 not booting with "VirtIO SCSI Single" but works with "VMware PVSCSI"

    During the next kernel update, dracut will only install the drivers required for booting.
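One way to avoid that trap when switching controller types is to force-include the driver in the initramfs ahead of time (a sketch; the dracut drop-in path and the `virtio_scsi` module name are assumptions based on mainline kernels):

```shell
# /etc/dracut.conf.d/virtio.conf
# Always include the VirtIO SCSI driver so the guest can still boot after
# the controller type is switched to "VirtIO SCSI Single".
add_drivers+=" virtio_scsi "
```

After editing, regenerate the initramfs with `dracut -f` so the change takes effect before the next reboot.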
  13. 10Gb network adapter

    I'm still on Proxmox 8 at home. I do NOT use a transceiver at home; I use a direct-attach cable (DAC) from the NIC to the switch. I believe the issues with the ConnectX-3 not working with Proxmox 9 may have to do with transceiver compatibility. Won't know until I migrate my server during...
  14. How to install Proxmox with GRUB?

    True this. The Proxmox team may want to consider using the GRUB bootloader exclusively in future versions, like other operating systems do, to avoid the issues with systemd-boot to GRUB migrations. Can confirm that the GRUB bootloader can still boot with Secure Boot disabled. Have...
  15. Proxmox Virtual Environment 9.1 available!

    No issues on 13th-gen Dells. Time to upgrade the 12th-, 11th-, and 10th-gen Dells. On the 13th-gen Dells, I have UEFI, Secure Boot, X2APIC, and I/OAT DMA enabled. SR-IOV disabled.
  16. How to change to 10Gbps NIC Card option for better migration performance

    You'll probably want to use the alternative method created by member PwrBank to overcome the built-in ESXi throttling: https://github.com/PwrBank/pve-esxi-import-tools/tree/direct-send More info about this method at https://forum.proxmox.com/threads/import-from-esxi-extremely-slow.173450
  17. full mesh network for ceph configuration

    Assuming it's a 3-node Ceph cluster with no switch per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Example I put the Ceph public, Ceph private, and Corosync network traffic on this network. I also set the datacenter migration network to use this network (either via GUI or CLI) and...
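Setting the datacenter migration network via the CLI, as mentioned above, amounts to one line in /etc/pve/datacenter.cfg (the subnet is an example for the mesh network):

```
# /etc/pve/datacenter.cfg -- route live-migration traffic over the mesh
migration: secure,network=10.15.15.0/24
```

The same option is exposed in the GUI under Datacenter → Options → Migration Settings.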
  18. Ceph Cluster - Slow performance

    If this 3-node cluster is never going to be expanded, create a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Example and https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup This setup removes the switch and puts the...
  19. How many nodes in a single cluster with Ceph requirement?

    I use Dell R630s in production as Proxmox Ceph clusters. These were converted from VMware/vSphere. They all have the same hardware (CPU: 2 x 2650v4, storage: SAS 10K, storage controller: HBA330, RAM: 512GB, NIC: Intel X550 10GbE) running the latest firmware. Ceph & Corosync network traffic on isolated...
  20. About Proxmox VE & PBS features

    Maybe the root cause is the megaraid_sas driver being unable to use a "mixed-mode" RAID configuration (RAID and HBA [passthrough] mode). No idea, since I don't use mixed-mode. I still stand by my recommendation of getting a used Dell HBA330 controller. They are cheap to get. Got one...