Search results

  1. J

    Proxmox Offline Mirror Pick the Latest Snapshot

    I use this script by Thomas: https://forum.proxmox.com/threads/proxmox-offline-mirror-released.115219/#post-506894
  2. J

    [Help] Dell R740 + Broadcom BCM5720 NDC - Ports Active (Lights On) but Not Detected by Proxmox

    I use Dell Intel X550 rNDCs in production without issues, both the 2x1GbE-2x10GbE and 4x10GbE versions. The 10GbE ports use the ixgbe driver and the 1GbE ports use the igb driver. Use 'dmesg -t' to confirm. Obviously flash the rNDC to the latest firmware version, which is currently v23.0.0 dated 20Sep2024...
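    For reference, a quick way to confirm which driver each port got bound to (the interface name below is just an example):

        # Driver reported per interface (replace eno1 with your port)
        ethtool -i eno1
        # Or check the kernel log for the ixgbe/igb probe messages
        dmesg -t | grep -Ei 'ixgbe|igb'
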
  3. J

    H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    It's these one-off situations with the megaraid_sas driver that are the problem; just installing a Dell HBA330, which uses the much simpler mpt3sas driver, will avoid all this drama. LOL. In addition, the Dell HBA330 is very cheap to get.
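    If you want to double-check which driver your controller is actually on, something like this works (adjust the grep patterns for your card):

        # Controller model and the kernel driver bound to it
        lspci -nnk | grep -iA3 -E 'raid|sas'
        # Confirm whether megaraid_sas or mpt3sas is loaded
        lsmod | grep -E 'megaraid_sas|mpt3sas'
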
  4. J

    Esxi migration to proxmox

    While it's true 3 nodes is the minimum for a Ceph cluster, you can only lose 1 node before losing quorum. You'll really want 5 nodes. Ceph is a scale-out solution: more nodes/OSDs = more IOPS, and a 5-node cluster can lose 2 nodes and still have quorum. While converting the PERC to HBA-mode does work, I've had...
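    For anyone sizing this out on an existing cluster, quorum on both layers can be checked with the standard tools:

        # Proxmox/Corosync cluster quorum
        pvecm status
        # Ceph monitor quorum and overall health
        ceph quorum_status
        ceph -s
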
  5. J

    Storage for production cluster

    You really need to confirm that the write cache is enabled via 'dmesg -t' output on each drive. If the write/read cache is disabled, it really kills the IOPS. While technically 3 nodes is indeed the bare minimum for Ceph, I don't consider it production worthy due to the fact that if you lose 1...
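    A sketch of how I'd verify that per drive (sdparm comes from the sdparm package; /dev/sda is just an example):

        # The kernel logs the cache setting when each disk is probed
        dmesg -t | grep -i 'write cache'
        # Query the drive directly: WCE=1 means write cache enabled
        sdparm --get=WCE /dev/sda
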
  6. J

    Proxmox VE 9.1 Installation on Dell Poweredge R630

    On 13G Dell PVE hosts running the 6.17.2-2 kernel, I have UEFI, Secure Boot, X2APIC, and I/OAT DMA enabled. I do NOT have SR-IOV enabled.
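    Rough way to sanity-check those from a running host, if needed (mokutil is a separate package; the ioatdma grep assumes the I/OAT driver logs under that module name):

        # Secure Boot state
        mokutil --sb-state
        # x2APIC and I/OAT DMA show up in the kernel log
        dmesg -t | grep -i x2apic
        dmesg -t | grep -i ioatdma
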
  7. J

    Storage for production cluster

    Been migrating 13G Dell VMware vSphere clusters over to Proxmox Ceph using SAS drives and 10GbE networking on isolated switches. Ceph is a scale-out solution, so more nodes = more IOPS. Not hurting for IOPS on 5-, 7-, 9-, 11-node clusters. Just like with vSAN, have homogeneous hardware (same CPU...
  8. J

    Shared Storage across 10GB Fiber for a ProxMox Cluster?

    Plenty of past posts on NOT running Proxmox on SD cards. It will cause instability and crashes.
  9. J

    Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    Actually had to pin the 6.14.11-4 kernel on PBS instances. 6.17.2-1 was giving intermittent issues on a BOSS-S1.
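    For anyone who hasn't pinned a kernel before, a minimal sketch (substitute whatever 'proxmox-boot-tool kernel list' shows on your system; the -pve suffix below is assumed):

        # List installed kernels, pin the known-good one, and sync the ESPs
        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 6.14.11-4-pve
        proxmox-boot-tool refresh
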
  10. J

    Shared Storage across 10GB Fiber for a ProxMox Cluster?

    Disclaimer: I do NOT use ZFS shared storage in production. I use Ceph for shared storage. I do use ZFS on standalone servers, and I use ZFS RAID-1 to mirror Proxmox on small 76GB SAS drives. With that being said, your best bet is ZFS over iSCSI per https://pve.proxmox.com/wiki/Storage. Plenty of blog...
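    For what it's worth, a minimal sketch of what a ZFS over iSCSI entry in /etc/pve/storage.cfg can look like (storage ID, pool, portal IP, and IQN are all placeholders):

        zfs: zfs-iscsi-example
            pool tank
            portal 192.168.1.50
            target iqn.2003-01.org.linux-iscsi.storage:target1
            iscsiprovider LIO
            content images
            sparse 1
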
  11. J

    Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    I actually had to pin the 6.17.2-1 kernel on an R530 BOSS-S1 PBS instance. It locks up with the 6.17.2-2 kernel. Something obviously changed in 6.17.2-2.
  12. J

    Fileserver recommendations

    I use the Proxmox VE Helper-Scripts (LXC) to provide NFS/CIFS/Samba file sharing (https://community-scripts.github.io/ProxmoxVE). No issues sharing ZFS pools. There are other scripts to manage media and other services.
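    Whichever script spins the container up, the share itself presumably ends up as ordinary NFS/Samba config inside it; a rough sketch, with the dataset path and subnet as placeholders:

        # /etc/exports in the file-server container
        /tank/media 192.168.1.0/24(rw,sync,no_subtree_check)

        # Minimal share block in /etc/samba/smb.conf for the same dataset
        [media]
            path = /tank/media
            read only = no
            guest ok = no
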
  13. J

    [SOLVED] Dell PowerEdge R630 reboots immediately on kernel 6.17.2-2-pve (PWR2262 / CPU0704 / UEFI0078)

    Running the latest kernel (6.17.2-2) without issues on 13G Dells. Also running fine on 12G, 11G, and 10G Dells. These 13G Dells do have the latest firmware and BIOS. UEFI, Secure Boot, X2APIC, and I/OAT DMA are enabled. I do NOT have SR-IOV enabled.
  14. J

    The New Guy to Homelabs based on linux

    A long time ago, in a galaxy far, far away, I used to run a media server under Windows, but the constant patching kept breaking my instance. Migrated over to Arch Linux, mounted the NTFS drive read-only under Linux, and copied the media over to an ext4 filesystem. This obviously took a while...
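    For anyone doing a similar migration, the read-only mount and copy can look roughly like this (device and paths are examples; uses the ntfs-3g package, or the in-kernel ntfs3 driver on newer kernels):

        # Mount the old Windows drive read-only and copy the media off it
        mount -t ntfs-3g -o ro /dev/sdb1 /mnt/old-media
        rsync -avh --progress /mnt/old-media/ /srv/media/
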
  15. J

    [SOLVED] Debian 11 not booting with "VirtIO SCSI Single" but works with "VMware PVSCSI"

    During the next kernel update, dracut will only include the drivers required for booting up.
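    If the initramfs is missing the virtio drivers, a sketch of forcing them in up front (assuming dracut's hostonly mode is what dropped them):

        # Always include the virtio storage drivers, then rebuild all initramfs images
        echo 'add_drivers+=" virtio_scsi virtio_blk "' > /etc/dracut.conf.d/virtio.conf
        dracut --force --regenerate-all
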
  16. J

    10Gb network adapter

    I'm still on Proxmox 8 at home. I do NOT use a transceiver at home; I use a direct-attach cable (DAC) from the NIC to the switch. I believe the issues with the ConnectX-3 not working with Proxmox 9 may have to do with transceiver compatibility. Won't know until I migrate my server during...
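    For whoever hits this, the module/DAC side can be checked with ethtool (interface name is an example; DACs don't always report EEPROM data):

        # Module/transceiver EEPROM info, if the NIC exposes it
        ethtool -m enp3s0
        # Negotiated link speed and state
        ethtool enp3s0
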
  17. J

    How to install Proxmox with GRUB?

    True this. The Proxmox team may want to consider just using the GRUB bootloader exclusively in future versions, like how other operating systems do it, and avoid the issues with systemd-boot to GRUB migrations. Can confirm that the GRUB bootloader can still boot with Secure Boot disabled. Have...
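    Handy for checking where a host actually stands before or after such a migration (a quick sketch):

        # Which bootloader the ESP(s) are set up for (GRUB vs systemd-boot)
        proxmox-boot-tool status
        # UEFI boot entries, for a cross-check
        efibootmgr -v
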
  18. J

    Proxmox Virtual Environment 9.1 available!

    No issues on 13th-gen Dells. Time to upgrade the 12th-, 11th-, and 10th-gen Dells. On 13th-gen Dells, I have UEFI, Secure Boot, X2APIC, and I/OAT DMA enabled; SR-IOV disabled.
  19. J

    How to change to 10Gbps NIC Card option for better migration performance

    You'll probably want to use the alternative method created by member PwrBank to overcome the built-in ESXi throttling (https://github.com/PwrBank/pve-esxi-import-tools/tree/direct-send). More info about this method at https://forum.proxmox.com/threads/import-from-esxi-extremely-slow.173450
  20. J

    full mesh network for ceph configuration

    Assuming it's a 3-node Ceph cluster with no switch, per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Example, I put the Ceph public, Ceph private, and Corosync network traffic on this network. I also set the datacenter migration network to use this network (either via GUI or CLI) and...
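    For the CLI route, the migration network is a one-liner in /etc/pve/datacenter.cfg (the CIDR below is just an example mesh subnet; substitute your own):

        # /etc/pve/datacenter.cfg
        migration: secure,network=10.15.15.0/24
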