Search results

  1. J

    Hardware recommendation for painless update?

    Best bang for the buck will be used enterprise servers, specifically 13th-gen Dells. They have a built-in drive controller that can be used in either IR or IT mode (IT mode is needed for ZFS/Ceph) and a built-in rNDC (rack network daughter card) upgradable to 10GbE networking (fiber or copper or both)...
  2. J

    Ceph performance issue

    I use the following optimizations in a 5-node 12th-gen Dell cluster using SAS drives: set write cache enable (WCE) to 1 on the SAS drives; set VM cache to none; set the VM to use the VirtIO SCSI single controller and enable the IO thread and discard options; set the VM CPU type to 'host'; set the VM CPU...
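    A minimal sketch of those settings as shell commands, assuming a hypothetical VM 100 with its disk on a Ceph storage named 'ceph-pool' and a SAS drive at /dev/sda (adjust names to your setup):

      # sdparm --set WCE=1 --save /dev/sda
      # qm set 100 --scsihw virtio-scsi-single
      # qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none,iothread=1,discard=on
      # qm set 100 --cpu host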
  3. J

    PBS with HDDs

    Granted, flash is the way to go, but I do back up two Ceph clusters with a Dell R200 and a Dell R620 using SAS drives. These Dells are decommissioned and still functional, so they make great PBS servers. Not the fastest, but they back up/restore just fine. PBS benchmarks...
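    For comparable numbers on your own hardware, PBS ships a built-in benchmark; a sketch, assuming a hypothetical datastore 'store1' on host 'pbs.local':

      # proxmox-backup-client benchmark --repository root@pam@pbs.local:store1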
  4. J

    ceph production with 3 nodes and SAS HDD disks?

    It's true that you need a minimum of 3 nodes for Ceph, but it's highly recommended to get more nodes. With that being said, I do run a 3-node full-mesh broadcast bonded 1GbE Ceph Quincy cluster on 14-year-old servers using 8x SAS drives per node (2 of them are used for OS boot drives using ZFS...
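    A sketch of the broadcast bond in /etc/network/interfaces, assuming hypothetical NICs eno2/eno3 and a 10.15.15.0/24 mesh subnet:

      auto bond0
      iface bond0 inet static
          address 10.15.15.1/24
          bond-slaves eno2 eno3
          bond-mode broadcast
          bond-miimon 100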
  5. J

    Shared SAS Storage and choice of fs

    May want to read this: https://forum.proxmox.com/threads/2-node-cluster-w-shared-disc.109269 If you really want 2-node "shared" storage, you can use ZFS replication and a 3rd non-cluster RPI/VM/PC as a QDevice for quorum. I run a full-mesh broadcast bonded 1GbE 3-node Ceph cluster on 14-year...
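    A sketch of wiring up such a QDevice, assuming the external box sits at a hypothetical 192.168.1.50:

      # apt install corosync-qnetd      (on the RPI/VM/PC)
      # apt install corosync-qdevice    (on every cluster node)
      # pvecm qdevice setup 192.168.1.50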
  6. J

    Help - CPU choice for proxmox

    It's true that Xeon E5s are EOL, but functionally there is nothing wrong with them. I use E5v4-L Xeons in production on both Proxmox and VMware.
  7. J

    Local disk vs CEPH for clustered applications?

    It's true that local storage is faster but I use Ceph straight up for VMs. That includes apps that do their own replication. No issues.
  8. J

    Proxmox Offline Mirror released!

    Just created a Proxmox Offline Mirror instance. I've noticed the setup wizard for creating a Ceph mirror does not include an option to mirror the Quincy release of Ceph. How can I manually create it?
  9. J

    Ceph disk planning OSD journal

    May want to use that SSD for the Ceph DB as well. It will help with writes to the SAS drives. The SSD is enterprise class, correct? Ceph eats consumer SSDs like nobody's business. Here are the Ceph VM optimizations I use: set write cache enable (WCE) to 1 on the SAS drives; set VM cache to none...
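    A sketch of creating an OSD with its DB on the SSD, assuming hypothetical devices /dev/sdb (SAS HDD) and /dev/sdc (enterprise SSD):

      # pveceph osd create /dev/sdb --db_dev /dev/sdc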
  10. J

    New to Proxmox, need a new server

    Proxmox runs on top of Debian, so if it's supported on Debian, it should work on Proxmox. Proxmox will definitely burn out any flash storage (that includes SD cards and consumer SSDs) if it's not enterprise grade. I use 2 x SAS HDDs for the Proxmox OS itself, mirrored with ZFS RAID-1. Then use...
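    The installer builds that mirror when you select ZFS RAID1 and both disks; assuming the default pool name rpool, it can be verified afterwards with:

      # zpool status rpool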
  11. J

    1 server, 3 GPU's - is it possible?

    This post is for Intel iGPU. Maybe it can give you hints for AMD https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/
  12. J

    We want to buy new hardware. What's best?

    Since Proxmox runs on top of Debian and not a proprietary kernel, I'm partial to Supermicro blade servers. Can't get more generic than that. More info at https://www.supermicro.com/en/products/blade May also want to check out their Twin series of servers, which support multiple nodes per chassis.
  13. J

    [SOLVED] Imported ESXi CentOS VM does not boot

    For RHEL and derivatives, run the following on the original VM:

      # dracut --force --verbose --no-hostonly
  14. J

    1 server, 3 GPU's - is it possible?

    This may help but then again the GPUs are discrete https://www.youtube.com/watch?v=pIdCV1H1_88
  15. J

    [SOLVED] Kernel panic installing rocky or almalinux

    Yes, it's safe to change it. I believe you have to shut down the VM, change its CPU type, and power it back on.
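    A sketch of that sequence from the CLI, assuming a hypothetical VM 100 and 'host' as the new CPU type:

      # qm shutdown 100
      # qm set 100 --cpu host
      # qm start 100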
  16. J

    [SOLVED] Kernel panic installing rocky or almalinux

    If the nodes in your cluster have the same CPU family, live migration should work with the VM CPU type set to 'host'. For example, I can live migrate between an R720 and an R820 because they both have Sandy Bridge CPUs.
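    With matching CPU families, an online migration is then a one-liner (hypothetical VM 100, target node pve2):

      # qm migrate 100 pve2 --online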
  17. J

    Can I move a CEPH disk between nodes?

    Can you tolerate downtime? It would be best to back up the VMs (preferably with PBS) and data, then re-install PVE with Ceph Quincy.
  18. J

    Proxmox7 supported raid controller

    It should, since the H330 uses an LSI 3008 SAS chip.
  19. J

    KVM to Proxmox convert initramfs failed boot centos7

    For RHEL and derivatives, run the following on the original VM:

      # dracut --force --verbose --no-hostonly

    The above was required to migrate from ESXi to KVM. It should work for KVM to KVM as well.
  20. J

    [SOLVED] Kernel panic installing rocky or almalinux

    The root cause of this issue is that RHEL 9 and its derivatives are compiled for the x86-64-v2 microarchitecture level: https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level#background_of_the_x86_64_microarchitecture_levels More...
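    A quick check for x86-64-v2 support from inside a guest is to ask the dynamic linker (glibc 2.33+; the path assumes a 64-bit RHEL-like guest):

      # /lib64/ld-linux-x86-64.so.2 --help | grep supported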