Search results

  1. "Hybrid" pool understanding

    Just add the test repo, there is ZFS 2.4.
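
    As a rough sketch, adding the test repository on PVE 9 could look like the following deb822 entry; the file name, the pve-test component name and the keyring path are assumptions, so check the current Proxmox repository documentation before using it.

      # /etc/apt/sources.list.d/pve-test.sources  (file name assumed)
      Types: deb
      URIs: http://download.proxmox.com/debian/pve
      Suites: trixie
      Components: pve-test
      Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg

      # then pull in the newer ZFS packages
      apt update && apt full-upgrade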
  2. Call for Evidence: Share your Proxmox experience on European Open Digital Ecosystems initiative (deadline: 3 February 2026)

    Even though I am in Europe but not in the EU, this is a great initiative. And I agree about Proxmox's influence on virtualization services.
  3. Proxmox Virtual Environment 9.1 available!

    ZFS 2.4 is in the test repo atm, and it is working well on >5 machines here.
  4. Review appreciated/best practise: new pve environment with ceph

    My recommendation is to use physical links for corosync, not VLAN ones. As for the other things, this is okay. All links should be active-passive in Proxmox, so that you don't care if something dies, and that is it. Ceph will work great in that case, and those SSDs work really well in practice.
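
    To illustrate the active-passive advice, a minimal /etc/network/interfaces fragment for an active-backup bond could look like this; the NIC names and addresses are placeholders, and a dedicated corosync link would use its own physical NIC pair configured the same way, just without the bridge.

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.11/24
          gateway 192.0.2.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0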
  5. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Then okay, give them one disk (Ceph) for the OS and one for the VM, and that is that. No RAID inside the guest and other nonsense.
  6. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    I mean, it makes sense if you sell something like Proxmox with 2-4 CPUs and 64 GB of RAM, so that you can spin up 20 Proxmox instances.
  7. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Okay, you offer customers a hypervisor. It makes sense, not much, but it does. I would always use Ceph for that.
  8. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    It is an okay offering, I see there is a financial reason for it. Unfortunately, only nested Proxmox is an option, but I cannot tell you how big the performance loss is. Maybe install the OS on RAID somewhere and add 2x NVMe as passthrough for VM data.
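
    A minimal sketch of the NVMe passthrough part, assuming IOMMU is already enabled on the host and using VMID 101 and the PCI addresses purely as placeholders:

      # find the NVMe controllers on the host
      lspci -nn | grep -i nvme

      # pass both controllers through to the nested Proxmox VM
      qm set 101 --hostpci0 0000:41:00.0 --hostpci1 0000:42:00.0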
  9. ZFS pool usage limits vs. available storage for dedicated LANcache VM

    He is already using SSDs for its pool. You can add mirrors to mirrors, effectively getting RAID10.
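
    A short sketch of the "mirrors to mirrors" idea, with the pool name tank and the disk names as placeholders:

      # pool currently has a single mirror vdev
      zpool status tank

      # add a second mirror vdev; writes now stripe across both mirrors (RAID10-like)
      zpool add tank mirror /dev/sdc /dev/sdd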
  10. [SOLVED] Weird RAID configurations for redundancy

    Maybe in the paranoid case you can go with RAIDZ3, so that 3 disks can die? And set up a spare.
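
    For illustration, creating such a pool could look like this, with the pool and disk names as placeholders:

      # RAIDZ3 vdev: survives up to 3 simultaneous disk failures
      zpool create tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

      # add a hot spare and let it kick in automatically
      zpool add tank spare /dev/sdh
      zpool set autoreplace=on tank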
  11. VMware user here

    I've worked with a few companies that migrated a huge load of RDS servers, so my recommendation is to start with a 3-node Ceph cluster and 10G networking and grow from there. Go up to, let's say, 10-15 nodes, then create a new cluster. No problem with that.
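
    As a rough sketch of bootstrapping such a 3-node Ceph setup with pveceph (the 10.10.10.0/24 cluster network, the disk name and the pool name are placeholders, and subcommand spellings may differ slightly between PVE releases):

      pveceph install                       # on every node
      pveceph init --network 10.10.10.0/24  # once, on the first node
      pveceph mon create                    # on each of the 3 nodes
      pveceph osd create /dev/nvme0n1       # per OSD disk, per node
      pveceph pool create vmstore           # RBD pool for VM disks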
  12. Kernel 6.17 bug with megaraid-sas (HPE MR416)

    I work around that on Supermicro by shutting down all VMs and CTs and then updating/upgrading everything. Try that.
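
    A hedged sketch of that workaround on the CLI; the loops simply shut down every VM and CT listed on the node before upgrading:

      # stop all guests on this node
      for id in $(qm list | awk 'NR>1 {print $1}'); do qm shutdown "$id"; done
      for id in $(pct list | awk 'NR>1 {print $1}'); do pct shutdown "$id"; done

      # upgrade with the storage idle, then reboot into the new kernel
      apt update && apt dist-upgrade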
  13. Hardware requirements or recommendations for PDM ?

    Usually with NMS or monitoring systems in big support companies, you have one machine outside of everything (different power, switch and usually a 3G modem) so that when anything or everything dies you still get notifications etc. If you are maintaining more than 10-20 clusters, then it makes sense to...
  14. RSTP on Switch with Proxmox

    No, didn't need that on my end.
  15. RSTP on Switch with Proxmox

    Here is how I do it on an EX2200:

      ge-0/0/21 {
          description SP1-data;
          unit 0 {
              family ethernet-switching {
                  interface-mode trunk;
                  vlan {
                      members [ Server-Vlan Host-Vlan Voice-Vlan Wifi-Vlan ];
                  }
              }
          }
      }
  16. Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    I had a similar problem with megaraid_sas: the ZFS RAID1 boot disks couldn't be written to when the machine load was high. Once I had shut down the VMs on it, the kernel upgrade or proxmox-boot-tool would run okay. This was on Supermicro.
  17. Proxmox cluster load high

    Usually very high load points to problems with storage, e.g. disks that cannot write fast enough. Look at the disks first.
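
    A quick way to do that, assuming sysstat is installed for iostat and that ZFS is in use for the zpool variant:

      # per-device utilization and latency, refreshed every 2 seconds
      iostat -x 2

      # per-vdev throughput and latency for a ZFS pool
      zpool iostat -v 2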