Recent content by ness1602

  1.

    VMware user here

    I've worked with a few companies that migrated huge loads of RDS servers, so my recommendation is: start with a 3-node Ceph cluster and 10G networking and grow from there. Go up to, say, 10-15 nodes, then create a new cluster. No problem with that.
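    A minimal sketch of bootstrapping such a 3-node Ceph cluster with the pveceph tooling; the subnet and device names are assumed example values:

    ```shell
    # On each of the three nodes: install the Ceph packages
    pveceph install

    # On the first node: initialize Ceph with a dedicated 10G cluster network
    # (10.10.10.0/24 is an assumed example subnet)
    pveceph init --network 10.10.10.0/24

    # On every node: create a monitor and a manager
    pveceph mon create
    pveceph mgr create

    # On every node: turn the data disks into OSDs (/dev/sdb is an example)
    pveceph osd create /dev/sdb
    ```

    Growing from 3 to 10-15 nodes is then just joining new nodes to the PVE cluster and repeating the mon/mgr/OSD steps on them.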
  2.

    Kernel 6.17 bug with megaraid-sas (HPE MR416)

    I work around that on Supermicro by shutting down all VMs and CTs and then updating/upgrading everything. Try that.
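    As a sketch, assuming clean guest shutdowns work, the whole cycle on one node looks like this (guest IDs are read from qm/pct, nothing here is node-specific):

    ```shell
    # Shut down all running VMs on this node
    # (qm list: column 1 is VMID, column 3 is the status)
    for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
        qm shutdown "$vmid"
    done

    # Shut down all running containers
    # (pct list: column 1 is VMID, column 2 is the status)
    for ctid in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
        pct shutdown "$ctid"
    done

    # With the guests down, update and upgrade the node
    apt update && apt dist-upgrade
    ```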
  3.

    Hardware requirements or recommendations for PDM ?

    Usually with NMS or monitoring systems in big support companies, you have one machine outside of everything (separate power, switch, and usually a 3G modem) so that when anything or everything dies you still get notifications etc. If you are maintaining more than 10-20 clusters, then it makes sense to...
  4.

    RSTP on Switch with Proxmox

    No, I didn't need that on my end.
  5.

    RSTP on Switch with Proxmox

    Here is how I do it on an EX2200: ge-0/0/21 { description SP1-data; unit 0 { family ethernet-switching { interface-mode trunk; vlan { members [ Server-Vlan Host-Vlan Voice-Vlan Wifi-Vlan ];
  6.

    Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

    I had a similar problem with megaraid_sas: the ZFS RAID1 boot disks couldn't be written to when the machine load was high. Once I'd shut down the VMs on it, the kernel upgrade or proxmox-boot-tool would run okay. This was a Supermicro.
  7.

    Proxmox cluster load high

    Usually a very high load points to problems with storage, e.g. the disks cannot write fast enough. Look at the disks first.
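    To see whether the disks are the bottleneck, watching extended iostat output is usually enough (iostat comes from the sysstat package):

    ```shell
    # %util pinned near 100 together with rising w_await/aqu-sz means the
    # disks cannot keep up with the write load; refresh every 2 seconds
    iostat -x 2
    ```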
  8.

    Active directory PDC - Best backup practices ?

    I do it all the time for my customers, but what Windows version do you have?
  9.

    LizardFS anyone?

    What is the use case for SaunaFS in Proxmox: image storage like CephFS, or do you run VMs on it? Since you are using SATA drives, did you run some fio tests or similar to compare the performance to Ceph?
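    For such a comparison, a rough fio run like this gives comparable 4k random-write numbers on both storages (the path, size, and runtime are example values):

    ```shell
    # 4k random writes, direct I/O, 60 seconds, aggregated reporting
    fio --name=randwrite --directory=/mnt/saunafs \
        --rw=randwrite --bs=4k --size=1G --iodepth=32 \
        --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting
    ```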
  10.

    is replication between two servers possible?

    In that case the best you can do is use ZFS on both nodes and pve-zsync.
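    A sketch of such a job (the target IP, dataset, and job name are example values):

    ```shell
    # Create a recurring pve-zsync job replicating VM 100 to the other node,
    # keeping the last 7 snapshots
    pve-zsync create --source 100 --dest 192.168.1.2:tank/backup \
        --name vm100-sync --maxsnap 7 --verbose
    ```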
  11.

    Storage Type for Splunk Infrastructure on Proxmox

    If you are using hardware RAID, then you format it as LVM-thin (I usually do; it's the default). Then you create the Splunk VM on top of that. But why RAID5 instead of RAID10? You get much more IOPS from RAID10.
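    A sketch of carving the RAID10 virtual disk into an LVM-thin pool and registering it in PVE (the device and all names are example values):

    ```shell
    # Hardware RAID exposes one virtual disk, e.g. /dev/sdb
    pvcreate /dev/sdb
    vgcreate vg_splunk /dev/sdb

    # Leave a little free space in the VG for thin-pool metadata
    lvcreate -l 95%FREE --thinpool data vg_splunk

    # Register it as an lvmthin storage in PVE
    pvesm add lvmthin splunk-thin --vgname vg_splunk --thinpool data
    ```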
  12.

    Suggestions for low cost HA production setup in small company

    Ceph is the only real HA storage with best-in-class support in Proxmox, so I would always choose that.
  13.

    Ceph DB/WAL on SSD

    I'm working with 5 nodes and an EC pool, so I would say at least 5, but maybe it doesn't make that much sense.
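    For reference, a 5-node-friendly EC profile can be created with plain Ceph commands (the profile and pool names are example values; k=3, m=2 needs at least 5 hosts when the failure domain is host):

    ```shell
    # Define a 3+2 erasure-code profile spread across hosts
    ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host

    # Create an EC pool using that profile
    ceph osd pool create ecpool erasure ec-3-2
    ```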
  14.

    PBS on a non-dedicated LAN workstation

    If your machine is Debian 13, then okay, why not.