Search results

  1. Proxmox NFS setup

    Yes, "independent" means that one connection cannot saturate the whole network.
  2. Proxmox NFS setup

    Create two independent networks for SAN/NFS, one for data and one for backups. That is at least a start.
  3. corosync shows link flapping (down/up) about every 3-4 minutes, but the switch shows no problem

    Did you try the 7.0 kernel? sudo modprobe tcp_bbr; sysctl net.ipv4.tcp_congestion_control. On 7.0 I get: net.ipv4.tcp_available_congestion_control = reno cubic bbr
  4. Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    First! Just kidding. Tested with one physical node at the moment; works okay. Linux SPS2 7.0.0-1-rc6-pve #1 SMP PREEMPT_DYNAMIC PMX 7.0.0-1~rc6+1 (2026-03-30T09:17Z) x86_64 GNU/Linux
  5. Storage types and replication, NFS and local ZFS in a cluster

    Ceph in Proxmox is the endgame, but as they said, if you really don't need that type of HA (or even real HA), you could go with storage replication. It works well, manual failover is fairly quick, and with backups you're good.
  6. Applying pve-qemu-kvm 10.2.1-1 may cause extremely high "I/O Delay" and extremely high "I/O pressure stalls" (patches in the test repository)

    I cannot imagine a manager who says "let's use the test repo" and then blames everyone else for that decision.
  7. Quorum question - 5 Nodes over 2 data centers

    Yes, implementing a third DC with a quorum device carrying, say, 3 votes can be effective.
  8. Recommended hardware for modest upgrade of 3 PVE nodes

    For a Ceph homelab you only need 2.5G, 5G, or at most 10G networking. So only budget for that; the cores and everything else aren't that important for a homelab.
  9. Can I specify PBS namespace in backup jobs?

    Well, if you store that much NetFlow data (usually multi-TB), then it makes sense that the DB is spread across multiple VMs, and then maybe you can create a different storage on the same PBS with a different namespace. At least that is how I do it with NFA.
  10. ZFS: Need help with vdevs & pools across multiple HDDs with different capacity

    I would usually recommend something like mergerfs or bcachefs in this case, certainly not ZFS.
  11. cross cluster migration with snapshots

    Yes, it basically copies the state of the CT or VM onto a different ZFS pool; the docs are really good and simple.
  12. Open-source Cloud Management Platform for Proxmox

    You probably shouldn't use the Proxmox logo, since it is a trademark.
  13. Switch from Host to x86-64-v3

    If you need something like After Effects inside a VM, you can use max as the CPU type.
  14. Problem Windows Server 2022 + RDS + FSLogix

    For my customers we are using an RDS farm + FSLogix on Ceph, to be honest, not on local disks. Try changing the CPU type to something other than host. Also maybe post screenshots of the load and I/O of the VM; maybe you have something like an I/O stall?
  15. Moving VMs with snapshots

    I use those nested snapshots for some testing (testing different versions of an app), and I haven't found a reliable way to migrate or back up those snapshots.
  16. ZFS mirror on 2x Crucial T705 (PCIe 5.0) causing txg_sync hangs under write load – no NVMe errors in dmesg

    Yeah, I have two of these NVMes; depending on the case and ventilation, they can go up to 90°C and restart.
  17. Storage for small clusters, any good solutions?

    Usually I don't agree with that, because I've had customers on 3-node Ceph/Proxmox for more than 3-4 years without a hiccup (not counting power errors, etc.). And nowadays, with Proxmox everything is batteries included; the only thing an admin needs to know is a little bit of Linux, a bit of...
  18. Network Optimization for High-Volume UDP Traffic in PVE

    Usually I ask for netstat -sanu output to see what can be optimized in the kernel and system.
  19. Storage for small clusters, any good solutions?

    I don't agree that the Ceph learning curve is steep. Read the docs, get the right network (of everything, I would say the network is the primary thing), and start working with it. There are rarely problems, if we leave out physical-layer issues.
  20. High latency on proxmox ceph cluster

    Ooh, consumer Crucial drives; not good, not good.
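The quorum-device idea in result 7 would look roughly like this in /etc/corosync/corosync.conf; the IP is hypothetical, and note that only the lms algorithm lets a qdevice carry more than one vote (ffsplit requires exactly one). On PVE the usual shortcut for the single-vote case is `pvecm qdevice setup <qnetd-host-ip>`.

```
quorum {
  provider: corosync_votequorum
  device {
    model: net
    votes: 3
    net {
      host: 10.9.9.9     # corosync-qnetd daemon in the third DC (hypothetical IP)
      algorithm: lms     # lms allows votes > 1; ffsplit would force votes: 1
      tls: on
    }
  }
}
```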
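Result 9's "different storage on the same PBS with a different namespace" maps to a second storage entry in /etc/pve/storage.cfg; the server, datastore, and namespace values below are hypothetical:

```
pbs: pbs-netflow
    server 10.0.0.50
    datastore main
    namespace netflow
    username backup@pbs
    content backup
```

Backup jobs can then target pbs-netflow, so the NetFlow VMs land in their own namespace on the same datastore.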
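The congestion-control commands quoted in result 3 can be sketched end to end on a PVE node; the module and sysctl names are standard Linux, while the persistence file name is an assumption (run as root):

```shell
# Load the BBR module (mainline kernels >= 4.9 ship tcp_bbr).
modprobe tcp_bbr

# List the algorithms the running kernel offers, and the one currently in use.
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control

# Switch the running system to BBR...
sysctl -w net.ipv4.tcp_congestion_control=bbr

# ...and persist it across reboots (file name is an assumption).
echo 'net.ipv4.tcp_congestion_control = bbr' > /etc/sysctl.d/90-bbr.conf
```

These are root-only host changes, so they are shown as an administrative fragment rather than a runnable script.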
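Cross-cluster moves like the one in result 11 go through `qm remote-migrate` (still marked experimental in PVE); the VMIDs, endpoint, API token, and fingerprint below are placeholders:

```shell
# Push VM 100 to another cluster, keeping the same VMID on the target.
# Token secret and TLS fingerprint are placeholders, not real values.
qm remote-migrate 100 100 \
  'host=10.1.1.10,apitoken=PVEAPIToken=root@pam!mig=SECRET,fingerprint=AA:BB:CC' \
  --target-bridge vmbr0 \
  --target-storage tank \
  --online
```

This needs a working API token on the target cluster, so it is a sketch of the call shape rather than a paste-ready command.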
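The thermal theory in result 16 is easy to check with nvme-cli; the device paths are whatever your mirror members enumerate as:

```shell
# Controller temperature and thermal-warning counters for each mirror member.
nvme smart-log /dev/nvme0 | grep -iE 'temperature|warning'
nvme smart-log /dev/nvme1 | grep -iE 'temperature|warning'
```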
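On the UDP question in result 18: the most telling line in `netstat -sanu` output is the receive-buffer error counter, and the usual knobs are the core socket-buffer sysctls. The values below are illustrative assumptions, not recommendations:

```shell
# UDP protocol counters; a climbing 'receive buffer errors' line means the
# application is not draining its sockets fast enough and packets are dropped.
netstat -sanu

# Raise the ceiling and default for socket receive buffers (values illustrative).
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.rmem_default=26214400

# Allow a longer per-device backlog before the stack starts dropping.
sysctl -w net.core.netdev_max_backlog=5000
```

As with the BBR fragment above, these are root-only host changes shown for orientation.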