Search results

  1. High latency on proxmox ceph cluster

    Ooh, consumer Crucial drives, not good, not good.
  2. enable CONFIG_RUST for Proxmox kernel

    But for now bcachefs works great with the 6.17 kernel.
  3. Asymmetric cluster with Ceph; best practice for quorum

    Why are you putting those two nodes in the same cluster as the ones before? Why not, just for starters, create them as separate clusters and work with that?
  4. Shared Remote ZFS Storage

    Why should you have external help? When an external audit comes (Grant Thornton, KPMG, etc.), the first thing they ask is "what if you get hit by a bus", i.e. who will support this and that. That is why you need to have external maintenance contracts.
  5. [SOLVED] Clarification request – SAN FC shared storage, Proxmox snapshots and Veeam backup compatibility

    Ceph replication, ZFS replication. Or just backup copying onto a DR site. What is your RTO/RPO?
  6. Simple concept for manual, short term use failover without significant downtime?

    You mention failover, and in that regard it makes sense to use Proxmox <> Proxmox pve-zsync replication. Both Proxmox nodes have ZFS storage for VMs and CTs, and you set the sync up on one machine. Failover then means moving the .conf files and starting the machines on the replicated node (see the sketch below).
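
    A minimal sketch of that flow, assuming VMID 100 and a target node reachable as node2 with a pool named rpool (all names are placeholders; the disk paths in the copied config may need adjusting to the replicated dataset):

      # on the source node: create a recurring pve-zsync job for VM 100
      pve-zsync create --source 100 --dest node2:rpool/replica \
          --name vm100sync --maxsnap 7 --verbose

      # manual failover: copy the guest config to the target node, then start it
      scp /etc/pve/qemu-server/100.conf root@node2:/etc/pve/qemu-server/
      ssh root@node2 qm start 100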
  7. ZFS 2.4.0 Special Device + Data VDEV Report Very Wrong Remaining Space

    What is the use case for using ZFS and disks in this way? This is not a recommended layout; if I had only two disks like this, I would use bcachefs.
  8. Simple concept for manual, short term use failover without significant downtime?

    You are literally describing snapshot replication, by ZFS or any other FS. So just set up pve-zsync to another machine and it's done.
  9. Shared Remote ZFS Storage

    Storage vendors can support anything they want, because you pay them to support it. But if you open a ticket about mdadm corruption, will they fix it?
  10. 3 node cluster zfs & replication

    1. Yes, it is, because you don't have shared storage for all your nodes, which is usually Ceph. 2. There are some aftermarket scripts to replicate a whole pool, but you usually want to replicate different machines on different schedules, e.g. DBs every 1-5 min, app servers every 15 min, etc. (see the sketch below). If you...
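
    A minimal sketch of per-guest schedules using the built-in pvesr replication jobs (requires a PVE cluster with ZFS on both nodes; VMIDs 100/101 and the node name node2 are placeholders):

      # replicate the database VM every 5 minutes
      pvesr create-local-job 100-0 node2 --schedule "*/5"

      # replicate an app server every 15 minutes (the default)
      pvesr create-local-job 101-0 node2 --schedule "*/15"

      # check job state and last sync times
      pvesr status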
  11. Low perf IO VM Windows on Micron 7400

    How are you testing performance inside Windows, and what is your Windows VM configuration, i.e. which CPU type is it?
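
    For reference, a quick way to check and change that on the PVE host (VMID 100 is a placeholder; "host" passes the physical CPU through and usually performs best if you don't need migration between differing CPUs):

      qm config 100 | grep -i cpu   # show the current CPU type
      qm set 100 --cpu host         # switch the VM to the host CPU type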
  12. PVE 9.1.5 break NFS Mount Points to LXC (jellyfin)?

    https://forum.proxmox.com/threads/proxmox-9-1-5-breaks-lxc-mount-points.180161/
  13. "Hybrid" pool understanding

    Just add the test repo; ZFS 2.4 is there (see the sketch below).
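
    A minimal sketch of enabling that, assuming PVE 9 on Debian trixie (verify the suite name against your release before adding it):

      # add the pve-test repository and upgrade to pull in ZFS 2.4
      echo "deb http://download.proxmox.com/debian/pve trixie pve-test" \
          > /etc/apt/sources.list.d/pve-test.list
      apt update && apt full-upgrade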
  14. Call for Evidence: Share your Proxmox experience on European Open Digital Ecosystems initiative (deadline: 3 February 2026)

    Even though I am in Europe, not in the EU, this is a great initiative. And I agree on Proxmox's influence on virtualization services.
  15. Proxmox Virtual Environment 9.1 available!

    ZFS 2.4 is in the test repo at the moment, and it is working well on >5 machines so far.
  16. Review appreciated/best practice: new pve environment with ceph

    My recommendation is to use physical links for corosync, not VLAN ones. As for the other things, this is okay; all links should be active-passive in Proxmox, so that you don't care if something dies, and that is it (see the sketch below). Ceph will work great in that case; those SSDs work really great in practice.
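
    A minimal sketch of such an active-passive link in /etc/network/interfaces, using the ifupdown2 syntax PVE ships (interface names and the address are placeholders):

      auto bond0
      iface bond0 inet static
          address 10.10.10.1/24
          bond-slaves eno1 eno2
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100
      # dedicate a physical pair like this to corosync, separate from VM/storage traffic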
  17. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Then okay, give them one disk (Ceph) for the OS and one for the VMs, and that is that. No RAID inside and other such nonsense.