Search results

  1. spirit

    Proxmox fencing

    I can confirm that I tried with a Dell iDRAC some years ago, and I had a lot of bugs with unexpected reboots. I never had a problem with softdog. (The only benefit was having the watchdog log in the iDRAC log.)
  2. spirit

    Duplicate iptables entries after ifreload -a in NAT table with SDN, SNAT enabled

    yes, it's a known bug; SNAT rule management should be improved. (Currently the rules are added as post-up lines in /etc/network/interfaces, so they get applied multiple times.) It would need some kind of service to manage them (maybe in the new Proxmox firewall, for example). You can add a ping in this open issue...
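    Until such a service exists, a guarded post-up line avoids the duplicates; a sketch in /etc/network/interfaces, assuming a hypothetical vmbr0 with a 10.0.0.0/24 subnet NATed out of eno1:

    ```
    auto vmbr0
    iface vmbr0
        address 10.0.0.1/24
        # check with -C first so a later ifreload doesn't append a duplicate rule
        post-up iptables -t nat -C POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE || iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eno1 -j MASQUERADE
    ```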
  3. spirit

    How to recover VM networking when I change /etc/networking/interfaces

    mmm, it's possible that some values are not applied at reload. BTW, "static auto" is invalid; it's either static or auto. ifupdown is able to detect inet|inet6 and static|auto automatically, so it should work if you put accept_ra and autoconf 1 directly in the main vmbr0. You can do: "ifreload -a -d"...
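    A sketch of what the vmbr0 stanza could look like under that advice (the bridge port name is an assumption):

    ```
    auto vmbr0
    iface vmbr0
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # ifupdown2 detects inet/inet6 and static/auto itself
        accept_ra 1
        autoconf 1
    ```

    Then apply with "ifreload -a -d" to get debug output showing which values are actually reloaded.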
  4. spirit

    SDN with evpn seems to work, but need help to understand routing...

    the flow is like: vm (103.204.193.X) ---> vnet (103.204.193.1) --- default route to exit-node ---> proxmoxnode01 (exit-node) --- default gw ---> upstream router; then in the reverse direction: upstream router --- route to 103.204.193.0/24 gw "proxmoxnode01 ip" --->...
  5. spirit

    How to recover VM networking when I change /etc/networking/interfaces

    did you do a reload or a restart of the networking service after the change? (A reload shouldn't break your VMs, but a restart will detach all VM interfaces, as they are not defined in /etc/network/interfaces.)
  6. spirit

    Compatibility Matrix

    yes, sure, you need to expose a big LUN on both sides, configure multipath, and add an LVM shared storage on top. (No snapshots and no thin provisioning yet; it should be available in pve9.) Just be careful: you need a third node to keep quorum (a Proxmox cluster is 3 nodes minimum), it doesn't need to be...
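    The shared-LVM part could be sketched like this, run once on one node after the multipathed LUN is visible on all of them (the device and storage names are assumptions):

    ```
    # multipath device visible on every node, e.g. /dev/mapper/mpatha
    pvcreate /dev/mapper/mpatha
    vgcreate vg_san /dev/mapper/mpatha
    # register it cluster-wide as shared LVM storage
    pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images
    ```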
  7. spirit

    [TUTORIAL] Proxmox VE 7.2 Benchmark: aio native, io_uring, and iothreads

    I don't know what kind of CPU Red Hat is using, but it really scales across multiple threads (around x2~x4 from baseline). Personally, I really want to test it on Ceph RBD, because it's currently really CPU-limited client-side at around 70k IOPS. I think there are still other improvement...
  8. spirit

    [TUTORIAL] Proxmox VE 7.2 Benchmark: aio native, io_uring, and iothreads

    Yes, I know this can increase latency. I need to look at adding an option to pin iothreads to specific CPUs, like for the VM CPU cores. (Ideally iothreads would have dedicated cores and the VM vCPUs other cores, on the same NUMA node as the NVMe drive.) https://vmsplice.net/~stefan/stefanha-kvm-forum-2024.pdf...
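    Until such an option exists, the pinning can be sketched manually with taskset; the core numbers are hypothetical and the TIDs have to be looked up first:

    ```
    # find the iothread's TID (QMP query-iothreads, or look under /proc/<qemu-pid>/task)
    # then pin it to a dedicated core on the NVMe's NUMA node, e.g. core 12:
    taskset -pc 12 <iothread-tid>
    # and keep the vCPU threads on other cores of the same node, e.g. cores 13-15:
    taskset -pc 13-15 <vcpu-tid>
    ```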
  9. spirit

    NetApp & ProxMox VE

    Proxmox doesn't use storage snapshots for backups. (Only VM snapshots really need storage snapshots.)
  10. spirit

    NetApp & ProxMox VE

    I wrote a plugin 10 years ago when I was using NetApp (using FlexClone, snapvol, ...); it was NFS-based, if somebody wants to take some inspiration. (Not sure the NetApp SDK is still OK.) https://github.com/odiso/proxmox-pve-storage-netapp
  11. spirit

    Snapshot removal "jams" the VM

    not yet available, I'm targeting pve9.0. (We need some other big internal QEMU block management changes before it can be done.)
  12. spirit

    PVE Cluster with Netapp storage

    pvesm set <storage> --options vers=4.1,nconnect=8
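    For reference, the resulting entry in /etc/pve/storage.cfg then carries the NFS mount options; a sketch with an assumed storage name, server, and export:

    ```
    nfs: netapp-nfs
        server 192.0.2.10
        export /vol/pve
        content images
        options vers=4.1,nconnect=8
    ```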
  13. spirit

    PVE 8.4 pvescheduler cfs-lock timeout all cluster node

    10-20ms between both rooms? With 20 server nodes, that seems quite high to me for a stable cluster. (And with NICs at around 60-80% utilization, if you have some little spikes, it can easily hurt the cluster.) https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_cluster_network_requirements "The...
  14. spirit

    PVE Cluster with Netapp storage

    Personally, if you use NetApp, you should use NFS instead of iSCSI. (You'll have thin provisioning and snapshots.)
  15. spirit

    proxmox nvme disk performance very slow

    currently, it's possible that you are CPU-limited, as 1 disk in a VM can only use 1 core for input/output. Since QEMU 9.0, it's possible to use multithreading so a disk can use multiple cores, but it's not yet implemented in PVE.
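    For context, the QEMU 9.0 feature in question is iothread-vq-mapping for virtio-blk, which spreads one disk's virtqueues across several iothreads; a hand-written sketch of the relevant command-line fragment (not something PVE generates yet, object ids are made up):

    ```
    -object iothread,id=iot0 \
    -object iothread,id=iot1 \
    -device '{"driver":"virtio-blk-pci","drive":"drive0",
              "iothread-vq-mapping":[{"iothread":"iot0"},{"iothread":"iot1"}]}'
    ```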
  16. spirit

    Mellanox ConnectX-6 (100GbE) Performance Issue – Only Reaching ~19Gbps Between Nodes

    how do you test? If you use iperf3, be careful that it's not multithreaded (even if you use the -P option for multiple streams) unless you use a really, really recent version of iperf3, so you could be CPU/core-limited. (Just run top to verify.) iperf2 is multithreaded if you want to compare.
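    A quick way to compare (the server address is hypothetical): watch top while each runs; a single iperf3 thread pegged at 100% means the test tool, not the NIC, is the bottleneck.

    ```
    # iperf3 before 3.16: all -P streams still share one thread/core
    iperf3 -c 10.0.0.2 -P 8
    # iperf2: one thread per stream
    iperf -c 10.0.0.2 -P 8
    ```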
  17. spirit

    Ceph reading and writing performance problems, fast reading and slow writing

    Hi, not yet implemented in Proxmox, but see https://docs.ceph.com/en/pacific/rbd/rbd-persistent-write-back-cache/ . It's a local cache on the node where the VM is running.
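    From the linked docs, enabling it outside Proxmox is a client-side ceph.conf setting; a sketch of the SSD-mode options (the cache path and size are assumptions):

    ```
    [client]
        rbd_plugins = pwl_cache
        rbd_persistent_cache_mode = ssd
        rbd_persistent_cache_path = /mnt/nvme/rbd-pwl
        rbd_persistent_cache_size = 1073741824
    ```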
  18. spirit

    Fibre Channel (FC-SAN) support

    yes, snapshots on SAN are coming (with qcow2 on top of LVM, like oVirt indeed). I'm working on it; I hope to finish it for pve9.