Search results

  1. spirit

    Snapshot removal "jams" the VM

    Not yet available; I'm targeting PVE 9.0. (We need some other big internal qemu block-management changes before it can be done.)
  2. spirit

    PVE Cluster with Netapp storage

    pvesm set <storage> --options vers=4.1,nconnect=8
  3. spirit

    PVE 8.4 pvescheduler cfs-lock timeout all cluster node

    10-20 ms between the two rooms? With 20 server nodes, that seems far too high to me for a stable cluster. (And with NICs at 60-80% utilization, even small spikes can easily hurt the cluster.) https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_cluster_network_requirements "The...
  4. spirit

    PVE Cluster with Netapp storage

    Personally, if you use NetApp, you should use NFS instead of iSCSI (you'll get thin provisioning and snapshots).
  5. spirit

    proxmox nvme disk performance very slow

    Currently, it's possible that you are CPU-limited, as one disk in a VM can only use one core for I/O. Since qemu 9.0 it's possible to use multithreading to spread one disk across multiple cores, but it's not yet implemented in PVE.
  6. spirit

    Mellanox ConnectX-6 (100GbE) Performance Issue – Only Reaching ~19Gbps Between Nodes

    How do you test? If you use iperf3, be careful: unless you use a really, really recent version, it's not multithreaded (even with the -P option for multiple streams), so you could be limited by a single CPU core. (Just run top to verify.) iperf2 is multithreaded if you want to compare.
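A minimal sketch of the comparison described above; the IP address and stream/duration values are placeholders, not from the original post:

```shell
# On the receiving node, start an iperf3 server:
iperf3 -s

# On the sending node: -P opens parallel streams, but in older iperf3
# versions all streams share one thread, so a single core can cap throughput.
iperf3 -c 192.0.2.10 -P 8 -t 30

# Watch per-core usage during the run; one core pinned near 100%
# suggests you are CPU-limited rather than NIC-limited.
top -1

# iperf2 runs one thread per stream, for comparison:
iperf -s                         # on the server
iperf -c 192.0.2.10 -P 8 -t 30   # on the client
```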
  7. spirit

    Ceph reading and writing performance problems, fast reading and slow writing

    Hi, it's not yet implemented in Proxmox, but see https://docs.ceph.com/en/pacific/rbd/rbd-persistent-write-back-cache/ It's a local cache on the node where the VM is running.
  8. spirit

    Fibre Channel (FC-SAN) support

    Yes, snapshots on SAN are coming (with qcow2 on top of LVM, like oVirt does). I'm working on it and hope to finish it for PVE 9.
  9. spirit

    ZFS rpool incredible slow on Mirrored NVMEs

    Don't use ZFS or Ceph on consumer SSD/NVMe without PLP (power-loss protection); performance will always be bad.
  10. spirit

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    For this specific case, I think it's normal to get random slow-ops errors, as your PGs and replicas can sit on storage of different speeds. (A primary write on a fast SSD will always wait for its replica on a slow HDD.) And for reads, it's really Russian roulette. (Personally, I'd do 2...
  11. spirit

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    You have mixed SSDs and HDDs in the same pool???
  12. spirit

    VLAN-Aware VNet on a EVPN Zone

    OK, that makes sense. I'll see if we could implement it.
  13. spirit

    Proxmox VE 8.4.1 Windows IOPS/Performance Slow

    Maybe you are CPU-limited in the VM (currently one disk can only use one core with iothread). BTW, cache=writeback doesn't make much sense with local NVMe; it can even slow results down. Better to use cache=none.
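As a sketch, the cache setting can be changed with `qm set`; the VMID, storage name, and disk name below are hypothetical and must match your own VM config:

```shell
# Switch the disk to cache=none and keep the dedicated iothread enabled:
qm set 100 --scsi0 local-nvme:vm-100-disk-0,cache=none,iothread=1

# Verify the resulting disk line:
qm config 100 | grep scsi0
```

The change takes effect after the VM is fully stopped and started again.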
  14. spirit

    VLAN-Aware VNet on a EVPN Zone

    The main reason is that the current EVPN implementation uses a distributed anycast gateway (doing L3 routing) on each node, but that won't work if you use VLANs behind it. (Or do you plan to use EVPN to transport L2 only, without any routing?)
  15. spirit

    [SOLVED] Migrating Rocky Linux 9 from Hyper-V to Proxmox fails

    https://pve.proxmox.com/wiki/Serial_Terminal#Add_a_virtual_serial_port_to_the_VM (then you can use the alternative xterm console, where copy/paste works). But you won't get a display until you configure the guest accordingly.
  16. spirit

    [SOLVED] Migrating Rocky Linux 9 from Hyper-V to Proxmox fails

    The xterm console allows copy/paste, but you first need to redirect output to the serial port. (Maybe copy/paste could work blindly; not sure.)
  17. spirit

    After Renaming Node Everything is Gone or Won't Start

    You should first restart the pve-cluster service to mount the /etc/pve directory. (It uses /var/lib/pve-cluster/config.db as its backend and exposes it as a filesystem at /etc/pve.)
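A minimal sketch of the recovery step described above:

```shell
# Restart the cluster filesystem service so /etc/pve gets mounted again:
systemctl restart pve-cluster

# Check that the fuse filesystem is back:
mount | grep /etc/pve

# The backing database it exposes lives here:
ls -l /var/lib/pve-cluster/config.db
```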
  18. spirit

    Maintenance on a large part of cluster

    mv /etc/pve/ha/resources.cfg /tmp/ ;) then move the file back afterwards. Note that you also need to close the watchdog to avoid fencing; currently the only way is to 1) stop pve-ha-lrm on all nodes, node by node, 2) stop pve-ha-crm on all nodes, node by node, then do the reverse when...
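The sequence above could be sketched as follows; the node names are placeholders, and this assumes passwordless SSH between cluster nodes:

```shell
# 1) Stop the local resource manager on every node, one node at a time:
for node in node1 node2 node3; do
    ssh "$node" systemctl stop pve-ha-lrm
done

# 2) Then stop the cluster resource manager on every node, one at a time:
for node in node1 node2 node3; do
    ssh "$node" systemctl stop pve-ha-crm
done

# After maintenance, start the services again in reverse order
# (pve-ha-crm first, then pve-ha-lrm).
```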
  19. spirit

    [SOLVED] Migrating Rocky Linux 9 from Hyper-V to Proxmox fails

    I have seen similar problems with migrations from VMware; it seems Rocky doesn't include virtio drivers in the initramfs by default when it was first deployed on a different hypervisor. Maybe try to boot with IDE, then sudo dracut -f --regenerate-all --add-drivers "virtio_blk virtio_scsi virtio_net"...
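A sketch of the rebuild step inside the guest, once it has booted from an IDE disk:

```shell
# Regenerate every initramfs and force-include the virtio drivers:
sudo dracut -f --regenerate-all --add-drivers "virtio_blk virtio_scsi virtio_net"

# Confirm the drivers made it into the current initramfs:
sudo lsinitrd /boot/initramfs-$(uname -r).img | grep virtio
```

After that, the disk can be switched back to virtio-blk or virtio-scsi and the VM rebooted.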
  20. spirit

    Maintenance on a large part of cluster

    Shouldn't be a problem, but if you use HA, you really need to disable it first.