Search results

  1.

    pvesh get /nodes/{node}/qemu & qm list very slow since Proxmox 9

    I updated the problematic hosts from 9 to 9.1, rebooted, and the problem went away.
  2.

    pvesh get /nodes/{node}/qemu & qm list very slow since Proxmox 9

    Is it possible that this will be fixed in the future? We have a cluster of 2,000+ machines, and reports are now running very slowly (pvesh). qm list is taking a very long time to run on some servers (see the timing sketch after this list).
  3.

    zfs pool problem?

    I solved the problem. 1- Found the disk that wasn't in use in our pool using ls -n /dev/disk/by-id/ | grep nvme-eui 2- Took the disk offline using zpool offline rpool 4507037677091464003 3- Tried to add it back to the pool as is, but it wouldn't let me, complaining that the disk was already in the pool. 4- Remove...
  4.

    zfs pool problem?

    After comparing the unique disk IDs, I found that /nvme0n1p3 was missing: nvme-eui.34595030529002970025384700000002-part3 -> ../../nvme0n1p3 I ran a command and got a suspicious error: zpool replace rpool 4507037677091464003 /dev/disk/by-id/nvme-eui.34595030529002970025384700000002-part3... (a replacement sketch follows this list)
  5.

    zfs pool problem?

    Hi, I booted from a live CD and imported rpool to perform recovery operations on an already mounted partition. After that, when I booted into PVE, one of my drives crashed. Is the drive OK? How do I correctly re-enable it in rpool and add an ID name like the other drives? After reading...
  6.

    Problems after upgrading the cluster from 8 to 9

    Thanks, everything was resolved by updating to pve-manager/9.0.10; my nerves are saved :)
  7.

    Problems after upgrading the cluster from 8 to 9

    I'll try to update tomorrow and I'll definitely report back on the results.
  8.

    Problems after upgrading the cluster from 8 to 9

    pveversion pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.11-1-pve) journalctl -f Oct 02 12:21:31 cloud-p001 pveproxy[2091468]: proxy detected vanished client connection Oct 02 12:21:31 cloud-p001 pveproxy[2091468]: proxy detected vanished client connection Oct 02 12:21:31 cloud-p001...
  9.

    Problems after upgrading the cluster from 8 to 9

    Hi. After upgrading the entire cluster (most of the machines haven't rebooted yet and are running the old 6.5 kernel), we encountered poor web interface performance. Everything started opening and running very slowly, and it was throwing "loading" errors and "broken pipe 596" errors. I can't find the...
  10.

    proxying requests for SPICE connection to vm?

    I solved my main problem with proxying to the Proxmox web interface by using haproxy
  11.

    proxying requests for SPICE connection to vm?

    I have a working proxy: https://proxyprox-v001 -> https://cloud-v001:8006. I can reach the web interface of the desired server, but I need to proxy the connection to the VM via the SPICE protocol: cv4pve-pepper.exe" --host=proxyprox-v001 --username "$UserName@ldap_cloudp" --password $Password --vmid NAME-vm...
  12.

    Proxmox Offline Mirror released!

    I completely forgot about this! thanks!
  13.

    Proxmox Offline Mirror released!

    I downloaded this file, and it is located in /etc/apt/trusted.gpg.d and /usr/share/keyrings
  14.

    Proxmox Offline Mirror released!

    root@proxmox-mirror:/home/rootuser# proxmox-offline-mirror mirror snapshot create pve_trixie_no_subscription Fetching Release/Release.gpg files -> GET 'http://download.proxmox.com/debian/pve/dists/trixie/Release.gpg'.. -> GET 'http://download.proxmox.com/debian/pve/dists/trixie/Release'...
  15.

    Proxmox Offline Mirror released!

    Hi, what repositories need to be added to update to PVE 9? (see the repository sketch after this list)
  16.

    pveproxy: failed to get address info

    I want to add that I had to set permissions on other folders as well! I'll leave a screenshot of the root folder permissions for anyone who has only one host and no way to quickly get a correct reference list
  17.

    e1000 driver hang

    This problem exists on all kernels and PVE versions; there is also no 100% solution for it
  18.

    Proxmox api "Resources"

    Everything is more complicated )) I never delved into it, but the bar above summed up ALL datastores, including datastores from PBS, and I needed the datastores for VMs. As a result, the requests look like this: total volume of local datastores (lvm-thin and zfspool): storage=$(pvesh get... (a query sketch follows this list)
  19.

    corosync: OOM killed after some days with 100%CPU, cluster nodes isolated ("Quorate")

    Does the cluster fall apart? Do the nodes become unavailable? Also, the fact that you have no VM activity does not mean that you have no activity inside corosync; these are different things. Transport traffic is always there, and it is sensitive to even minimal delays. I say this with confidence because...
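
For results 1-2 (slow pvesh get /nodes/{node}/qemu and qm list), a minimal timing sketch in shell; the node name comes from hostname and the redirects only keep the output out of the way:

    # compare how long the API call and the CLI wrapper take on one node
    time pvesh get /nodes/$(hostname)/qemu --output-format json >/dev/null
    time qm list >/dev/null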
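
For results 3-5 (zfs pool problem), a hedged sketch of the offline/replace flow described in the snippets; rpool, the GUID, and the nvme-eui path are the examples quoted there, and -f is only an assumption for working around the "already in the pool" complaint:

    zpool status rpool                        # find the missing/faulted vdev and its GUID
    ls -l /dev/disk/by-id/ | grep nvme-eui    # map by-id names to /dev/nvmeXnYpZ partitions
    zpool offline rpool 4507037677091464003   # take the dead vdev offline by GUID
    zpool replace -f rpool 4507037677091464003 \
        /dev/disk/by-id/nvme-eui.34595030529002970025384700000002-part3
    zpool status rpool                        # watch the resilver complete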
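
For result 15 (repositories for PVE 9), a hedged example assuming the free pve-no-subscription repository on Debian 13 "trixie"; with proxmox-offline-mirror the same suite and component would be mirrored and the URI would point at your mirror medium instead:

    # add the PVE 9 (trixie) no-subscription repository on an online host
    # (the Debian trixie base, updates and security suites are needed as well)
    echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list
    apt update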
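
For result 18 (Proxmox api "Resources"), a hedged sketch of summing only local VM-capable datastores (lvmthin and zfspool) per node with pvesh and jq; the jq filter is an assumption, not the poster's exact request:

    node=$(hostname)
    # total and used bytes across lvmthin/zfspool storages on this node
    pvesh get /nodes/$node/storage --output-format json \
        | jq '[ .[] | select(.type == "lvmthin" or .type == "zfspool") ]
               | { total: (map(.total) | add), used: (map(.used) | add) }'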
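
For result 19 (corosync OOM / isolated nodes), a hedged set of standard health checks, not the specific diagnostics from that thread:

    pvecm status                            # quorum and membership as PVE sees it
    corosync-cfgtool -s                     # per-link status of the local corosync links
    corosync-quorumtool -s                  # quorum view from corosync itself
    journalctl -u corosync --since "-1h"    # retransmits, token timeouts, membership changes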