Search results

  1.

    Internal error: Reserved memory not enough during snapshotting an offline VM

    Thanks, curiously enough reserved_memory is only 8k! lvmconfig activation/reserved_memory reserved_memory=8192 I've extended it to 64k. Best regards Udo
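    A minimal sketch of the matching config change (assuming the stock /etc/lvm/lvm.conf location; the value is in KiB):

    # /etc/lvm/lvm.conf
    activation {
        # memory reserved by LVM during operations such as snapshot creation (default: 8192 KiB)
        reserved_memory = 65536
    }
    # check the effective value afterwards:
    lvmconfig activation/reserved_memory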
  2.

    Internal error: Reserved memory not enough during snapshotting an offline VM

    Hi, we had an error message during snapshotting a switched-off VM: snapshotting 'drive-scsi0' (ssd-lvm:vm-1117-disk-0) Logical volume "snap_vm-1117-disk-0_xxx" created. Internal error: Reserved memory (31006720) not enough: used 31928320. Increase activation/reserved_memory? pveversion...
  3.

    [SOLVED] after dist-upgrade boot hangs with "EFI stub: Loaded initrd from command line option" and rescue boot doesn't support raidz2?

    Status update: I've tested a lot: new installation on a single disk - no luck; many different BIOS settings - no luck; kernel 6.14 - no luck. But the BIOS shows that the connection to the BMC (IPMI) isn't working - and the LOM isn't reachable. After re-flashing the BMC firmware with socflash...
  4.

    [SOLVED] after dist-upgrade boot hangs with "EFI stub: Loaded initrd from command line option" and rescue boot doesn't support raidz2?

    Hi, after a dist-upgrade to PVE 8.4 with kernel 6.8.12-11 on my private server (ASRock X470D4U with AMD Ryzen 7 2700 CPU) the system doesn't boot anymore. It freezes after "EFI stub: Loaded initrd from command line option" and I can only do a power cycle. Rescue boot doesn't recognize the raidz2...
  5.

    offline storage migration destroys vm-disk for one VM

    Thanks for the answer. But it's strange that the nine working VMs have the same partition numbers, because all were created from the same template… Udo
  6.

    offline storage migration destroys vm-disk for one VM

    Hi, we have a strange effect. On a VM newly migrated to a PVE 8 node, the offline migration of the VM disk from ceph to local-zfs destroys the target disk. The VM won't boot, because the partition table is gone and only the content of /boot is on sda. But we did the same with nine VMs before, and...
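    For reference, a hedged sketch of the move being described (VMID 100 and the disk name are placeholders; on PVE 8 the subcommand is "qm disk move", older releases use "qm move-disk"):

    # offline move of a VM disk from ceph to local-zfs
    qm disk move 100 scsi0 local-zfs
    # sanity check: the disk entry should now point at local-zfs
    qm config 100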
  7.

    Where did 20 TB go?

    It's probably the snapshots?! Best post the output of the following commands: pvs vgs lvs. By the way, I consider RAID-5 with ten 16 TB disks extremely dangerous. The chance that the next disk fails during a rebuild is not that small… Udo
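    The requested commands, for reference (lvs with -a also lists snapshot LVs, whose Data% column shows where the space went):

    pvs     # physical volumes: size and free space per disk
    vgs     # volume group: total vs. free capacity
    lvs -a  # all logical volumes, including snapshots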
  8.

    [SOLVED] Cluster lost quorum

    Hi, why did you shut down all VMs and nodes? You can restart corosync and pve-cluster while the VMs are still running - without downtime. Udo
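    A minimal sketch of that restart sequence (both are standard systemd units on a PVE node; running VMs are not affected):

    systemctl restart corosync
    systemctl restart pve-cluster
    # verify quorum afterwards:
    pvecm status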
  9.

    help with my proxmox, it stays hanging

    Take a look here: https://pve.proxmox.com/wiki/Category:HOWTO There is a section for each version upgrade. And read them carefully! I guess your disks have been running for some time. Perhaps you should get new ones, install a new version, and then import (from backup) your old VMs… If anything doesn't work...
  10.

    [SOLVED] Cluster lost quorum

    BTW, the other nodes can't have the same output: "name": "DS-RR", "version": 17, "nodes": 9, "quorate": 0 } The question is whether on all other nodes only ds4-node02 is offline, and whether all have the same version?
  11.

    help with my proxmox, it stays hanging

    You should always have a backup of your VMs!! Normally VMs should work after a node update without issues. But it's possible that you must change something - it depends on your setup. Udo
  12.

    help with my proxmox, it stays hanging

    But be aware that your VM disks in sum are greater than the space on /dev/pve/data - so an extension would be necessary. And you should update your system! Udo
  13.

    [SOLVED] Cluster lost quorum

    Hi, if all nodes (except ds4-node06) show the same, then you can restart corosync and after that pve-cluster on ds4-node06. And check with "corosync-cfgtool -s" on the nodes. Udo
  14.

    help with my proxmox, it stays hanging

    Hi, your logical volume data: data pve twi-aotzD- 810.75g 100.00 48.49 is full (100%). Because data has only ~811 GB of space, but you have vm-disks of 1.37t + 1.5t and so on… You can add another disk to the volume group and extend /dev/pve/data - then the VMs...
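    A hedged sketch of that extension (/dev/sdX is a placeholder for the newly added disk; pve/data is the default thin pool):

    pvcreate /dev/sdX                 # initialize the new disk for LVM
    vgextend pve /dev/sdX             # add it to the pve volume group
    lvextend -l +100%FREE pve/data    # grow the thin pool into the new space
    # if metadata fills up too, it can be grown with lvextend --poolmetadatasize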
  15.

    [SOLVED] Cluster lost quorum

    Hi, what do "pvecm status" and the content of /etc/pve/.members look like on the other nodes? Udo
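    Both checks, for reference (available on every PVE node):

    pvecm status            # quorum state as corosync sees it
    cat /etc/pve/.members   # pmxcfs view: per-node "online" flags and the cluster "quorate" field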
  16.

    help with my proxmox, it stays hanging

    Oh, a historical version of PVE! Is there a full filesystem? What is the output of: df -h, free, vgs, lvs? Udo
  17.

    [SOLVED] VMs freeze with 100% CPU

    Yes, the kernel at this time is the same for no-subscription and enterprise.
  18.

    [SOLVED] VMs freeze with 100% CPU

    I've marked this thread as solved, since 6.2.16-11-bpo11-pve solved the issue. Hope the cherry-picked fix was not left out of later kernels. Udo
  19.

    [SOLVED] VMs freeze with 100% CPU

    The day before yesterday, the package was in pvetest only. Download and install: wget http://download.proxmox.com/debian/pve/dists/bullseye/pvetest/binary-amd64/pve-kernel-6.2.16-11-bpo11-pve_6.2.16-11~bpo11%2B2_amd64.deb dpkg -i pve-kernel-6.2.16-11-bpo11-pve_6.2.16-11~bpo11+2_amd64.deb Udo
  20.

    After migration from VMware to PVE: SLES12SP4 order of the disks (Linux) unclear

    You can either detach the disk via the GUI and then add it again. Then it is scsi0 (if it is the only disk). Then you must of course select the disk again as the boot device. Or just quickly edit it by hand ( vi /etc/pve/qemu-server/103.conf ) - there may only be one disk...
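    A hedged sketch of what the hand edit amounts to (the storage and size values are placeholders; only the VMID 103 is from the thread):

    # /etc/pve/qemu-server/103.conf
    boot: order=scsi0
    scsi0: local-lvm:vm-103-disk-0,size=32G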