I restored some VMs from older backups and that "solved" the issue, as they're now booting up again.
I'm still curious how exactly the upgrade (or whatever else happened at that time) broke the VM disks, because the VMs were working 100% before I powered them off for the upgrade. Hopefully it...
Yeah, I think I have an older backup; I could try that tomorrow when I'm back in the office.
About the command: no, I just changed the filename (?) to vmdisk 106-0
But it's still weird, it's like the new version of QEMU isn't compatible with the "old" disk format or something
Yep, I had to change your command slightly, but I got the vmdisk 106:
root@top:~# lsof /dev/mapper/PLS-vm--106--disk--0
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
fdisk 281530 root 3u BLK 253,8 0t2098176 604 /dev/mapper/../dm-8
No, there weren't any changes to the storage...
What's also strange: a friend has the exact same issue with PVE Community Edition and local storage after a Proxmox 6 --> 7 in-place upgrade. So my setup here is the second one I've seen with this behavior.
root@top:~# lvs
LV VG Attr...
Weirdly enough, the one VM I used all the time (VMID 100) does not show up here (see dev-mapper.txt).
But when I tried to list the partitions for VMID 106 (an Ubuntu VM with the same symptoms as described throughout this thread, just on another node I also upgraded from 6 --> 7), I get the output...
I live-booted into a Debian shell from the netinstall ISO, and according to fdisk (sgdisk wasn't on board), there are no recognized partitions on this disk.
Should I try again with the "proper" Debian live image and sgdisk?
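For anyone following along, this is roughly how I'd inspect the disk from the live environment. The device path is the one from the lsof output earlier in the thread; substitute your own mapped LV. The 55 aa signature check at the end is a generic sanity test I'd add, not something suggested earlier:

```shell
# Placeholder device path from this thread — substitute your own mapped LV.
DISK=/dev/mapper/PLS-vm--106--disk--0

# List whatever partition table fdisk recognizes (MBR or GPT):
fdisk -l "$DISK"

# sgdisk (package: gdisk) prints more detail for GPT disks:
sgdisk -p "$DISK"

# Quick sanity check: sector 0 of a valid MBR (or the protective MBR on a
# GPT disk) ends with the boot signature bytes 55 aa.
dd if="$DISK" bs=512 count=1 2>/dev/null | tail -c 2 | od -An -tx1
```

If even the 55 aa signature is gone, the first sector was overwritten or the mapping points at the wrong extents, which would explain fdisk seeing no partitions at all.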
Update: after some external storage and network tweaking I could finally make a backup. Restoring a new VM disk from this backup resulted in another "no bootable Device found" message in the Proxmox BIOS on VM startup.
Good morning again,
I started the backup of the VM to the PBS yesterday, and for some reason the read speed is at most 4 MB/s, mostly around a few hundred KB/s. The storage is an externally iSCSI-attached HP MSA 2040 disk shelf, but all other disk-intensive tasks like installing a new VM...
Yes, I was using jumbo frames, and yes, using normal frames resolved the issue :D
The switch probably was not properly configured and causing the issue
Thanks a lot for the support, I'm marking this thread as solved.
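In case someone hits the same slow-backup symptom: a quick way to check whether jumbo frames actually survive the whole path (NICs and every switch port in between) is a don't-fragment ping at full MTU. The target address is a placeholder for your PBS:

```shell
# Placeholder target — substitute your PBS address.
TARGET=10.0.0.2

MTU=9000
# ICMP payload = MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header.
PAYLOAD=$((MTU - 28))

# -M do sets the don't-fragment bit, so this ping only succeeds if every
# hop on the path really forwards 9000-byte frames:
ping -M do -s "$PAYLOAD" -c 3 "$TARGET"
```

If that ping fails while a normal ping works, some hop (like my misconfigured switch) is silently dropping the jumbo frames.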
Just checked all of that:
- iptables was not installed in the first place on the PBS, therefore iptables-save did not yield any output
- The cluster and the PBS are only connected via a Juniper switch and 10G DAC cables; the traffic is contained in a VLAN on this switch. There's also definitely no...
I migrated this VM to the one node that is able to connect to the PBS and am running a backup now. It's excruciatingly slow, so maybe this one has issues as well :( I'll wait until it either finishes or fails.
Good morning,
I installed a new Debian VM yesterday and the install works fine, just like booting said install. So no problem there.
However, adding ",aio=threads" to the VM config did not help.
Attached is the VM config in case I did something wrong, and also a screenshot of the boot screen...
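For reference, this is how I applied the option. The VMID, storage name, and disk slot here are placeholders taken from my setup; match them to your own config before copying anything:

```shell
# Placeholder VMID/storage/disk names — adjust to your own VM.
qm set 106 --scsi0 PLS:vm-106-disk-0,aio=threads

# The resulting line in /etc/pve/qemu-server/106.conf looks like:
#   scsi0: PLS:vm-106-disk-0,aio=threads
```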
Yes, I know; the ssh test was someone's recommendation to check connectivity in the first place. All nodes can ping each other and the PBS, but the two nodes that cannot connect to the PBS cannot ssh into it either. The node that works properly can also establish an ssh connection to...
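Beyond ping and ssh, it may also be worth probing the PBS API port (TCP 8007) directly from each node. A sketch using bash's built-in /dev/tcp, so no extra tools are needed; the hostname is a placeholder:

```shell
# Placeholder hostname — substitute your PBS address.
PBS_HOST=pbs.example.lan

# PBS serves its API and backup traffic on TCP 8007.
if timeout 3 bash -c "exec 3<>/dev/tcp/$PBS_HOST/8007"; then
    echo "port 8007 reachable"
else
    echo "port 8007 NOT reachable"
fi
```

If ping works but 8007 is unreachable from exactly the failing nodes, that would point at a firewall or routing rule rather than the PBS itself.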
Yes, a newly created VM does work and boots properly.
I attached the config of the new vm (new-vm-cfg.txt) and one of a not booting VM (broken-vm-cfg.txt).
The storage config is in the file (middle-storage-cfg.txt).