Hello,
Today we decided to upgrade our Proxmox from version 7 to version 8.
After the upgrade the server hangs, showing the messages in the attached screenshot.
We have tried starting in recovery mode, but the server still gets stuck at the same step and does not continue.
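In case it helps others hitting the same hang, here is a sketch of the usual first checks, assuming you can still get a shell by booting the previous kernel from GRUB's "Advanced options" menu (standard commands; output will differ per system):

pve7to8 --full            # re-run the 7-to-8 checklist script to spot leftover problems
journalctl -b -1 -p err   # errors logged during the previous (stuck) boot
apt -f install            # finish a possibly interrupted upgrade
dpkg --configure -a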
Hello,
This is our installation:
- 3 Supermicro servers (https://www.supermicro.com/en/products/chassis/2U/825/SC825TQ-R720LPB)
- Each server has two 4 TB Western Digital Red HDDs.
- We use Ceph, for which we have a 10G fiber-optic network with a Cisco switch.
We are new to Proxmox/Ceph...
Hi,
We are running Proxmox 6.4-15 and today we are working on upgrading it to 7.x. We have 3 servers with Ceph.
First we upgraded Ceph to Octopus, and now we have noticed that one Windows 10 VM is not starting. This is the VM config:
agent: 1
balloon: 8192
boot: cdn
bootdisk: virtio0
cores: 4...
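Not from the original post, but a minimal sketch of how to gather more detail here, assuming the VM ID is 100 (a placeholder):

qm config 100        # dump the full VM configuration
qm start 100         # starting from the CLI prints the full error message
ceph -s              # confirm the cluster is healthy after the Octopus upgrade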
I found the problem! The server is an HP MicroServer Gen8, and I had forgotten that I have a USB hard drive carrying GRUB so the machine can boot from an SSD. I restored it and now it is working fine.
➜ ~ pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8...
and here is the output from find /boot; it is longer than 1,500 lines, so I put it in a pastebin: https://pastebin.com/aPKsi5mR
This is the /boot/grub/grub.cfg => https://pastebin.com/Y3N2CzZc
➜ ~ proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.
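That error just means proxmox-boot-tool is not managing the boot setup on this host, which is normal for a plain legacy-BIOS GRUB install. A small sketch to confirm how the machine boots (standard commands, nothing Proxmox-specific, no assumptions about your disks):

[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE,MOUNTPOINT   # look for (or rule out) an EFI system partition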
here it is
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo...
I executed
➜ ~ efibootmgr -v
EFI variables are not supported on this system.
So if I understand correctly, this means the system boots in legacy BIOS mode and GRUB is being used. Now, how can I fix my problem?
I tried update-grub and rebooted, but it is still not working.
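update-grub only regenerates the config file, so a common next step is reinstalling the GRUB boot code on the disk the BIOS actually boots from. A hedged sketch, with /dev/sdX as a placeholder for that disk:

grub-install /dev/sdX     # rewrite the legacy-BIOS boot code (MBR + core image)
update-grub               # regenerate /boot/grub/grub.cfg
# then reboot and check which device the BIOS picks first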
Hi,
I just upgraded my Proxmox 6 to 7.1 without errors, but now none of my VMs will start. I see "Booting from Hard Disk...." and they stay stuck there.
I also created a brand-new VM (next, next, next) with an Ubuntu ISO and the same thing happens. This is the VM config:
boot: order=scsi0;ide2;net0
cores: 2
ide2...
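A hedged debugging sketch, again with VM ID 100 as a placeholder; qm showcmd prints the full KVM command line Proxmox generates, which makes it easy to diff a working VM against a stuck one:

qm config 100              # full configuration of the stuck VM
qm showcmd 100 --pretty    # the exact kvm invocation Proxmox would run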
Hi,
This is our setup:
3 servers with Proxmox 6.3.3 in HA with Ceph. For Corosync and Ceph we have a dedicated VLAN and a 10G switch.
All our VM disks are on the shared Ceph storage.
The problem is that some VMs suddenly detect filesystem errors and mount their filesystems read-only. We then have to fsck the disks manually...
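Read-only remounts inside guests usually point to I/O stalls in the storage underneath, so a reasonable sketch of a starting point is checking Ceph itself (standard Ceph commands, nothing specific to this setup):

ceph -s             # overall state; look for slow ops and degraded PGs
ceph health detail  # the reasons behind any HEALTH_WARN / HEALTH_ERR
ceph osd perf       # per-OSD commit/apply latency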