“Primary GPT table is corrupt” error on (some) restored containers

TheForumTroll

Hello experts :)

I just upgraded to a new server and I'm currently restoring containers. Some of them have GPT corruption warnings.

Example:

Bash:
Disk /dev/mapper/pve-vm--100--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Bash:
Disk /dev/mapper/pve-vm--103--disk--0: 6 GiB, 6442450944 bytes, 12582912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 524288 bytes
Disklabel type: gpt
Disk identifier: A3C0754C-B54B-4A9E-B51D-5668C0617CFB
The backup GPT table is corrupt, but the primary appears OK, so that will be used.


The physical disks do not have any errors. What could cause this?
 
do the original containers still exist? (to have a look there)

did you try checking filesystems?

what does gdisk -l /dev/path tell?
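
For reference, that check might look roughly like this (the device-mapper path is taken from the fdisk output above - adjust it to the affected container):

Bash:
# read-only listing of the partition table(s); nothing is written to the volume
gdisk -l /dev/mapper/pve-vm--100--disk--0
# the "Partition table scan" section of the output says whether gdisk found a
# usable primary GPT, a usable backup GPT, or no GPT at all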
 
I'm sorry, I cut out too much of the code for it to make sense. Yes, it is indeed fdisk -l that gives this warning. No, the original server with the containers is no longer around, and no, there are no errors on the actual disks.

did you try checking filesystems?

On the physical disks, yes; on the CTs themselves, no.
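
In case it helps anyone reading along, one way to check the filesystems inside the CTs themselves (a rough sketch - the VM IDs are the ones from the outputs above, and the containers need to be stopped first):

Bash:
# run a filesystem check on the container's volumes through the Proxmox tooling
pct stop 100
pct fsck 100
pct start 100
# or, while the CT is stopped, check the ext4 filesystem on the volume directly;
# this only makes sense if the filesystem sits directly on the volume (no partition table)
e2fsck -f /dev/mapper/pve-vm--100--disk--0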


So, I just tried restoring the containers over again, and this time around they had no errors. Before that, I had used gdisk and let it "fix" them, which was a bad idea: Proxmox then showed the boot size on the containers as "64 ZiB" and both CTs were dead. I'm no expert, but I guess that was a bad move.

Anyway, there are no errors now, which somehow worries me even more: a bad backup from an old server is one thing, but a seemingly good backup that restores badly some of the time is a headache. But... is the warning actually something to worry about? The containers seem to run just fine and I can't find any warnings with any other tools.
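
For reference, repeating such a restore from the CLI looks roughly like this (the archive name and storage are only placeholders - the same thing can be done through the web UI):

Bash:
# restore the CT from a vzdump archive onto the existing VM ID, overwriting the broken one
pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2024_01_01-00_00_00.tar.zst --storage local-lvm --force 1
# afterwards, check whether the restored volume still triggers the GPT warning
fdisk -l /dev/mapper/pve-vm--100--disk--0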
 
err - since when do container volumes have a partition table?

can't try lvm to have a look, as i only have zfs, but when i create a raw container disk, it does not get a partition table

root@s740:/rpool/data/images/131# fdisk -l vm-131-disk-0.raw
Disk vm-131-disk-0.raw: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# file vm-131-disk-0.raw
vm-131-disk-0.raw: Linux rev 1.0 ext4 filesystem data, UUID=c2fea369-b6c0-452f-a810-231cc697920d (needs journal recovery) (extents) (64bit) (large files) (huge files)


so i'm curious if lvm volumes for CTs really have a partition table - and why
 
That is way over my head, but I can add that I created all the containers through the Proxmox 7 and 8 web UI wizard - including the root disks - so what you see is made by Proxmox by default.
 
it's like i suspected, proxmox does not create a partition table on container disks, at least not for the container i created for testing

it must be some kind of operator error, otherwise i have no clue why fdisk should detect a corrupted primary partition table and a valid backup one (there's a sketch below the fdisk output for looking at both GPT copies directly)

Logical volume "vm-101-disk-0" created.
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: a28d4c5e-85c1-4325-9edc-e5a951ef3afc
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
extracting archive '/data/template/cache/debian-11-standard_11.7-1_amd64.tar.zst'
Total bytes read: 490127360 (468MiB, 144MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
done: SHA256:nr66qSHogFIANCsSXgeRwJrzbGVlZ8KBuofj6CqXq4c root@testtest
Creating SSH host key 'ssh_host_dsa_key' - this may take some time ...
done: SHA256:0APgtQDkQfd0KlL4g+rfrZq4MshXr+hqAQ/mYgh3ICU root@testtest
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
done: SHA256:ew/fh/+SjTgtURClP/rSlLckZMGrNfPwvLUo/DY2+Vs root@testtest
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
done: SHA256:c204pJ9q1y0hJ8RFA1oGpUjb+BdlX6+NKa5OHUmJCC4 root@testtest
TASK OK


# fdisk -l

Disk /dev/mapper/pve-vm--101--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


# file -s /dev/mapper/pve-vm--101--disk--0
/dev/mapper/pve-vm--101--disk--0: Linux rev 1.0 ext4 filesystem data, UUID=a28d4c5e-85c1-4325-9edc-e5a951ef3afc (extents) (64bit) (large files) (huge files)
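
For anyone who wants to see what fdisk is actually comparing: GPT keeps its primary header in the second sector (LBA 1) and a backup header in the very last sector of the disk. A rough way to eyeball both copies (the device path and sector count are taken from the 8 GiB example at the top of the thread; a valid header starts with the "EFI PART" signature):

Bash:
# primary GPT header lives in sector 1 (LBA 1)
dd if=/dev/mapper/pve-vm--100--disk--0 bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4
# backup GPT header lives in the last sector; 16777216 sectors total, so that is sector 16777215
dd if=/dev/mapper/pve-vm--100--disk--0 bs=512 skip=16777215 count=1 2>/dev/null | hexdump -C | head -n 4
# if neither dump starts with "EFI PART", the volume has no GPT at all, which is the
# normal situation for a Proxmox CT disk (the ext4 filesystem sits directly on it)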
 