help with log

brucexx

Renowned Member
Mar 19, 2015
I had to pull two drives in a RAID1 array. They were not in use, and I could not reboot or stop the server to do this as I have tons of VMs on it. I removed the LVM LV and VG, and removed the storage from the node, before I pulled them out. Now I see tons of this in the log:

kernel: blk_partition_remap: fail for partition 1

Does anybody have any idea how to stop it? It is logging like crazy.

Thank you
 
It's possible the Debian system backing Proxmox hasn't let go of the LVs/VGs. Can you run `lvs` and `vgs` to see if the volumes/disks still show up on the Debian side despite being removed? If so, you might want to remove them with the LVM commands.
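Something like this should show whether anything is left behind (the VG/LV/device names below are only examples, substitute whatever `lvs` and `vgs` actually report on your node):

```
# list logical volumes and volume groups LVM still knows about
lvs
vgs

# if the old RAID1 volumes still show up, remove them (example names)
lvremove /dev/old_vg/old_lv
vgremove old_vg
pvremove /dev/sdX1
```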
 
Thanks, I forgot to remove the mapper and fstab entries. All good now.
I got this in my fstab:
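For anyone else who hits the same message, the leftover cleanup is roughly the following (the mapper name is only an example, check `dmsetup ls` for the real one on your system):

```
# show device-mapper entries that may still reference the removed disks
dmsetup ls

# remove the stale mapping (example name)
dmsetup remove old_vg-old_lv

# then delete or comment out the matching line in /etc/fstab
nano /etc/fstab
```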
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=08C7-9AF7 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

I have no idea which entry is for my 3 TB USB HDD. Also, please guide me on how to remove it from the device mapper. I am a beginner. Regards
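None of the fstab lines shown above point at a USB drive, so it may be mounted some other way. A sketch of how you could identify the disk and any mapper entry for it (nothing here deletes anything, it only lists devices):

```
# list block devices with size, filesystem and mount point to spot the 3 TB USB drive
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# show UUIDs so a device can be matched against /etc/fstab entries
blkid

# show device-mapper entries (LVM volumes appear here)
dmsetup ls
```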