Ceph GUI has corrupted drive

yyyy

Member
Nov 28, 2023
Hello,

I initially used Ceph to create an OSD on the sda drive, but then I stopped the OSD and deleted it. Now, for some reason, when I run mkfs to format the sda drive it keeps showing as busy, even though no processes are using the disk.

root@pve:~# mkfs.xfs /dev/sda
mkfs.xfs: cannot open /dev/sda: Device or resource busy

Note that umount also reports that sda is not mounted.
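
A "Device or resource busy" error on a disk that is not mounted usually means something else still holds the block device, most often a device-mapper/LVM volume that ceph-volume created for the OSD. A few commands that may help identify the holder (a hedged sketch, not specific to this setup):

lsblk /dev/sda                  # show anything still stacked on top of the disk
ls -l /sys/block/sda/holders/   # kernel's view of what holds the whole disk
dmsetup ls --tree               # device-mapper mappings (ceph-volume OSDs show up here)
fuser -v /dev/sda               # any processes with the device node open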

Also, when I try to wipe the disk using the GUI, I see this:

disk/partition '/dev/sda3' has a holder (500)

Furthermore, trying to wipe the sda1 partition shows this:

error wiping '/dev/sda1': dd: invalid number: '0.9833984375'
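
The "has a holder" message points the same way: an OSD created with ceph-volume leaves an LVM physical volume and logical volume on the disk, and that mapping keeps /dev/sda3 busy even after the OSD itself is destroyed. If that is the case, and the disk holds nothing you still need, something along these lines should release it (a hedged sketch; the mapping and volume-group names are placeholders):

pvs && vgs && lvs                        # look for a ceph-... volume group on /dev/sda3
ceph-volume lvm zap /dev/sda --destroy   # let Ceph tear down its own LVM metadata
# or remove the holder manually:
dmsetup ls                               # note the ceph-... mapping name
dmsetup remove <mapping-name>            # <mapping-name> is a placeholder from the line above
vgremove <ceph-vg> && pvremove /dev/sda3 # <ceph-vg> is a placeholder, use the name from vgs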


gdisk also shows no partitions, but lsblk shows sda1, sda2, and sda3:
root@dws-pve:~# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present

Creating new GPT entries in memory.

Command (? for help): d
No partitions

Note also that I’ve restarted and the same issue persists.
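
On the gdisk/lsblk mismatch: gdisk reads the partition table from the disk itself, while lsblk reports what the kernel currently exposes, so stale sda1-3 entries can linger until the kernel re-reads the table or the holder goes away. A cheap check (hedged; it will simply fail with "busy" if something still holds the disk):

partprobe /dev/sda             # ask the kernel to re-read the partition table
blockdev --rereadpt /dev/sda   # alternative to partprobe
lsblk /dev/sda                 # see whether the stale sda1-3 entries are gone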
 
What does mount show? Are there really no remnants left (e.g. the daemon itself)?
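
A few quick ways to check for such remnants (assuming the Ceph packages are still installed) could be:

systemctl list-units 'ceph-osd@*'   # any OSD daemon still defined or running?
ceph-volume lvm list                # any LVM-backed OSD metadata still present?
ls /var/lib/ceph/osd/               # leftover OSD data directories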
 
Here’s the output of mount, though I don’t see any sda in it:

root@pve:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=131999504k,nr_inodes=32999876,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=26406600k,mode=755,inode64)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,posixacl,casesensitive)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=59007)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
rpool on /rpool type zfs (rw,relatime,xattr,noacl,casesensitive)
rpool/var-lib-vz on /var/lib/vz type zfs (rw,relatime,xattr,noacl,casesensitive)
rpool/data on /rpool/data type zfs (rw,relatime,xattr,noacl,casesensitive)
rpool/ROOT on /rpool/ROOT type zfs (rw,relatime,xattr,noacl,casesensitive)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=26406596k,nr_inodes=6601649,mode=700,inode64)
 
What happens if you simply use the "Wipe Disk" function directly via PVE?
 
Update: I’ve solved the issue by running wipefs, then gdisk /dev/sda and fdisk /dev/sda to delete all remaining partitions, and finally rebooting the machine. Once rebooted, the drive was usable again.
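
For anyone who lands here with the same problem, the steps above correspond roughly to the following (a hedged sketch; it is destructive, so only run it against a disk whose data you no longer need):

wipefs -a /dev/sda          # clear leftover filesystem/LVM/Ceph signatures
sgdisk --zap-all /dev/sda   # or delete the partitions interactively in gdisk/fdisk
reboot                      # let the kernel pick up the now-empty disk
# after the reboot the disk should format normally:
mkfs.xfs /dev/sda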
 
