Proxmox secondary HDD formatting problem

petardimic

New Member
Nov 17, 2024
Hello!

So, I have a secondary HDD in my Proxmox machine. This HDD was intended to hold backups, but while playing around and testing Proxmox I created a ZFS pool on it.

Now I want to format it and reconfigure that HDD mainly for holding backups, but when I try to wipe the disk I get an error that wipefs failed because the disk is busy. I did a bunch of things and went through all of the Proxmox config so that "hdd-zeus" is no longer referenced anywhere, but the problem remains. I also tried dd'ing zeroes over it just to overwrite whatever is on it, still no luck, and now I get:

Code:
root@home-node-5500:~# mkfs.ext4 /dev/sda1
mke2fs 1.47.0 (5-Feb-2023)
/dev/sda1 is apparently in use by the system; will not make a filesystem here!
 
Sorry bbgeek!

Here are outputs!

Code:
root@home-node-5500:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=7045828k,nr_inodes=1761457,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1415932k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,stripe=32)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11288)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
hdd-zeus on /hdd-zeus type zfs (rw,relatime,xattr,noacl,casesensitive)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=1415928k,nr_inodes=353982,mode=700,inode64)
root@home-node-5500:~#

Code:
root@home-node-5500:~# df
Filesystem           1K-blocks     Used Available Use% Mounted on
udev                   7045828        0   7045828   0% /dev
tmpfs                  1415932     1600   1414332   1% /run
/dev/mapper/pve-root  98497780 60972196  32476036  66% /
tmpfs                  7079648    59640   7020008   1% /dev/shm
tmpfs                     5120        0      5120   0% /run/lock
efivarfs                   148       62        82  44% /sys/firmware/efi/efivars
/dev/nvme0n1p2         1046512    11936   1034576   2% /boot/efi
hdd-zeus             471462912      128 471462784   1% /hdd-zeus
/dev/fuse               131072       24    131048   1% /etc/pve
tmpfs                  1415928        0   1415928   0% /run/user/0

Code:
root@home-node-5500:~# lsblk
NAME                                               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                  8:0    0 465.8G  0 disk
└─sda1                                               8:1    0 465.8G  0 part
nvme0n1                                            259:0    0 476.9G  0 disk
├─nvme0n1p1                                        259:1    0  1007K  0 part
├─nvme0n1p2                                        259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                                        259:3    0 475.9G  0 part
  ├─pve-swap                                       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                       252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                                 252:2    0   3.6G  0 lvm 
  │ └─pve-data-tpool                               252:4    0 348.8G  0 lvm 
  │   ├─pve-data                                   252:5    0 348.8G  1 lvm 
  │   ├─pve-vm--103--state--snapshot--25--10--2024 252:6    0   4.5G  0 lvm 
  │   ├─pve-base--999--disk--0                     252:7    0    10G  1 lvm 
  │   ├─pve-vm--254--disk--0                       252:8    0   128M  0 lvm 
  │   ├─pve-vm--103--disk--0                       252:9    0    10G  0 lvm 
  │   ├─pve-vm--100--disk--0                       252:10   0    90G  0 lvm 
  │   ├─pve-vm--101--disk--0                       252:11   0    80G  0 lvm 
  │   ├─pve-vm--102--disk--0                       252:13   0   110G  0 lvm 
  │   └─pve-vm--104--disk--0                       252:14   0     8G  0 lvm 
  └─pve-data_tdata                                 252:3    0 348.8G  0 lvm 
    └─pve-data-tpool                               252:4    0 348.8G  0 lvm 
      ├─pve-data                                   252:5    0 348.8G  1 lvm 
      ├─pve-vm--103--state--snapshot--25--10--2024 252:6    0   4.5G  0 lvm 
      ├─pve-base--999--disk--0                     252:7    0    10G  1 lvm 
      ├─pve-vm--254--disk--0                       252:8    0   128M  0 lvm 
      ├─pve-vm--103--disk--0                       252:9    0    10G  0 lvm 
      ├─pve-vm--100--disk--0                       252:10   0    90G  0 lvm 
      ├─pve-vm--101--disk--0                       252:11   0    80G  0 lvm 
      ├─pve-vm--102--disk--0                       252:13   0   110G  0 lvm 
      └─pve-vm--104--disk--0                       252:14   0     8G  0 lvm
 
Just one more thing: I can't restart that node, so if there is a solution that doesn't require a restart, awesome; otherwise I will have to wait a long time until the next restart.
 
You should plug the wipefs error into Google and go through the suggestions available there, e.g.:
https://askubuntu.com/questions/926698/wipefs-device-or-resource-busy
https://askubuntu.com/questions/649052/error-wiping-newly-created-partition-dev-sdb1

It also sounds like you used this disk previously with PVE? What does /etc/pve/storage.cfg say? Any remnants of it there?
Restart PVE services: systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler pve-ha-lrm
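
Also, judging by your mount and df output, the old pool hdd-zeus is still imported and mounted at /hdd-zeus, and an imported ZFS pool will keep the underlying disk busy. If that really is the leftover pool (double-check the name and that nothing on it is still needed), exporting or destroying it first should free the disk up. A rough, untested sketch:

Code:
zpool status hdd-zeus    # confirm the pool is still imported and check which device it sits on
zpool export hdd-zeus    # or: zpool destroy hdd-zeus, if nothing on it is needed anymore
wipefs -af /dev/sda      # with the pool gone, the disk should no longer be busy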


If things are still not working, then post the actual commands you ran and the errors you received.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you bbgeek!

I managed to wipe it with wipefs -af /dev/sda, but I still can't create the storage via the Proxmox GUI.


Code:
root@home-node-5500:~# sudo umount /dev/sda*
umount: /dev/sda: not mounted.
umount: /dev/sda1: not mounted.

This is the output of cat /etc/pve/storage.cfg:

Code:
root@home-node-5500:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
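
I am guessing the GUI still sees some leftover ZFS label or stale partition data on sda1, so the next thing I plan to try (untested, just my guess) is clearing that and letting the kernel re-read the disk before retrying the Directory wizard:

Code:
zpool labelclear -f /dev/sda1                       # remove any leftover ZFS label on the old partition
sgdisk --zap-all /dev/sda                           # wipe the partition table as well
blockdev --rereadpt /dev/sda                        # let the kernel re-read the (now empty) partition table
systemctl try-reload-or-restart pvestatd pvedaemon  # refresh the daemons the GUI talks to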
 
