[SOLVED] Error creating OSD disks (Mounting filesystem failed)

felipemb

Hi,

I have configured a 3-node Proxmox VE 4.4 cluster, and when I create a 4TB OSD disk I get this error:

Code:
create OSD on /dev/sdb (xfs)

Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=243860917 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=975443665, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=476290, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mount: unknown filesystem type 'xfs'
ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime,inode64', '--', '/dev/sdb1', '/var/lib/ceph/tmp/mnt.PD8I4X']' returned non-zero exit status 32
TASK ERROR: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid a296998e-2bce-475d-b941-21a44157925e /dev/sdb' failed: exit code 1


My versions:

Code:
proxmox-ve: 4.4-88 (running kernel: 4.4.62-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-50
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-100
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 10.2.7-1~bpo80+1

Any idea what happened?

Thanks very much :)
 
Is the package 'xfsprogs' installed?
Is the 'xfs' module present and loaded?
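You can check both quickly from a shell on the node; a minimal sketch, assuming a standard Debian-based PVE install:

Code:
dpkg -l xfsprogs                  # is the xfsprogs package installed?
modprobe xfs && lsmod | grep xfs  # can the xfs kernel module be loaded?
apt-get install xfsprogs          # install the userland tools if missing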
 
My solution...

Reboot the nodes, then delete the partitions with parted:

Code:
# parted /dev/sdb
(parted) rm 1    (deletes partition sdb1)
(parted) rm 2    (deletes partition sdb2)
(parted) quit

Then I went back and created the OSD disk on /dev/sdb from the GUI, and it worked fine!
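To double-check that the disk is really empty before retrying, something like this should do (a sketch; adjust the device name to your disk):

Code:
parted /dev/sdb print   # the partition table should list no partitions
lsblk /dev/sdb          # sdb1/sdb2 should no longer appear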


Thanks!
 
