Adding new drive > Bad magic number in super-block

altomarketing

Renowned Member
Dec 26, 2012
Hi, I have to add a new drive to my server, and I followed these instructions step by step.

But when I reach the resize2fs step, I run into a problem.

Code:
 resize2fs /dev/mapper/pve-data
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.

I added a 5 TB hard drive (actually there are several drives, but the RAID is configured as one drive).

cat /proc/mounts
Code:
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=3081229,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=4935708k,mode=755 0 0
/dev/dm-0 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs rw,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/ 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children,nsroot=/ 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct,nsroot=/ 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio,nsroot=/ 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory,nsroot=/ 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices,nsroot=/ 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer,nsroot=/ 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio,nsroot=/ 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/ 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/ 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/ 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=21,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/lxcfs/controllers tmpfs rw,relatime,size=100k,mode=700 0 0
pids /run/lxcfs/controllers/pids cgroup rw,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/ 0 0
hugetlb /run/lxcfs/controllers/hugetlb cgroup rw,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/ 0 0
perf_event /run/lxcfs/controllers/perf_event cgroup rw,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/ 0 0
net_cls,net_prio /run/lxcfs/controllers/net_cls,net_prio cgroup rw,relatime,net_cls,net_prio,nsroot=/ 0 0
freezer /run/lxcfs/controllers/freezer cgroup rw,relatime,freezer,nsroot=/ 0 0
devices /run/lxcfs/controllers/devices cgroup rw,relatime,devices,nsroot=/ 0 0
memory /run/lxcfs/controllers/memory cgroup rw,relatime,memory,nsroot=/ 0 0
blkio /run/lxcfs/controllers/blkio cgroup rw,relatime,blkio,nsroot=/ 0 0
cpu,cpuacct /run/lxcfs/controllers/cpu,cpuacct cgroup rw,relatime,cpu,cpuacct,nsroot=/ 0 0
cpuset /run/lxcfs/controllers/cpuset cgroup rw,relatime,cpuset,clone_children,nsroot=/ 0 0
name=systemd /run/lxcfs/controllers/name=systemd cgroup rw,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/ 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
cgmfs /run/cgmanager/fs tmpfs rw,relatime,size=100k,mode=755 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

cat /etc/fstab
Code:
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

mount
Code:
mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=3081229,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=4935708k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children,nsroot=/)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct,nsroot=/)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio,nsroot=/)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory,nsroot=/)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices,nsroot=/)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer,nsroot=/)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio,nsroot=/)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=21,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/lxcfs/controllers type tmpfs (rw,relatime,size=100k,mode=700)
pids on /run/lxcfs/controllers/pids type cgroup (rw,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/)
hugetlb on /run/lxcfs/controllers/hugetlb type cgroup (rw,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
perf_event on /run/lxcfs/controllers/perf_event type cgroup (rw,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/)
net_cls,net_prio on /run/lxcfs/controllers/net_cls,net_prio type cgroup (rw,relatime,net_cls,net_prio,nsroot=/)
freezer on /run/lxcfs/controllers/freezer type cgroup (rw,relatime,freezer,nsroot=/)
devices on /run/lxcfs/controllers/devices type cgroup (rw,relatime,devices,nsroot=/)
memory on /run/lxcfs/controllers/memory type cgroup (rw,relatime,memory,nsroot=/)
blkio on /run/lxcfs/controllers/blkio type cgroup (rw,relatime,blkio,nsroot=/)
cpu,cpuacct on /run/lxcfs/controllers/cpu,cpuacct type cgroup (rw,relatime,cpu,cpuacct,nsroot=/)
cpuset on /run/lxcfs/controllers/cpuset type cgroup (rw,relatime,cpuset,clone_children,nsroot=/)
name=systemd on /run/lxcfs/controllers/name=systemd type cgroup (rw,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

df -k
Code:
Filesystem     1K-blocks    Used Available Use% Mounted on
udev               10240       0     10240   0% /dev
tmpfs            4935708    9248   4926460   1% /run
/dev/dm-0       59732092 1385648  55289192   3% /
tmpfs           12339268   61248  12278020   1% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs           12339268       0  12339268   0% /sys/fs/cgroup
tmpfs                100       0       100   0% /run/lxcfs/controllers
cgmfs                100       0       100   0% /run/cgmanager/fs
/dev/fuse          30720      20     30700   1% /etc/pve

Actually, in my GUI I can see the new drive, but what did I do wrong that I cannot run resize2fs?

[Attachment: PROX.PNG]
 
More info:

Code:
root@amor:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 232.4G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   127M  0 part
└─sda3               8:3    0 232.3G  0 part
  ├─pve-root       251:0    0    58G  0 lvm  /
  ├─pve-swap       251:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta 251:2    0    76M  0 lvm
  │ └─pve-data     251:4    0   5.6T  0 lvm
  └─pve-data_tdata 251:3    0   5.6T  0 lvm
    └─pve-data     251:4    0   5.6T  0 lvm
sdb                  8:16   0   5.5T  0 disk
└─sdb1               8:17   0   5.5T  0 part
  └─pve-data_tdata 251:3    0   5.6T  0 lvm
    └─pve-data     251:4    0   5.6T  0 lvm
 
Since Proxmox VE 4.2 there is no LV "data" anymore, but an LVM thin pool "data";
see https://pve.proxmox.com/wiki/LVM2

So you cannot mount pve/data or resize its file system (since it has none),
but you already extended it.
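
For reference, a quick way to check that data really is a thin pool rather than a plain LV with a file system is lvs -a. On a setup like yours it should print roughly the sketch below (LV sizes taken from your lsblk output; the Data%/Meta% values are placeholders). The leading "t" in the Attr column is what marks the thin pool:

Code:
lvs -a pve
  LV              VG  Attr       LSize  Pool Origin Data%  Meta%
  data            pve twi-aotz--  5.60t             0.00   0.42
  [data_tdata]    pve Twi-ao----  5.60t
  [data_tmeta]    pve ewi-ao---- 76.00m
  root            pve -wi-ao---- 58.00g
  swap            pve -wi-ao----  8.00g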

CAUTION: please note the section about extending the metadata size in the wiki link above
(otherwise you may get corruption).
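
A quick way to keep an eye on metadata usage (assuming the default pve volume group and data thin pool) is:

Code:
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data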

I will see if I have the time to update the wiki article you followed.
 
Since Proxmox VE 4.2 there is no LV "data" anymore, but an LVM thin pool "data";
see https://pve.proxmox.com/wiki/LVM2

So you cannot mount pve/data or resize its file system (since it has none),
but you already extended it.

OK, should I undo what I did and redo it as detailed here: https://pve.proxmox.com/wiki/LVM2#Configuration_2 ?



I did not understand this: "please note the section about extending the metadata size in the wiki link above (otherwise you may get corruption)".

Thanks a lot! Most of us learn from experts like you through your wikis; there is no better way to learn than directly from your expertise!
 
I did not understand this: "please note the section about extending the metadata size in the wiki link above (otherwise you may get corruption)".
I mean this:

Resize metadata pool
Note: If the pool is extended, then it is also necessary to extend the metadata pool.
This can be achieved with the following command.

lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
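
For example, on a default Proxmox VE installation the volume group is pve and the thin pool is data, so the call could look like this (the +1G here is only an illustration; choose a size that fits your pool):

Code:
lvresize --poolmetadatasize +1G pve/data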
 
Hi there... I just followed the same steps myself and at the end I get the same error:
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.

My company is testing out Proxmox v5 and my knowledge of LVM is limited. I added a second hard drive and followed all the directions. Everything is good until the end. I see that you say to use lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>

But you have to use a logical volume name for this part, at least that's what the error says. What is the default name for this? Sorry, I am not very good at LVM yet, but I am stuck at this part. Please, please help! :)

Thanks,
Ricardo
 
