Can Not Remove Disk from pve-data

dannyman

New Member
I inherited this setup from the previous regime. A cluster of Proxmox 3.1. Each host has four disks. sda is the system disk, and sdb, sdc, sdd are allocated to /var/lib/vz:

Code:
# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
udev                                                      10M     0   10M   0% /dev
tmpfs                                                     13G  428K   13G   1% /run
/dev/mapper/pve-root                                      95G  1.7G   89G   2% /
tmpfs                                                    5.0M     0  5.0M   0% /run/lock
tmpfs                                                     26G   47M   26G   1% /run/shm
/dev/mapper/pve-data                                     1.6T  139G  1.5T   9% /var/lib/vz
/dev/sda1                                                495M   60M  411M  13% /boot
/dev/fuse                                                 30M  220K   30M   1% /etc/pve
nas:/openvz                                               12T  278G   12T   3% /mnt/pve/NAS

Okay, so, sdb is failing and needs to be replaced:
Code:
# smartctl -a /dev/sdb
[ . . . ]
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.
[ . . . ]

So I RMA'd the disk, and now I have the spare to swap in. Let's get sdb out of there!

Here's the first thing that raised my eyebrow:
Code:
# pvdisplay /dev/sdb1 /dev/sdc1 /dev/sdd1
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vgdata
  PV Size               1.82 TiB / not usable 2.56 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               XrKI9q-DVVO-SKXo-2csj-jdcq-hSPf-Vml73Z
   
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vgdata
  PV Size               1.82 TiB / not usable 2.56 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               VkHc8s-s2Wm-1ZZv-JHGB-SRTM-tXup-qv8QEq
   
  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vgdata
  PV Size               1.82 TiB / not usable 2.56 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               4qqcWX-eHOv-qEff-LL1X-7nH0-bA4q-5FeZcd
   
# lvdisplay vgdata
  --- Logical volume ---
  LV Path                /dev/vgdata/lvol0
  LV Name                lvol0
  VG Name                vgdata
  LV UUID                4AiAOs-X4MD-DlmP-b2uo-Z2tV-1Wme-WIEbqT
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                5.46 TiB
  Current LE             1430793
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
# df -h /var/lib/vz
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-data  1.6T  139G  1.5T   9% /var/lib/vz

Why, if there are 3x 2TB disks, is the filesystem only 1.6T?
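(A quick extent check shows the vgdata stripe really is 5.46 TiB, so whatever is mounted as 1.6T at /var/lib/vz, it isn't this LV:)

```shell
# Sanity check: the vgdata stripe's size follows from its extents --
# 3 PVs x 476931 extents each x 4 MiB per extent.
echo "$(( 3 * 476931 * 4 )) MiB"
# Same figure in TiB (matches the 5.46 TiB that lvdisplay reports):
awk 'BEGIN { printf "%.2f TiB\n", 3 * 476931 * 4 / 1024 / 1024 }'
```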

Here's where I get in trouble:
Code:
# pvs -o+pv_attr,lv_attr
  PV         VG     Fmt  Attr PSize PFree  Attr Attr     
  /dev/sda2  pve    lvm2 a--  1.82t 16.00g a--  -wi-ao---
  /dev/sda2  pve    lvm2 a--  1.82t 16.00g a--  -wi-ao---
  /dev/sda2  pve    lvm2 a--  1.82t 16.00g a--  -wi-ao---
  /dev/sda2  pve    lvm2 a--  1.82t 16.00g a--           
  /dev/sdb1  vgdata lvm2 a--  1.82t     0  a--  -wi-a----
  /dev/sdc1  vgdata lvm2 a--  1.82t     0  a--  -wi-a----
  /dev/sdd1  vgdata lvm2 a--  1.82t     0  a--  -wi-a----

No free space on any of the PVs, and the lv_attr flags don't indicate any sort of RAID ... I can't pvmove or vgreduce!
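(For the record, the textbook way out of a full VG is to give pvmove somewhere to land first: hot-add the replacement, vgextend, then evacuate. A dry-run sketch -- nothing here executes, each step only prints, and the replacement's device name /dev/sde1 is an assumption:)

```shell
# DRY RUN: print the usual evacuate-a-failing-PV sequence instead of
# executing it. /dev/sde1 (the hot-added replacement) is hypothetical.
run() { echo "WOULD RUN: $*"; }

run pvcreate /dev/sde1           # label the new disk as a PV
run vgextend vgdata /dev/sde1    # give vgdata free extents to move into
run pvmove /dev/sdb1 /dev/sde1   # migrate extents off the failing disk
run vgreduce vgdata /dev/sdb1    # drop the emptied PV from the VG
run pvremove /dev/sdb1           # clear its LVM label before pulling it
```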

Any suggestions here?

Thanks,
-danny
 
dannyman wrote:
> sda is the system disk, and sdb, sdc, sdd are allocated to /var/lib/vz
Sorry - this is not true: sdb, sdc, and sdd are not part of /dev/pve/data (which is what /dev/mapper/pve-data refers to).
Then you show:
...
Code:
# pvdisplay /dev/sdb1 /dev/sdc1 /dev/sdd1
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vgdata
  PV Size               1.82 TiB / not usable 2.56 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               XrKI9q-DVVO-SKXo-2csj-jdcq-hSPf-Vml73Z
...


I don't know where you use /dev/vgdata - perhaps for KVM storage?

Take a look at /etc/pve/storage.cfg
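An LVM-backed entry in /etc/pve/storage.cfg would look roughly like this (the storage name here is illustrative, not taken from your cluster):

```
lvm: kvm-store
        vgname vgdata
        content images
```

If nothing in storage.cfg references vgdata, Proxmox isn't using it at all.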

Udo
 
Yes. It looks like the data volume is over on sda, and this other thing is a 3-disk stripe that is unused:

Code:
# dmsetup status 
vgdata-lvol0: 0 11721056256 striped 3 8:17 8:33 8:49 1 AAA
pve-swap: 0 262144000 linear 
pve-root: 0 201326592 linear 
pve-data: 0 3408961536 linear 
root@dvm0:/home/djh# dmsetup ls --tree 
vgdata-lvol0 (253:1)
 ├─ (8:49)
 ├─ (8:33)
 └─ (8:17)
pve-swap (253:2)
 └─ (8:2)
pve-root (253:0)
 └─ (8:2)
pve-data (253:3)
 └─ (8:2)
# ls -l /dev/sd*
brw-rw---T 1 root disk 8,  0 Feb  4 17:09 /dev/sda
brw-rw---T 1 root disk 8,  1 Feb  4 17:09 /dev/sda1
brw-rw---T 1 root disk 8,  2 Feb  4 17:09 /dev/sda2
brw-rw---T 1 root disk 8, 16 Feb  4 17:09 /dev/sdb
brw-rw---T 1 root disk 8, 17 Feb  4 17:09 /dev/sdb1
brw-rw---T 1 root disk 8, 32 Feb  4 17:09 /dev/sdc
brw-rw---T 1 root disk 8, 33 Feb  4 17:09 /dev/sdc1
brw-rw---T 1 root disk 8, 48 Feb  4 17:09 /dev/sdd
brw-rw---T 1 root disk 8, 49 Feb  4 17:09 /dev/sdd1
# df -hl
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                  13G  428K   13G   1% /run
/dev/mapper/pve-root   95G  1.7G   89G   2% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  26G   47M   26G   1% /run/shm
/dev/mapper/pve-data  1.6T  139G  1.5T   9% /var/lib/vz
/dev/sda1             495M   60M  411M  13% /boot
/dev/fuse              30M  220K   30M   1% /etc/pve
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=88c4144b-bb91-4fce-bbfe-4a1581572ae4 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

Mounting /dev/mapper/vgdata-lvol0 reveals what looks like some old experiment from two years ago - not a Proxmox thing at all.
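Given that, the clean fix is to dissolve vgdata entirely before pulling the failing disk. A dry-run sketch of the teardown (nothing here executes; each step only prints):

```shell
# DRY RUN: print the teardown of the unused stripe instead of executing.
run() { echo "WOULD RUN: $*"; }

run lvremove vgdata/lvol0                    # drop the unused striped LV
run vgremove vgdata                          # dissolve the now-empty VG
run pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1   # clear the LVM labels
```

With the VG gone, sdb can be swapped without any pvmove gymnastics.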
 
