Delete old disks

Byron

Member
Apr 2, 2019
I'd like to clear old leftovers from my drive that remained after reinstalling Proxmox.
There are several drives/partitions listed like this:

Code:
Disk /dev/mapper/pve--OLD--C576DBC9-vm--104--disk--0: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: D7B0D570-110A-4387-BD01-B833B43999A8

Device                                                Start       End   Sectors  Size Type
/dev/mapper/pve--OLD--C576DBC9-vm--104--disk--0-part1  2048      4095      2048    1M BIOS boot
/dev/mapper/pve--OLD--C576DBC9-vm--104--disk--0-part2  4096 419428351 419424256  200G Linux filesystem

How do I get rid of these?
I've tried deleting the partitions inside the disk, but the disk itself remains:

Code:
Disk /dev/mapper/pve--OLD--C576DBC9-vm--103--disk--0: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: D7B0D570-110A-4387-BD01-B833B43999A8

I'll admit I have little understanding of how to properly use my two disks.
Edit: lsblk made it a lot clearer what is where:

Code:
NAME                                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                                     259:0    0 931.5G  0 disk
├─nvme0n1p1                                 259:5    0  1007K  0 part
├─nvme0n1p2                                 259:6    0   512M  0 part /boot/efi
└─nvme0n1p3                                 259:7    0   931G  0 part
  ├─pve-swap                                253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                          253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool                        253:4    0 794.8G  0 lvm
  │   ├─pve-data                            253:5    0 794.8G  0 lvm
  │   ├─pve-vm--100--disk--0                253:6    0   800G  0 lvm
  │   └─pve-vm--101--disk--0                253:17   0   100G  0 lvm
  └─pve-data_tdata                          253:3    0 794.8G  0 lvm
    └─pve-data-tpool                        253:4    0 794.8G  0 lvm
      ├─pve-data                            253:5    0 794.8G  0 lvm
      ├─pve-vm--100--disk--0                253:6    0   800G  0 lvm
      └─pve-vm--101--disk--0                253:17   0   100G  0 lvm
nvme1n1                                     259:1    0 931.5G  0 disk
├─nvme1n1p1                                 259:2    0  1007K  0 part
├─nvme1n1p2                                 259:3    0   512M  0 part
└─nvme1n1p3                                 259:4    0 899.5G  0 part
  ├─pve--OLD--C576DBC9-swap                 253:7    0     8G  0 lvm
  ├─pve--OLD--C576DBC9-root                 253:8    0    50G  0 lvm
  ├─pve--OLD--C576DBC9-data_tmeta           253:9    0   8.3G  0 lvm
  │ └─pve--OLD--C576DBC9-data-tpool         253:11   0   809G  0 lvm
  │   ├─pve--OLD--C576DBC9-data             253:12   0   809G  0 lvm
  │   ├─pve--OLD--C576DBC9-vm--101--disk--0 253:13   0   200G  0 lvm
  │   ├─pve--OLD--C576DBC9-vm--102--disk--0 253:14   0   200G  0 lvm
  │   ├─pve--OLD--C576DBC9-vm--104--disk--0 253:15   0   200G  0 lvm
  │   └─pve--OLD--C576DBC9-vm--103--disk--0 253:16   0   200G  0 lvm
  └─pve--OLD--C576DBC9-data_tdata           253:10   0   809G  0 lvm
    └─pve--OLD--C576DBC9-data-tpool         253:11   0   809G  0 lvm
      ├─pve--OLD--C576DBC9-data             253:12   0   809G  0 lvm
      ├─pve--OLD--C576DBC9-vm--101--disk--0 253:13   0   200G  0 lvm
      ├─pve--OLD--C576DBC9-vm--102--disk--0 253:14   0   200G  0 lvm
      ├─pve--OLD--C576DBC9-vm--104--disk--0 253:15   0   200G  0 lvm
      └─pve--OLD--C576DBC9-vm--103--disk--0 253:16   0   200G  0 lvm

I'm pretty sure the second disk (nvme1n1) can be completely wiped. Is there a way I can be sure? And how would I then be able to use the disk for VMs?
 
My approach would be the following:
- Are there any VMs with IDs 101 onwards (102, 103, 104)?
If yes, check whether any of their configuration files point to the devices you mentioned.
If no, then you should be safe to delete them.
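The check and cleanup above could look roughly like this. This is only a sketch, not something I've run on your machine: the VG name `pve-OLD-C576DBC9` is copied from your lsblk output, and I've added a `DRY_RUN` guard so nothing is actually deleted until you clear it.

```shell
#!/bin/sh
# Sketch only: verify no guest config still references the old volume group,
# then remove it. DRY_RUN defaults to "echo" so the destructive commands are
# printed rather than executed; set DRY_RUN="" to run them for real.
OLD_VG="pve-OLD-C576DBC9"        # taken from the lsblk output above
DRY_RUN="${DRY_RUN-echo}"

# Any hit here means a guest still points at the old disks -- do NOT delete.
grep -r "$OLD_VG" /etc/pve/qemu-server/ /etc/pve/lxc/ 2>/dev/null

# If grep found nothing: remove every LV in the VG, then the VG itself,
# then clear the LVM label from the underlying partition.
$DRY_RUN lvremove -y "$OLD_VG"
$DRY_RUN vgremove "$OLD_VG"
$DRY_RUN pvremove /dev/nvme1n1p3
```

That should make the leftover `/dev/mapper/pve--OLD--...` devices disappear, since they are LVM logical volumes rather than real disks, which is why deleting partitions inside them didn't help.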

How would I be able to use the disk for VMs?
I have never used LVM-attached storage (and it seems you are doing so), so I can't speak from experience. However, I have noticed that Proxmox requires disks to be in a certain format before you can attach them through the UI.
So you may need to set up the VM and edit its config file manually to attach the disks.
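For what it's worth, once the old VG is gone, the usual route I've seen is to wipe the disk and create a fresh LVM-thin pool, which Proxmox can then use directly as VM storage. A sketch, where the names `vg_nvme1`, `data`, and the storage ID `local-nvme1` are placeholders of mine, not anything from your setup:

```shell
#!/bin/sh
# Sketch: turn the freed second disk into an LVM-thin pool for VM storage.
# vg_nvme1 / data / local-nvme1 are placeholder names.
DISK="/dev/nvme1n1"
DRY_RUN="${DRY_RUN-echo}"   # set DRY_RUN="" to actually run the commands

$DRY_RUN wipefs --all "$DISK"                    # destroys ALL data on the disk
$DRY_RUN pvcreate "$DISK"
$DRY_RUN vgcreate vg_nvme1 "$DISK"
$DRY_RUN lvcreate -l 100%FREE -T vg_nvme1/data   # thin pool spanning the VG
$DRY_RUN pvesm add lvmthin local-nvme1 --vgname vg_nvme1 --thinpool data
```

After the last step the new storage should show up in the web UI and be selectable when creating VM disks, without any manual config editing.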
 
Hello, I also have one old disk:

Code:
root@proxmox:~# pvs
  PV         VG               Fmt  Attr PSize    PFree
  /dev/sda3  pve              lvm2 a--  <465.26g <16.00g
  /dev/sdb3  pve-OLD-34F0BDAD lvm2 a--    59.12g  59.12g

I want to delete it, since it's completely free, and create a ZFS disk instead. Is that possible?
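Since pvs shows PFree equal to PSize, the old VG looks empty, so something like the following should free /dev/sdb for ZFS. A sketch only; `tank` is a placeholder pool name, wiping /dev/sdb destroys everything on it, and `DRY_RUN` keeps the commands from running until you clear it:

```shell
#!/bin/sh
# Sketch: drop the leftover VG, then build a single-disk ZFS pool on sdb.
# "tank" is a placeholder pool name. Set DRY_RUN="" to execute for real.
DRY_RUN="${DRY_RUN-echo}"

$DRY_RUN vgremove pve-OLD-34F0BDAD        # VG holds no LVs (PFree == PSize)
$DRY_RUN pvremove /dev/sdb3
$DRY_RUN wipefs --all /dev/sdb            # destroys ALL remaining data on sdb
$DRY_RUN zpool create tank /dev/sdb
$DRY_RUN pvesm add zfspool tank --pool tank   # register it as PVE storage
```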
 
