[SOLVED] Reinstall Proxmox to move from LVM boot disk to ZFS (RAIDZ-1)

mcaroberts

New Member
Feb 14, 2024
I'm new to Proxmox and Linux alike. I've been an IT professional for a long time in the Windows world, and I'm really enjoying learning Linux and Proxmox. I have a Dell T460 with a single 1 TB SATA boot drive and a PERC H730P running a RAID 6 of eight 10 TB drives. I have 6 more 1 TB drives that I would like to use as my boot/local storage, mainly because I keep running out of space on my current 1 TB drive.

I would like to reinstall Proxmox on the seven 1 TB drives using ZFS (RAIDZ-1). Another option: I also have a PERC H330 I could use to put those drives in a RAID 5 config, but if I remember right, when installing Proxmox the first time it didn't see my RAID 6 config until I was in the web console (it wasn't offered as an install target). The issue I have is that I have one VM with two 14 TB drives attached and one VM with two 10 TB drives attached, and I don't have anything big enough to back that up to.

Is it possible to reinstall Proxmox to either ZFS or HW RAID, restore the OS from backup, restore the LXCs/VMs without those larger drives attached, then reattach the RAID 6 storage without losing the data and reattach those large disks to the VMs? If so, what should I be backing up (which files and folders)?
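
(Edit: for anyone finding this later, this is roughly the host-side state that seems to matter on a PVE node. I'm not certain it's exhaustive, so treat it as a sketch.)

Code:
# Proxmox keeps VM/LXC/storage config in the pmxcfs-backed /etc/pve tree
/etc/pve/storage.cfg        # storage definitions
/etc/pve/qemu-server/       # VM configs (<vmid>.conf)
/etc/pve/lxc/               # LXC configs (<vmid>.conf)
/etc/network/interfaces     # network config
/etc/fstab                  # mounts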

If I'm making sense, and anyone has a better idea to achieve the above, I'm all ears. Thanks in advance.
 

Ok, I used the easy button (maybe, lol). I put my 6 1 TB drives on the PERC H330 in a RAID 5 and used Clonezilla to clone my PVE boot disk to it. It worked pretty well. On first boot after the clone it booted to emergency mode, where I had to remove a line in /etc/fstab related to a USB backup drive I had, and then I had to run update-grub. After that and a reboot, everything came up fine.
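
Roughly what the fix looked like (from memory, so treat it as a sketch):

Code:
# from the emergency-mode shell:
nano /etc/fstab     # comment out (#) the stale USB backup drive entry
update-grub         # regenerate the grub config for the new disk
reboot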

Now the issue I'm having is that my LVM-Thin pool still shows as 852.88 GB, and I'm not sure how to resize it. I've done some Googling, but I'm not sure if I'm looking at the right thing or not.

pvs:
Code:
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda3  pve  lvm2 a--   4.54t  3.65t
  /dev/sdb   Raid lvm2 a--  54.57t 17.05t

vgs:
Code:
  VG   #PV #LV #SN Attr   VSize  VFree
  Raid   1  18   0 wz--n- 54.57t 17.05t
  pve    1   3   0 wz--n-  4.54t  3.65t

lvs:
Code:
  LV            VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0 Raid -wi-a-----    8.00g                                                   
  vm-101-disk-0 Raid -wi-a-----    2.00g                                                   
  vm-108-disk-0 Raid -wi-a-----    8.00g                                                   
  vm-112-disk-0 Raid -wi-a-----    2.00g                                                   
  vm-202-disk-0 Raid -wi-a-----    4.00m                                                   
  vm-202-disk-1 Raid -wi-a-----  200.00g                                                   
  vm-203-disk-0 Raid -wi-a-----    4.00m                                                   
  vm-203-disk-1 Raid -wi-a-----  100.00g                                                   
  vm-203-disk-2 Raid -wi-a-----    4.00m                                                   
  vm-203-disk-3 Raid -wi-a-----   <9.77t                                                   
  vm-203-disk-4 Raid -wi-a----- 1000.00g                                                   
  vm-204-disk-0 Raid -wi-a-----    4.00m                                                   
  vm-204-disk-1 Raid -wi-a-----  100.00g                                                   
  vm-204-disk-2 Raid -wi-a-----    4.00m                                                   
  vm-204-disk-3 Raid -wi-a-----   13.67t                                                   
  vm-204-disk-4 Raid -wi-a-----  <11.72t                                                   
  vm-204-disk-5 Raid -wi-a-----  500.00g                                                   
  vm-204-disk-6 Raid -wi-a-----  500.00g                                                   
  data          pve  twi-aotz--  794.30g             0.00   0.24                           
  root          pve  -wi-ao----   96.00g                                                   
  swap          pve  -wi-ao----    8.00g

lsblk:
Code:
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                       8:0    0   4.5T  0 disk
├─sda1                    8:1    0   4.9M  0 part
├─sda2                    8:2    0     1G  0 part /boot/efi
└─sda3                    8:3    0   4.5T  0 part
  ├─pve-swap            252:18   0     8G  0 lvm  [SWAP]
  ├─pve-root            252:19   0    96G  0 lvm  /
  ├─pve-data_tmeta      252:20   0   8.1G  0 lvm 
  │ └─pve-data-tpool    252:22   0 794.3G  0 lvm 
  │   └─pve-data        252:23   0 794.3G  1 lvm 
  └─pve-data_tdata      252:21   0 794.3G  0 lvm 
    └─pve-data-tpool    252:22   0 794.3G  0 lvm 
      └─pve-data        252:23   0 794.3G  1 lvm 
sdb                       8:16   0  54.6T  0 disk
├─Raid-vm--202--disk--0 252:0    0     4M  0 lvm 
├─Raid-vm--202--disk--1 252:1    0   200G  0 lvm 
├─Raid-vm--108--disk--0 252:2    0     8G  0 lvm 
├─Raid-vm--203--disk--0 252:3    0     4M  0 lvm 
├─Raid-vm--203--disk--1 252:4    0   100G  0 lvm 
├─Raid-vm--203--disk--2 252:5    0     4M  0 lvm 
├─Raid-vm--203--disk--4 252:6    0  1000G  0 lvm 
├─Raid-vm--204--disk--0 252:7    0     4M  0 lvm 
├─Raid-vm--204--disk--1 252:8    0   100G  0 lvm 
├─Raid-vm--204--disk--2 252:9    0     4M  0 lvm 
├─Raid-vm--204--disk--3 252:10   0  13.7T  0 lvm 
├─Raid-vm--204--disk--4 252:11   0  11.7T  0 lvm 
├─Raid-vm--204--disk--5 252:12   0   500G  0 lvm 
├─Raid-vm--204--disk--6 252:13   0   500G  0 lvm 
├─Raid-vm--203--disk--3 252:14   0   9.8T  0 lvm 
├─Raid-vm--101--disk--0 252:15   0     2G  0 lvm 
├─Raid-vm--112--disk--0 252:16   0     2G  0 lvm 
└─Raid-vm--100--disk--0 252:17   0     8G  0 lvm 
sdc                       8:32   0   9.1T  0 disk
sdd                       8:48   1  14.6G  0 disk
├─sdd1                    8:49   1   240K  0 part
├─sdd2                    8:50   1     8M  0 part
├─sdd3                    8:51   1   1.2G  0 part
└─sdd4                    8:52   1   300K  0 part
sr0                      11:0    1  1024M  0 rom

Disk in question ...
Code:
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   4.5T  0 disk
├─sda1                 8:1    0   4.9M  0 part
├─sda2                 8:2    0     1G  0 part /boot/efi
└─sda3                 8:3    0   4.5T  0 part
  ├─pve-swap         252:18   0     8G  0 lvm  [SWAP]
  ├─pve-root         252:19   0    96G  0 lvm  /
  ├─pve-data_tmeta   252:20   0   8.1G  0 lvm 
  │ └─pve-data-tpool 252:22   0 794.3G  0 lvm 
  │   └─pve-data     252:23   0 794.3G  1 lvm 
  └─pve-data_tdata   252:21   0 794.3G  0 lvm 
    └─pve-data-tpool 252:22   0 794.3G  0 lvm 
      └─pve-data     252:23   0 794.3G  1 lvm


[Screenshot 2024-02-28: GUI disk view showing the ~5 TB sda3 partition with the LVM-Thin volume still at 852.88 GB]

So you can see it shows the partition as 5 TB but the volume as 852 GB. How do I expand the LVM-Thin pool to use the remaining space? Currently there is no data on the LVM-Thin volume; I moved it all to the LVM volume called "Raid". Any help would be appreciated.
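
(Edit: from my Googling since, lvextend on the thin pool looks like the usual answer. I haven't verified this against my setup, so take it as a sketch:)

Code:
# grow the pve/data thin pool into the VG's free extents
lvextend -l +100%FREE pve/data
# if pool metadata ever runs short, it can be grown separately:
# lvextend --poolmetadatasize +1G pve/data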
 
Since I still had everything on my original 1 TB drive, I disconnected it, changed the 6 1 TB drives from RAID to non-RAID, and started with a fresh install of Proxmox using ZFS (RAIDZ-1). But the performance wasn't good in comparison to running with the HW RAID, so I went back, switched the 6 1 TB drives back to HW RAID 5, and did another fresh install.
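
(For anyone wanting to compare the two layouts like I did, pveperf gives a quick, rough benchmark of local storage; this is just the idea, not a rigorous test:)

Code:
# run on each install and compare the FSYNCS/SECOND numbers
pveperf /var/lib/vz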

I found this script (thank you, DerDanilo) to back up the Proxmox host, then backed up all my LXCs to a USB drive. After the fresh install I did a restore and was super surprised to see it found my 60 TB RAID 6 storage with all the data still there. I had been thinking I would have to re-add that storage, and that doing so would wipe the disks; that was the whole reason I started this thread, since I didn't have anywhere to back up my 2 VMs that are using 38 TB of storage. Anyway, after the restore I copied over my VM configs to the new Proxmox install and started the VMs ... done!
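
Roughly what the config copy looked like (my USB backup path below is just a placeholder; the /etc/pve locations are the real ones):

Code:
# VM configs live under /etc/pve/qemu-server on a PVE node
cp /mnt/usb/pve-backup/qemu-server/*.conf /etc/pve/qemu-server/
# LXC configs would go under /etc/pve/lxc/ the same way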

So all that to say: I was way overthinking this! You would think, for as long as I've been in IT, I would have known better. I guess in my old age I'm getting overly cautious. I'm really liking Proxmox, and am glad I made the switch from ESXi. Now to get past the Linux learning curve.
 