Re-adding storage drives to a new proxmox installation

Bishop27
Jul 3, 2024
Good Afternoon,
I completely reinstalled Proxmox to version 8.2.2 with kernel 6.8.4-2-pve. I have 3 drives that I want to re-add to this installation:
- ZFS Pool called BACKUPS
- Directory called ISOIMAGES
- Directory called VM-STORAGE
Can someone help me with the steps I need to take to add these storages back to Proxmox 8.2.2? Afterwards I can restore the VMs and get my system back online.
Any help would be truly appreciated. Thank you in advance.
 
- ZFS Pool called BACKUPS
Enter the zpool import command in the PVE host CLI, then in the GUI add a ZFS storage to the datacenter using the original ID/name, with the original ZFS pool selected.
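For example, a minimal sketch of the import step (assuming the pool is still named BACKUPS, as in your original setup):
Code:
# list pools that are available for import
zpool import

# import the existing pool by name (-f may be needed if it was last used by the old install)
zpool import -f BACKUPS

# confirm it is online
zpool status BACKUPS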

- Directory called ISOIMAGES
- Directory called VM-STORAGE
Assuming these drives are formatted with a regular filesystem (ext4), just mount them (and add them to /etc/fstab if you need them mounted permanently, using the UUID or PARTUUID of the relevant partition), then in the GUI add a Directory storage to the datacenter using the original ID/name, specifying the full path of the mountpoint you created under Directory:. Repeat for each disk.
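A quick way to find those UUIDs (a sketch only; /dev/sda1 and /dev/sdb1 are just example device names and may differ on your system):
Code:
# show filesystem type and UUID per partition
blkid /dev/sda1 /dev/sdb1

# or list everything with filesystem details
lsblk -f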

In all of the above, make sure you set Content: to the storage content types you require.
 
OK, so when I added the ZFS pool the only content selections it gave me were Disk image and Container. The ZFS pool was where I stored all the VM backups. When I added the ZFS storage, shouldn't it have given me VZDump backup file so I can restore the VMs?
 
When I added the ZFS storage, shouldn't it have given me VZDump backup file so I can restore the VMs?
Proxmox ZFS storage supports images & rootdir only, as per the official docs here. So you must have had something else going on on top of ZFS to store the backups.
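To illustrate the difference, here is roughly what the relevant entries in /etc/pve/storage.cfg look like (a sketch only; the IDs and path are assumptions based on this thread): a zfspool storage can only carry images/rootdir, while backup files need something like a dir storage on top.
Code:
# /etc/pve/storage.cfg (illustrative excerpt)
zfspool: BACKUPS
        pool BACKUPS
        content images,rootdir

dir: VM-STORAGE
        path /mnt/VM-STORAGE
        content images,backup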
 
Proxmox ZFS storage supports images & rootdir only, as per the official docs here. So you must have had something else going on on top of ZFS to store the backups.
Do you mind walking me through the steps to mount the other storages, VM-STORAGE and ISOIMAGES? I'm kinda new at this and have little Linux experience. Thank you in advance.
 
Do you mind walking me through the steps to mount the other storages, VM-STORAGE and ISOIMAGES? I'm kinda new at this and have little Linux experience.
OK, I'll try & help - time permitting.

I already gave you the basic steps above, so we'll follow those.
So to start with: assuming the drives are physically connected to the server, please post the output of the command lsblk from the host shell. Try and post it here in the forum using the CODE editor, as it's easier to read. You'll find the CODE editor on the formatting bar of your post; it's marked "</>". Press it and enter the output.
 
Code:
root@proxVE:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 476.9G  0 disk
└─sda1               8:1    0 476.9G  0 part
sdb                  8:16   0   1.9T  0 disk
└─sdb1               8:17   0   1.9T  0 part
sdc                  8:32   0   1.8T  0 disk
├─sdc1               8:33   0   1.8T  0 part
└─sdc9               8:41   0     8M  0 part
sdd                  8:48   0   1.8T  0 disk
├─sdd1               8:49   0   1.8T  0 part
└─sdd9               8:57   0     8M  0 part
nvme0n1            259:0    0 476.9G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0     1G  0 part /boot/efi
└─nvme0n1p3        259:3    0 475.9G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   3.6G  0 lvm 
  │ └─pve-data     252:4    0 348.8G  0 lvm 
  └─pve-data_tdata 252:3    0 348.8G  0 lvm 
    └─pve-data     252:4    0 348.8G  0 lvm
 
Code:
root@proxVE:~# lsblk -fs
NAME             FSTYPE  FSVER   LABEL   UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda1             ext4    1.0             c9ba9cd6-77cc-41b9-8252-703090260a65                 
└─sda                                                                                         
sdb1             ext4    1.0             658682c9-6b76-44db-a31f-710b699b7503                 
└─sdb
 
Proxmox ZFS storage supports images & rootdir only, as per the official docs here. So you must have had something else going on on top of ZFS to store the backups.
Oh, and BTW, I know what I did with the ZFS pool... I created the pool.
BTW, this is how I set up my ZFS pool:
https://www.youtube.com/watch?v=oSD-VoloQag&list=PL4a-xiXWjrQigIL-2k49M8dmp_cFuYrbz&index=26
 
Great news! I fixed the ZFS pool; I can now see the backups! Whoop whoop! I just need your help with the other 2 storage drives, ISOIMAGES and VM-STORAGE. Thank you in advance.
 
Assuming the above command provides no result, enter the following:
Code:
mkdir /mnt/isoimages
mkdir /mnt/vmstorage

# Then do the following:

nano /etc/fstab

# Add the following 2 lines

UUID=c9ba9cd6-77cc-41b9-8252-703090260a65 /mnt/isoimages ext4 defaults 0 2
UUID=658682c9-6b76-44db-a31f-710b699b7503 /mnt/vmstorage ext4 defaults 0 2

# Exit & Save (CTRL + X, Y)

# Mount them with:

mount -a

# You can then check what you have with

ls -la /mnt/isoimages
ls -la /mnt/vmstorage
Now you are ready to set up the storages in the GUI.
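If it helps, a couple of commands to sanity-check the mounts before adding the storages in the GUI (a sketch, assuming the mountpoints from the block above):
Code:
# confirm both filesystems are mounted where expected
findmnt /mnt/isoimages
findmnt /mnt/vmstorage

# and check the free space on the new mounts
df -h /mnt/isoimages /mnt/vmstorage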
 
Code:
root@Proxve01:~# mount -a
mount: /mnt/isoimages: mount point does not exist.
dmesg(1) may have more information after failed mount system call.
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
mount: /mnt/vmstorage: mount point does not exist.
dmesg(1) may have more information after failed mount system call.
root@Proxve01:~#
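Those errors just mean the directories named in fstab do not exist yet. A minimal sketch of the fix, assuming you keep the lowercase names from the instructions above:
Code:
# create the missing mountpoints, reload systemd's view of fstab, then retry
mkdir -p /mnt/isoimages /mnt/vmstorage
systemctl daemon-reload
mount -a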
 
Code:
root@Proxve01:~# ls /mnt
ISOIMAGES VM-STORAGE
root@Proxve01:~#
 
OK, I had to change the capitalization in the fstab file, and now I get better results:
Code:
root@Proxve01:~# mount -a
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
root@Proxve01:~# ls -la /mnt/ISOIMAGES
total 44
drwxr-xr-x 8 root root  4096 Jan 20 08:36 .
drwxr-xr-x 4 root root  4096 Jul  9 13:28 ..
drwxr-xr-x 2 root root  4096 Jan 20 08:36 dump
drwxr-xr-x 4 root root  4096 Mar 20 15:22 images
drwx------ 2 root root 16384 Jan 20 08:36 lost+found
drwxr-xr-x 2 root root  4096 Jan 20 08:36 private
drwxr-xr-x 2 root root  4096 Jan 20 08:36 snippets
drwxr-xr-x 4 root root  4096 Jan 20 08:36 template
root@Proxve01:~# ls -la /mnt/VM-STORAGE
total 44
drwxr-xr-x  8 root root  4096 Feb 10 21:19 .
drwxr-xr-x  4 root root  4096 Jul  9 13:28 ..
drwxr-xr-x  2 root root  4096 Feb 10 21:19 dump
drwxr-xr-x 16 root root  4096 Mar 26 13:56 images
drwx------  2 root root 16384 Feb 10 21:19 lost+found
drwxr-xr-x  2 root root  4096 Feb 10 21:19 private
drwxr-xr-x  2 root root  4096 Feb 10 21:19 snippets
drwxr-xr-x  4 root root  4096 Feb 10 21:19 template
root@Proxve01:~#
 
Should I also run the systemctl daemon-reload it mentions?
 
root@Proxve01:~# ls /mnt
ISOIMAGES VM-STORAGE
You are not very good at EXACTLY following instructions. You changed the names (capitalization etc.) from what I told you.
Let's see if everything looks the way it should.
Please show the output for the following:
Code:
cat /etc/fstab
ls /mnt
 
When I set up the drive in the GUI, should the Directory look like this: /mnt/ISOIMAGES?
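For reference, a sketch of the same step from the CLI (equivalent to the GUI dialog; the storage IDs and content types here are assumptions, adjust as needed):
Code:
# add Directory storages pointing at the mounted drives
pvesm add dir ISOIMAGES --path /mnt/ISOIMAGES --content iso,vztmpl
pvesm add dir VM-STORAGE --path /mnt/VM-STORAGE --content images,backup

# verify
pvesm status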
 
You seem to do what you like. Why don't you respond to my posts with ANSWERS to my questions?
 
