Hi, my previous Proxmox installation was causing me loads of issues... it was dropping offline about once per day, and clients would randomly lose their connection during the day. I decided the problems could have been caused by my previous tinkering, so I wanted to start fresh.
I previously had a single ZFS RAID storage disk with everything installed; I used Proxmox to create the ZFS volume. I thought I would install another HDD, install a fresh copy of Proxmox on it, and hope that it would pick up the old ZFS drive and let me import the old guests onto the new installation.
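For context, this is roughly what I expected to have to do from the new install's shell to bring the old pool across (I'm not sure these are the right commands for my setup, and "rpool" is just what the old pool was called):

Code:
# list pools that are available for import but not yet imported
zpool import

# import the old pool, forcing it since it was last used by the previous install
zpool import -f rpool

# check that the pool came in and its datasets are visible
zpool status
zfs list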
Unfortunately it has not worked out that way. I can see the disks under PVE > Disks with a Usage value of ZFS.
If I open the shell and run zfs list, I get the following output:
Code:
root@pve:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         1.98T   586G   151K  /rpool
rpool/ROOT                    6.61G   586G   140K  /rpool/ROOT
rpool/ROOT/pve-1              6.61G   586G  6.61G  /
rpool/data                    1.97T   586G   151K  /rpool/data
rpool/data/subvol-103-disk-0  1.12G  30.9G  1.06G  /rpool/data/subvol-103-disk-0
rpool/data/subvol-105-disk-0  1.25G  30.7G  1.25G  /rpool/data/subvol-105-disk-0
rpool/data/vm-100-disk-0      5.59G   586G  5.59G  -
rpool/data/vm-101-disk-0      5.72G   586G  5.72G  -
rpool/data/vm-102-disk-1      1.96T   586G  1.96T  -
rpool/data/vm-104-disk-0      1.49G   586G  1.49G  -
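In case it helps, these are the next things I was planning to try: checking whether the datasets are actually mounted, and then adding the pool to the new Proxmox as storage. I'm not sure this is the right approach, and the storage ID "old-zfs" is just a name I made up:

Code:
# check whether the container subvolumes are actually mounted
zfs get mounted,mountpoint rpool/data/subvol-103-disk-0 rpool/data/subvol-105-disk-0

# mount any ZFS filesystems that aren't mounted yet
zfs mount -a

# register the existing dataset as ZFS storage in Proxmox
pvesm add zfspool old-zfs --pool rpool/data --content images,rootdir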
I'm guessing it's just not mounted properly or something... could anybody please point me in the right direction?