[SOLVED] ZFS Issues

unkn0wnDnS

New Member
Mar 23, 2023
Hi all,

I use external Thunderbolt storage as ZFS for replication and HA, and I was setting up a new node with 2x 4TB SATA drives connected (mainly because one other node has some storage issues).
Now I wanted to add them as a raid0 so I'd have 8TB of ZFS.

Without thinking or reading the documentation, I tried multiple methods of creating a raid0, but Proxmox wouldn't recognize it because it's not a logical volume, so I worked around that, and around again.
Eventually I went into the CLI and used the zpool command:
zpool create -o ashift=12 Storage /dev/sda /dev/sdb

After going back and forth, I got errors about 'Storage' not being found.
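(In hindsight, zpool create only makes the pool itself; PVE won't see it until it's also registered as a storage, via Datacenter > Storage > Add > ZFS or pvesm. A quick way to confirm the pool at least exists at the ZFS level, assuming the name Storage:)
Code:
zpool list Storage   # does the pool exist at all?
zfs list Storage     # is the root dataset present?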

Again... without thinking, I started to remove replication jobs and the ZFS storage on the nodes, even from the CLI, force-removing them.

Now I'm in a state where the VMs on my node are online and running, but I'm unable to move them to other nodes because of the broken ZFS.

But I'm also unable to move the VM disks to local storage, as it shows me "'Storage' is not available on node proxmox01".

I'd like to:
- mount an additional drive on proxmox01 (and proxmox03, because there are a few VMs running there as well)
- move the VMs' disks from the ZFS Storage to that drive without breaking anything (see the sketch after this list)
- set up ZFS again from scratch, move the disks back onto ZFS, and hopefully be able to sync and migrate between nodes again
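For the disk moves, a minimal sketch of what I have in mind, assuming a VM with ID 100 and a disk scsi0 (hypothetical IDs):
Code:
# move one VM disk off the broken ZFS storage onto local-lvm
qm move_disk 100 scsi0 local-lvm
# repeat per disk/VM, then tear down and recreate the ZFS storage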

Hopefully someone can help me out; I'm out of ideas.
 
Have you created the storage in PVE besides creating the actual ZFS pool?
Not sure what you mean. The nodes where I'm stuck are 01 and 03; that's where I mounted the Thunderbolt G-RAID and then, from the web GUI, wiped the drive and created the ZFS.
But when I try any action on the VMs (with their disks on the ZFS), I get the error that 'Storage' (my ZFS name) is not available.
Under pvesm status it shows as disabled.
And under zpool status it's online.
(At least on proxmox01.)
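For anyone hitting the same mismatch, comparing the two views side by side exposes it (a minimal check, assuming the pool/storage is named Storage):
Code:
pvesm status          # what PVE thinks of its configured storages
zpool status Storage  # what ZFS itself reports for the pool
zfs list -r Storage   # datasets and mountpoints under the pool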

If anyone has a great idea, I'd love to hear it, because I'm stuck here.
 
The output of: cat /etc/pve/storage.cfg might be helpful.
 
Sure thing:
Code:
root@snl-proxmox01:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

{DEL:} zfspool: Storage
{DEL:}    pool Storage
{DEL:}    content images,rootdir
{DEL:}    mountpoint /Storage
{DEL:}    nodes snl-brd-macpro02

Code:
root@snl-proxmox01:~# zpool status
  pool: Storage
 state: ONLINE
  scan: scrub repaired 0B in 01:01:04 with 0 errors on Sun Aug 11 01:25:06 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    Storage                                         ONLINE       0     0     0
      ata-G-RAID_with_Thunderbolt_DBDFFE40D4000000  ONLINE       0     0     0
errors: No known data errors

Code:
root@snl-proxmox01:~# pvesm status
Name             Type     Status           Total            Used       Available        %
{DEL:} Storage       zfspool   disabled               0               0               0      N/A
local             dir     active        98497780        14918476        78529756   15.15%
local-lvm     lvmthin     active       833728512               0       833728512    0.00%

I just updated this post (see the {DEL:} lines); apparently the /etc/pve/storage.cfg on the 01 showed nodes proxmox-02.
I had been deleting the ZFS storage there after I migrated the servers to other nodes, because I want to take that host down to my office to diagnose its storage drives, as it keeps causing cluster issues.
After the issues above, I recreated the storage, and I guess I had 'Add storage' checked.

After removing the ZFS on the 02, it disappeared from proxmox01, where I currently have the most important hosts. (Which makes sense in hindsight, since /etc/pve/storage.cfg is shared cluster-wide.)

So it seems that the zfspool: Storage entry has disappeared from the 01.
 
So, your ZFS pool Storage is physically connected and imported/mounted on your PVE node snl-proxmox01, but you restrict that storage in the PVE storage configuration to your node snl-proxmox02 instead of snl-proxmox01.

If there are further problems, you might want to specify your setup, especially the storage part of all nodes, in more detail. (E.g. which nodes have which physical storage connected, what filesystem(s) they use and, in the case of ZFS, what their pool name(s) are.)
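A minimal sketch of what correcting that restriction could look like, assuming the storage ID is Storage (adjust the node name to yours):
Code:
# point the storage at the node that actually has the pool...
pvesm set Storage --nodes snl-proxmox01
# ...or drop the node restriction entirely
pvesm set Storage --delete nodes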
 
Hey, sorry, I just updated the post.

So I started off with 01, added the Thunderbolt raid to it physically, and through the web GUI I added the ZFS as Storage (with the checkbox checked).
On the other two nodes, 02 and 03, I only added the ZFS with similar raid drives, but without checking the checkbox.
Each used the same pool name: Storage.

Everything worked great until I wanted to introduce 04 with internal SATA drives as raid (2x 4TB, just like the G-RAID ones).
After screwing around getting it set up as raid0 and recognised in Proxmox, I ended up in the current situation :( where 01 holds the VM disks on the ZFS Storage.

How do I restore my ZFS, or otherwise move the VMs to other storage, so I can delete the ZFS, recreate it, move the VMs back, and sync again?
 
Just found this post: https://forum.proxmox.com/threads/zfs-shows-in-disks-menu-but-not-showing-in-storage-list.92542/
And I thought it wouldn't break much...

So I re-added it via Datacenter > Storage > Add > ZFS with the same name.
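(For anyone who prefers the CLI, the equivalent of that GUI step would presumably be something like this, assuming the pool still exists and is named Storage:)
Code:
# re-register the still-existing ZFS pool as a PVE storage entry
pvesm add zfspool Storage --pool Storage --content images,rootdir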

Code:
root@snl-proxmox01:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

zfspool: Storage
    pool Storage
    content rootdir,images
    mountpoint /Storage
    sparse 0

And now at least backup and migrate are working again.

Everything seems to work just fine again! Thanks
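Since the replication jobs were deleted earlier, those would need to be recreated as well; a minimal sketch, assuming VM 100 replicating to node proxmox03 every 15 minutes (hypothetical IDs):
Code:
# recreate a storage replication job for VM 100 targeting proxmox03
pvesr create-local-job 100-0 proxmox03 --schedule '*/15'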
 
