Deleted ZFS storage by mistake

janhw

New Member
Jul 23, 2019
Hello,
I really hope somebody can help me. I created a test storage on my pool and accidentally deleted my "productive" ZFS storage, on which all my VMs' disks are stored. I can still access the VMs, and one is even downloading quite a lot of data right now. Is there a way I can undo this? :/

Best regards,

Jan
 
I just checked again; zfs list gives me the following output:
Code:
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                 950G  4.19T   151K  /rpool
rpool/ROOT           5.06G  4.19T   140K  /rpool/ROOT
rpool/ROOT/pve-1     5.06G  4.19T  5.06G  /
rpool/data            140K  4.19T   140K  /rpool/data
rpool/swap           4.25G  4.19T   733M  -
rpool/vm-100-disk-0  51.6G  4.23T  6.41G  -
rpool/vm-101-disk-0  33.0G  4.21T  8.93G  -
rpool/vm-102-disk-0   516G  4.52T   171G  -
rpool/vm-103-disk-0   309G  4.40T  91.9G  -
rpool/vm-104-disk-0  30.9G  4.21T  5.24G  -

I gave the storage a name and pointed it directly at "rpool". Does this mean I can simply re-create it?
Sorry for the noob questions; I just nearly had a heart attack, and I'm new to ZFS.
 
Hi Alex,
thanks for your reply, but the command just gives me "No pools available to import". All VMs are still working, and in the VM config I can still see the disk referencing the removed storage, like FORMER-NAME:vm-104-disk-0,
and as stated above I can still see the images on the pool. Do you know what that means for me?
Best regards,

Jan
 
Under Datacenter -> Storage, I deleted the ZFS storage.

Here are the outputs:

lsblk:
Code:
# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0  1.8T  0 disk
├─sda1     8:1    0 1007K  0 part
├─sda2     8:2    0  1.8T  0 part
└─sda9     8:9    0    8M  0 part
sdb        8:16   0  1.8T  0 disk
├─sdb1     8:17   0 1007K  0 part
├─sdb2     8:18   0  1.8T  0 part
└─sdb9     8:25   0    8M  0 part
sdc        8:32   0  1.8T  0 disk
├─sdc1     8:33   0 1007K  0 part
├─sdc2     8:34   0  1.8T  0 part
└─sdc9     8:41   0    8M  0 part
sdd        8:48   0  1.8T  0 disk
├─sdd1     8:49   0 1007K  0 part
├─sdd2     8:50   0  1.8T  0 part
└─sdd9     8:57   0    8M  0 part
zd0      230:0    0    4G  0 disk [SWAP]
zd16     230:16   0   50G  0 disk
├─zd16p1 230:17   0  350M  0 part
├─zd16p2 230:18   0    4G  0 part
├─zd16p3 230:19   0  1.5G  0 part
├─zd16p4 230:20   0    1K  0 part
├─zd16p5 230:21   0 16.1G  0 part
├─zd16p6 230:22   0  5.4G  0 part
├─zd16p7 230:23   0 21.1G  0 part
└─zd16p8 230:24   0  1.2G  0 part
zd32     230:32   0   32G  0 disk
└─zd32p1 230:33   0   32G  0 part
zd48     230:48   0  500G  0 disk
zd64     230:64   0  300G  0 disk
zd80     230:80   0   30G  0 disk

zpool status:
Code:
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sda2    ONLINE       0     0     0
        sdb2    ONLINE       0     0     0
        sdc2    ONLINE       0     0     0
        sdd2    ONLINE       0     0     0

errors: No known data errors
 
Everything looks good. All your disks are accounted for and your pool is intact. If you had deleted a dataset, e.g. rpool/productive, it would be gone, but your VM disks are clearly not in such a dataset; they sit directly on rpool.
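Since the entry you removed under Datacenter -> Storage is only a storage definition in /etc/pve/storage.cfg, not the data itself, re-adding it should be enough to make the VM disks usable again. Roughly something like this (just a sketch; FORMER-NAME stands for whatever the old storage ID was in your VM configs, and I'm assuming it pointed at rpool):
Code:
# re-create the storage definition only -- this does not touch any data on the pool
pvesm add zfspool FORMER-NAME --pool rpool --content images,rootdir
Alternatively, re-create it in the GUI the same way you created the test storage, just reusing the old ID so the existing disk references in the VM configs resolve again.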
 
Hi Alex,
thanks for that great help! I created the storage under Datacenter -> Storage -> ZFS and use it to store the VMs' disks.
Is this the performant way of using ZFS? I've read about some issues with raw, qcow2, and ZVOLs, which was a bit confusing for a beginner.

Regards,

Jan
 
Is this the performant way of using ZFS?
Not for this use case. You'd get better results from a striped mirror. I'm also guessing your disks are 2 TB spinners, which will result in pretty poor performance generally.
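For comparison, a striped mirror (RAID10-style layout) out of four disks is built roughly like this; this is only a sketch, the pool name and device paths are placeholders, and for a root pool you'd normally let the Proxmox installer lay it out:
Code:
# sketch: two mirrored pairs striped together (placeholder pool and device names)
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/DISK-A /dev/disk/by-id/DISK-B \
    mirror /dev/disk/by-id/DISK-C /dev/disk/by-id/DISK-D
That gives you two disks' worth of usable space instead of three compared to your raidz1, but much better random IOPS for VM workloads.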

I've read about some issues with raw, qcow2, and ZVOLs
Can't respond without specifics, but ZFS and qcow2 are effectively mutually exclusive, which means you are either using ZFS or qcow2 on top of a non-CoW filesystem.
 
Not for this use case. You'd get better results from a striped mirror. I'm also guessing your disks are 2 TB spinners, which will result in pretty poor performance generally.
Is there a way I can change this? You are correct: 4 x 2 TB, they are HGST HUS726020ALA610.
From my feeling the speed is quite okay, though; maybe I'll run some benchmarks.
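If I do benchmark, something like fio from inside one of the VMs is probably more telling than a plain dd; just a sketch, the test file path, size, and runtime are arbitrary:
Code:
# mixed 4k random read/write for 60s against a 4 GiB test file (placeholder path)
fio --name=randrw --filename=/root/fio-test.bin --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting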
 
As your rpool is your root file system, changing it now would require a complete reinstall.

What I would suggest is to deploy an additional two disks (SSDs) to house your root file system and the more performance-critical VMs; the rest of your slow disks can be redeployed for file-sharing duties plus VM disk space for less performance-sensitive VMs. The SSDs don't have to be large (or expensive).
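Once the spinners are freed up and re-created as a separate pool (e.g. the striped mirror sketched above), hooking them back into Proxmox is only a couple of commands; the pool, dataset, and storage names below are placeholders:
Code:
# sketch: a dataset for file sharing plus a storage entry for the slower VM disks
zfs create slowpool/share
pvesm add zfspool slow-vms --pool slowpool --content images,rootdir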
 
