ZFS "problem"

Does that mean I lose all my data?
It depends on how you do it. With enough free space you could copy every dataset/zvol one after another and delete the old one afterwards; that way you don't lose your data. Or you could copy all the data to another pool that can fit your 120TB, destroy everything, and move it back later.
There are multiple ways to do this, but you will have to rewrite everything.
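For example, a minimal sketch of the copy-and-delete approach using zfs send/receive ("tank" and "media" are placeholder names, not from this thread):

Code:
# Snapshot the existing dataset, then copy it to a new dataset on the same pool:
zfs snapshot -r tank/media@migrate
zfs send -R tank/media@migrate | zfs receive tank/media_new
# Once the copy is verified, remove the old dataset and rename the new one:
zfs destroy -r tank/media
zfs rename tank/media_new tank/media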
 

Do I need to move this to a different pool, or can it be moved within the pool?
 
Puh... I have 50TB free on my 120TB pool.
So this will take a few days?
Yes, probably.

It's always better to spend some time researching before ordering hardware and starting to use a pool. That saves a lot of money and time.

It could be the same pool, but it needs to be a new dataset/zvol so the data gets copied and not simply moved.
 

One thing I want to understand.
I have mounted my pool under /home/media.
In this media folder I have:
Folder1 - 20TB in this folder
Folder2 - 20TB in this folder
Folder3 - 20TB in this folder

Now I create the special disk. Can I just copy each folder into a new folder in the same pool and then move it back? Is that it, or is there a special command I have to use?
 
You need to move them between datasets. If you didn't create a dataset for each folder via the CLI, this won't help.
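A rough sketch of what that could look like ("tank" is a placeholder pool name; its root dataset is assumed to be mounted at /home/media as described above):

Code:
# One dataset per folder; it mounts under the pool's mountpoint:
zfs create tank/movies                  # becomes /home/media/movies
# Copy the files in (crossing a dataset boundary rewrites the data),
# then remove the source once the copy is verified:
rsync -a /home/media/Folder1/ /home/media/movies/
rm -rf /home/media/Folder1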
 
Reorganize your pool and start working with datasets, then move the files from your folders into the datasets.
You should really have a look at the ZFS basics and try to understand concepts like datasets/zvols/vdevs/recordsize/compression/replication/snapshots and so on, to be able to make the best use of your storage.
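For instance, a couple of those concepts are just per-dataset properties on the CLI (dataset names are placeholders):

Code:
zfs set compression=lz4 tank/media            # transparent, cheap compression
zfs set recordsize=1M tank/media/movies       # large records suit big sequential files
zfs get recordsize,compression tank/media/movies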
 
Ok, thank you.
I'll start by adding the special device to the pool.
After that I'll create datasets (now I know what they are :D).
Then I'll move my folders into the datasets and check whether the special device works properly.
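For reference, adding a mirrored special vdev to an existing pool looks roughly like this (pool and device names taken from the output further down; verify against your own setup):

Code:
zpool add Plex special mirror /dev/sdn /dev/sdo
# Note: the special vdev only receives metadata/small blocks written after
# it is added, which is why the existing data has to be rewritten to benefit.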
 
I have now created two raidz1 vdevs (2x 4 HDDs) and a special device with 2x 1TB (I will add another 2x 1TB next week when the new SSDs arrive). And I will add an extra vdev if I run out of space.
For now I can tell it's MUUUUUUUUCH better. Before, my disk IO was at 95-100% all the time; now the same tasks sit at around 15-20%.
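One way to check that, assuming the pool is named Plex as in the output below:

Code:
zpool iostat -v Plex 5    # per-vdev load, refreshed every 5 seconds
zpool list -v Plex        # allocation on the special mirror shows up here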
 
Understanding those basics is very crucial. You definitely want a lot of datasets to organize the different data you store, so that you have fine-grained control over snapshots.
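For example (hypothetical dataset names), snapshots then stay per dataset:

Code:
zfs snapshot tank/media/movies@daily
zfs rollback tank/media/movies@daily    # rolls back movies only, other datasets untouched
zfs list -t snapshot -r tank/media      # snapshots listed per dataset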
 
Now I'm thinking about raidz2 or raid10. What about special devices for raid10?
Raid10 for performance and faster disk replacement, raidz2 if you care about your data and capacity. Striped raidz1 is somewhere in between.

How many do I need at minimum? 2?
2/4/6/... for a raid10 and 3/6/9/... for a raidz2; the special vdev should match the pool's redundancy. Since a raidz2 allows you to lose any 2 HDDs, it would be bad to have only two special device SSDs in a mirror: the whole pool would be lost as soon as the second SSD starts failing.
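As a sketch, a raidz2 pool whose special vdev matches that redundancy would use a 3-way SSD mirror (all device names hypothetical):

Code:
zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1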
 
Hi,

I have a small problem with my ZFS pool. I have a 120TB pool of HDDs with downloaded ISOs on it. I assigned 256GB of RAM to it, and everything works fine until the RAM cache is "full". After that I get a lot of drops in download speed. When I restart the server and the RAM cache is empty, the speed is back to full.
Is there any solution for this?


A not-so-fancy way is to clear the ZFS cache when you know you need to, manually or via a script. But that would hurt your read speed whenever you read data from the pool that isn't in RAM... You can also add NVMe storage as a read cache instead; I guess that's better than reading from HDD.
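Adding such a read cache (L2ARC) is a one-liner; pool and device names here are examples:

Code:
zpool add tank cache /dev/nvme0n1
# an L2ARC device can be removed again at any time without data loss:
zpool remove tank nvme0n1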
 
No need to delete any cache.
With this setup it's running without any problems anymore.

Code:
NAME                                         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Plex                                        132T  13.1T   119T        -         -     0%     9%  1.00x    ONLINE  -
  raidz1-0                                 65.5T  5.86T  59.6T        -         -     0%  8.95%      -    ONLINE
    sde                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdf                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdg                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdh                                    16.4T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                 65.5T  7.23T  58.2T        -         -     0%  11.0%      -    ONLINE
    sdi                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdj                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdk                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdl                                    16.4T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-2                                  888G  14.4G   874G        -         -     1%  1.62%      -    ONLINE
    sdn                                     894G      -      -        -         -      -      -      -    ONLINE
    sdo                                     894G      -      -        -         -      -      -      -    ONLINE
 
