Storage explanation help

GDumanov

New Member
Dec 16, 2024
Hello,

This is my first time working with Proxmox, and I can't explain the following issue. How come both 'local' and 'local-zfs' are approximately the same size?

The configuration is 3x 256GB SSDs and 3x 4TB HDDs. My idea is to use the SSDs in a RAIDz1 setup for Proxmox and VMs, and the HDDs in a RAIDz1 setup for storage.

If I have installed Proxmox correctly and understand it properly, 'local' and 'local-zfs' together should come to ~500GB.

Is there a way to figure out which partition is using which disks?

Thank you.
 

Attachments

  • HDD5.jpg (32.8 KB)
  • rpool.jpg (37 KB)
  • Storage.jpg (102.2 KB)
  • HDD5-Status.png (25 KB)
  • local-status.png (29.4 KB)
  • local-zfs-status.png (29.1 KB)
> This is my first time working with Proxmox, and I can't explain the following issue. How come both 'local' and 'local-zfs' are approximately the same size?
Please see this thread: https://forum.proxmox.com/threads/zfs-with-a-single-disk-and-a-strange-volume.157989/post-723737

> The configuration is 3x 256GB SSDs and 3x 4TB HDDs. My idea is to use the SSDs in a RAIDz1 setup for Proxmox and VMs, and the HDDs in a RAIDz1 setup for storage.

> If I have installed Proxmox correctly and understand it properly, 'local' and 'local-zfs' together should come to ~500GB.

> Is there a way to figure out which partition is using which disks?
They are on the same. The other 3 drives are probably not used yet. Check with zpool status for example.

Please also be aware that RAIDz1 works completely differently from hardware RAID5 with BBU, and the available space and performance will disappoint, especially for VMs (which has been discussed many times before on this forum).
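As a rough illustration of the capacity side: RAIDz1 spends one disk's worth of space on parity per vdev, so a back-of-envelope estimate (ignoring ZFS metadata and the extra padding overhead RAIDz1 adds to zvol-backed VM disks, which can shrink the usable figure noticeably) looks like this:

```shell
# Back-of-envelope usable capacity for a single RAIDz1 vdev.
# Real usable space will be lower due to metadata, slop space,
# and RAIDz padding overhead on zvols.
n=3            # disks in the vdev
disk_gb=256    # capacity of each disk, in GB
usable=$(( (n - 1) * disk_gb ))
echo "raw: $(( n * disk_gb )) GB, approx usable: ${usable} GB"
```

For the 3x 256GB SSDs this gives roughly 512GB before overhead, which is in the same ballpark as the ~500GB the pool reports.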
 
This is my zpool status
Code:
root@pve:~# zpool status
  pool: HDD5
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        HDD5                                 ONLINE       0     0     0
          raidz1-0                           ONLINE       0     0     0
            ata-ST4000VX016-3CV104_WW64YKVX  ONLINE       0     0     0
            ata-ST4000VX016-3CV104_WW64YKTN  ONLINE       0     0     0
            ata-ST4000VX016-3CV104_WW62ASWT  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

        NAME                                                          STATE     READ WRITE CKSUM
        rpool                                                         ONLINE       0     0     0
          raidz1-0                                                    ONLINE       0     0     0
            ata-SPCC_Solid_State_Disk_AA000000000000007940-part3      ONLINE       0     0     0
            ata-SK_hynix_SC300_2.5_7MM_256GB_FJ5CN42291110C524-part3  ONLINE       0     0     0
            ata-MTFDDAK256TDL-1AW1ZABFA_1947252BE2EA-part3            ONLINE       0     0     0

errors: No known data errors


> They are on the same.
What do you mean by that?


I have a suspicion/concern that one of the two (local or local-zfs) is part of the HDD RAIDz1 pool.

Regards
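One way to check which pool each Proxmox storage points at is /etc/pve/storage.cfg. The entries below are illustrative of a default ZFS installation, not copied from this machine; note that both 'local' (a directory on the root filesystem) and 'local-zfs' (the rpool/data dataset) end up backed by rpool, not by the HDD pool:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
```

A storage backed by the HDD5 pool would instead show `pool HDD5` (or a dataset under it) in its entry.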
 
Yes, miscommunication, sorry.


They are both on the rpool and share storage space. EDIT: Unless you changed the local and/or local-zfs storage, but then you would know.
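You can see the shared free space directly from ZFS. In a default install, 'local' lives on the root filesystem (typically part of rpool/ROOT) and 'local-zfs' is rpool/data; both report the same AVAIL because free space is accounted pool-wide, which is why the two storages show approximately the same size in the GUI:

```shell
# List all datasets on rpool with their usage.
# AVAIL is the same for every dataset on the pool (minus any
# quotas/reservations), since they all draw from one free-space pool.
zfs list -r -o name,used,avail,mountpoint rpool
```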
What do you suggest as a solution to the performance issue? Should I not use 3x SSD in RAIDz1?
If not, how should I set them up to have some level of redundancy?


Excuse me, but I'm a total newbie.
 
> What do you suggest as a solution to the performance issue? Should I not use 3x SSD in RAIDz1?
> If not, how should I set them up to have some level of redundancy?

> Excuse me, but I'm a total newbie.
For VMs, you probably want to optimize for (random) IOPS instead of sequential throughput (and compression), and align the volblocksize and ashift with the block size used by the operating systems and software inside the VMs, to minimize write amplification. ZFS is not specific to Proxmox, and there is a lot of information on the internet and on this forum as well.
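A sketch of how to inspect and adjust those two knobs (commands assume a default Proxmox/ZFS setup; check the pvesm and zfsprops man pages for your version):

```shell
# ashift is the pool's sector-size exponent (12 = 4K sectors).
# It is fixed at pool creation and cannot be changed afterwards.
zpool get ashift rpool

# The zvol block size for new VM disks is a per-storage setting in
# Proxmox ('blocksize' in storage.cfg, or the web UI's Block Size
# field). It only affects disks created after the change.
pvesm set local-zfs --blocksize 16k
```

On redundancy: a common alternative mentioned in such discussions is a pool of mirrors instead of RAIDz1, which trades some capacity for better random IOPS while still tolerating a disk failure.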
 