Search results for query: raidz padding

  1. L

    All ZFS Pools Showing as 97% Full in PVE

    Yes, the single ZFS is the archive drive. pbs (thinkserver) is the main backup ZFS mirror (thank you for the correction). And the RAIDZ2 (again, thank you) is the remote backup storage external (trashcan). I should've explained my setup earlier, but here it is: Main server (thinkserver) Lenovo...
  2. I

    All ZFS Pools Showing as 97% Full in PVE

    Is that the archive one? Which one is that? Also, there is no RAID1 in ZFS, do you mean mirror? No RAID6 in ZFS, do you mean RAIDZ2? So I don't fully understand your current setup, what is where, and what the problem is. So I will make wild guesses instead. Correct me if I'm wrong. - You...
  3. K

    Langsame 4k Performance in Windows 11 VM

    I would advise against switching to volblocksize=4k. With RAIDZ1 at ashift 12 (the default), you would only be able to use 50% of the capacity. RAIDZ is tricky and a bit more complicated to understand; this is where padding comes into play. With a 16K volblocksize you can use the full 66% of capacity...
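    The 50%/66% figures in this snippet fall out of how RAIDZ rounds every block up to whole sectors, adds parity per stripe row, and pads the total to a multiple of parity + 1. A minimal sketch, assuming a 3-wide RAIDZ1 vdev; the helper name is invented, but the roundup follows OpenZFS's RAIDZ allocation logic:

    ```python
    def raidz_asize(logical_bytes, ndisks, parity, ashift=12):
        """Approximate on-disk allocation of one block on a RAIDZ vdev.

        Illustrative helper (not a real ZFS API): data sectors plus one
        parity sector per stripe row, rounded up to a multiple of
        (parity + 1) so no unusable gap is left behind.
        """
        sector = 1 << ashift
        data = -(-logical_bytes // sector)       # ceil: sectors of real data
        rows = -(-data // (ndisks - parity))     # stripe rows needed
        total = data + rows * parity             # add parity sectors
        total += -total % (parity + 1)           # padding ("skip") sectors
        return total * sector

    # 3-wide RAIDZ1, ashift=12:
    for vbs in (4096, 16384):
        alloc = raidz_asize(vbs, ndisks=3, parity=1)
        print(f"{vbs // 1024}k block -> {alloc // 1024}k on disk "
              f"({100 * vbs / alloc:.1f}% efficient)")
    # 4k block -> 8k on disk (50.0% efficient)
    # 16k block -> 24k on disk (66.7% efficient)
    ```

    This is why a 4k volblocksize on RAIDZ1 wastes half the pool while 16k reaches the expected two-thirds.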
  4. A

    Best RAID for ZFS in Small Cluster?

    If by redundancy you mean disk fault tolerance, the higher the value after "raidz" the higher the fault tolerance. In practice, raidz2+ (never use single-parity raidz unless prepared to lose the pool at any time) performance = striped mirrors, full stop. If you wish to sacrifice some performance...
  5. A

    Best RAID for ZFS in Small Cluster?

    I get that it's not great for VMs, but the IT here used to have the servers on bare metal with no redundancy for the data. I went with Proxmox and RAIDZ1 because of redundancy. I can do hardware RAID as all the hosts are able to, but went with software (Proxmox) RAID out of the convenience of doing...
  6. leesteken

    Best RAID for ZFS in Small Cluster?

    RAID5 (with BBU) is almost, but not quite, entirely unlike RAIDz1 (with drives without PLP). RAIDz1/2/3 is not good for VMs but can give you more usable storage (but padding can be terrible for space): https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/#post-734639
  7. I

    Fragen zu Ext4 vs ZFS

    The better question, IMHO, would be: "Does ZFS have noticeably greater wearout than ext4?" By my current estimate, my SSDs will last 18 and 34 years until wearout. Would I have less wearout with ext4, because with the ZIL I write everything twice and because I sometimes get write amplification with 4k writes in...
  8. I

    Homeserver Rebuild

    Nope. Disabled swap or ZRAM is better, sure, but still far from a reason for an SSD dying. What we actually have here is a strange reversal of the burden of proof. You claim, without justification, that ZFS causes wearout. It is not up to me to justify why that is a myth; you have to...
  9. T

    Win11 Festplatte: gelöschte Bereiche werden nicht freigegeben

    I see that differently. Thanks to deduplication, even three-digit TB amounts can be backed up very effectively and quickly (with Proxmox, e.g., using a PBS). It only gets ugly once you have to back up large volumes of frequently changing data; how ugly depends on the connection. When restoring, of course, you need...
  10. I

    Win11 Festplatte: gelöschte Bereiche werden nicht freigegeben

    There are more, but these are the two most important points IMHO: A: You practically cannot back up a VM sensibly with on-board tools anymore once you have a 16TB Windows & Plex VM. B: Block storage has a static volblocksize of 16k by default. Datasets have an upper limit...
  11. UdoB

    ZFS storage is very full, we would like to increase the space but...

    Yeah, that's no secret :-) Unfortunately this does not mean that I am a ZFS expert..., there are too many internal details I do not understand. Well, the most important point is that there is no automatic re-balancing. Already-stored data will stay where it is. (For datasets there is "man...
  12. C

    ZFS storage is very full, we would like to increase the space but...

    Hi, actually the thread has become too technical for my current knowledge, but it was still a pleasure to read it even though I understood very little. On this Proxmox server, we have a Windows server that acts as a file server and about 5 VMs that act as clients. We would like to increase the...
  13. I

    ZFS storage is very full, we would like to increase the space but...

    That question is an oversimplification ;) But most likely it would rise, of course. A block compressed from, let's say, 128k down to 100k can make use of wider stripes and fit the pool geometry more easily, padding has less impact, and so on. But nobody would use 128k for VMs. We have a saying in...
  14. A

    ZFS storage is very full, we would like to increase the space but...

    Most likely yes. It took me a second reading of the above discussion to realize they were no longer speaking in terms of your question. The answer to your question depends on other factors you didn't mention, namely: 1. are you trying to improve virtual machine performance? if not, bigger...
  15. M

    ZFS storage is very full, we would like to increase the space but...

    Your clarifications are helpful, and I agree on several important points: We are talking about zvols, not datasets. Zvols have a static volblocksize, and unlike datasets they cannot coalesce multiple logical blocks into a single larger record. TXG batching does not change the on-disk RAIDZ...
  16. I

    ZFS storage is very full, we would like to increase the space but...

    That is not what it does. It unrealistically assumes that a 16k volblock is not compressible and is therefore always a 16k write. It does not assume every write to be a tiny 4k block. It assumes a 4k sector size. I did not know about that. Do you have a link for that? Are you sure you are not...
  17. I

    ZFS storage is very full, we would like to increase the space but...

    Sorry, I misunderstood. In that case you can leave it. Sure. So Proxmox uses the good default of 16k volblocksize. That means all your VMs' ZFS raw disks are offered 16k blocks. Now let's look at how ZFS provides 16k blocks. You have RAIDZ1. The sector size of your SSDs is 4k. So each 16k...
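    The per-block arithmetic this snippet is walking through (a 16k block split into 4k sectors on RAIDZ1) can be sketched as follows. The 3-disk width is an assumption, the function name is invented, and the round-up-to-an-even-sector-count rule mirrors how OpenZFS avoids unusable single-sector gaps on RAIDZ1:

    ```python
    def raidz1_block_layout(block_bytes, width, ashift=12):
        """Sector breakdown for one block on RAIDZ1 (parity = 1).

        Illustrative only: returns (data, parity, padding) sector counts,
        with the total padded to a multiple of parity + 1 = 2.
        """
        sector = 1 << ashift
        data = -(-block_bytes // sector)     # ceil: sectors of real data
        parity = -(-data // (width - 1))     # one parity sector per row
        pad = -(data + parity) % 2           # skip sector if total is odd
        return data, parity, pad

    data, parity, pad = raidz1_block_layout(16 * 1024, width=3)
    print(f"data={data} parity={parity} pad={pad} sectors")
    # data=4 parity=2 pad=0 sectors
    ```

    So on a 3-wide RAIDZ1 each 16k block occupies 4 data sectors plus 2 parity sectors: 24k on disk for 16k of payload, i.e. the two-thirds efficiency mentioned elsewhere in these results.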
  18. N

    MSA 2060 SAN FC with single server (no shared access)

    Thanks for the reply! I just didn't quite understand one thing: If I'm able to switch the disks to JBOD / IT / Passthrough mode, and I have, say, 8 of them — how should I organize them in ZFS, if you're advising against creating a RAIDz array? (As I understand it, RAIDz is a software array that...
  19. VictorSTS

    MSA 2060 SAN FC with single server (no shared access)

    PVE will let you choose ZFS even if you have hardware RAID, but you will be running a fully unsupported configuration, and when issues arise no one will be able or willing to provide support. Check your hardware: many SmartArray controllers allow you to either change their personality to JBOD / IT / Passthrough mode or...
  20. P

    ZFS/ProxMox Disk Size Weirdness

    I think I see what you're saying. I'm new to ZFS and trying to learn about the things you are mentioning.