Hi Guys,
Soon I want to build a low-power server with one of those Erying motherboards. Since I live in Germany and energy is expensive here, I'm somewhat constrained in the storage setup. The server is only used for an SMB share connected via a 10 Gig SFP+ card. I'll add a list of the available drives at the bottom.

My plan would be to add 1–3 HDDs plus a ZIL and a ZFS special device on an NVMe (two partitions on a 512 GB NVMe). Since the drives consume 3–8 W each, I want to spin them down. For example, with 3× 1 TB in RaidZ1 + the 512 GB NVMe as ZIL and special device: I write to the cache only, and once a day or so (or when the cache is full, or when the drives would have to spin up anyway) the cache gets moved onto the drives. With that setup my drives wouldn't wear as much, and my IOPS would be a lot higher.

What I noticed now is: wouldn't it be much more practical to just write to the NVMe and run a replication task to the drive pool? E.g., SMB share data gets written to the NVMe, moved nightly to the ZFS RaidZ1 array, and then deleted from the NVMe. That would limit me to 512 GB, but that should be okay. The problem: what do I do when I want to access old data that's already on the RaidZ1 array? In that case the drives would have to spin up again, right?

What setup would you use? My goal is to spin the drives down so they only spin up 3–5 times a day.
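The nightly "move to the RaidZ1 array" step could be sketched as a snapshot-based replication job run from cron. This is only a sketch under assumptions: the pool/dataset names (`fast/share` on the NVMe, `tank/share` on the HDD pool) are placeholders, and it assumes ZFS on Linux with both pools local:

```shell
#!/bin/sh
# Hypothetical nightly job: replicate the NVMe landing dataset to the
# RaidZ1 pool. Pool/dataset names are placeholders -- adjust to your layout.

SNAP="nightly-$(date +%F)"

# 1. Snapshot the fast NVMe dataset so we copy a consistent state.
zfs snapshot fast/share@"$SNAP"

# 2. Send it to the HDD pool; incremental if a previous nightly snapshot
#    already exists on the target, otherwise a full initial send.
if zfs list -t snapshot -o name tank/share >/dev/null 2>&1; then
    PREV=$(zfs list -t snapshot -o name -s creation tank/share | tail -1 | cut -d@ -f2)
    zfs send -i @"$PREV" fast/share@"$SNAP" | zfs recv tank/share
else
    zfs send fast/share@"$SNAP" | zfs recv tank/share
fi

# 3. Reclaim NVMe space by pruning old snapshots on the fast pool.
#    (Deleting the files themselves, as described above, would break
#    future incremental sends -- snapshot pruning is the safer variant.)
```

One design note: keeping at least the latest common snapshot on both sides is what makes the nightly sends incremental, so the HDDs only spin up for the changed data rather than a full copy.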
I have the following Drives:
256 GB NVMe (Used for Boot and VM Storage)
3x 1 TB HDD (I could use 1-3 of those in a RaidZ1)
512 GB NVMe (Used for ZIL and ZFS Special Device)
2 TB HDD (1st backup, I have 2 more backups)
500 GB SATA SSD (Could be used)
2x 500 GB SATA HDDs (I could use those, but I don't want to add too many drives)
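The spin-down part of the plan can be configured outside of ZFS, e.g. with `hdparm` on Linux. A minimal sketch, assuming the HDDs show up as `/dev/sda`–`/dev/sdc` (placeholder device names; on your system they may differ, and not every drive honors the APM/standby settings):

```shell
# Hypothetical spin-down config, assuming Linux with hdparm installed.
# For -S, values 241-251 mean (value - 240) * 30 minutes, so
# -S 242 = spin down after 1 hour of inactivity.
for disk in /dev/sda /dev/sdb /dev/sdc; do   # placeholder device names
    hdparm -S 242 "$disk"
done
```

Note that anything touching the pool (scrubs, SMART polling, atime updates) will wake the drives again, so those would need to be scheduled around the spin-down window too.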