In the server's BIOS there is a setting that controls how the server behaves when power is lost and later restored. Normally there are three options: power on when power returns (which sounds like your case, since the UPS supplies sufficient power), stay powered off, or return to the previous state.
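If the server has a BMC, you can usually inspect and change the same policy from the running OS with ipmitool; a minimal sketch, assuming IPMI is available and the ipmitool package is installed:

# show the power restore policies the BMC supports
ipmitool chassis policy list
# power on automatically whenever power is restored
ipmitool chassis policy always-on
# or: return to whatever state the server was in before the outage
ipmitool chassis policy previous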
With a SAN there is no need for snapshots and thin provisioning in Proxmox, since both can be done on the SAN side. And as a side note: when using a SAN you should do your storage administration on the SAN side, using whatever tools/GUI the SAN provides, since the SAN knows far more about the storage than Proxmox does.
What about the option --threads=0..200? Have any benchmarks been performed adjusting the number of threads used?
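A quick way to answer that yourself is to time the compressor at different thread counts; a minimal sketch, assuming the compressor in question is zstd and that vzdump-sample.vma is a placeholder for a real backup file:

# compare wall-clock time at 1, 2, 4, 8 and 16 threads
for t in 1 2 4 8 16; do
    echo "threads=$t"
    time zstd -T$t -3 -c vzdump-sample.vma > /dev/null
done

Throughput usually stops scaling once the thread count exceeds the number of physical cores, so values far above that (e.g. 200) rarely help.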
There is also this option, which could reduce the archive size further at the cost of increased decompression time:
--[no-]sparse
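For example, with GNU tar (an assumption on my part; whether this applies depends on which archiver your backup mode actually uses) the option stores holes in sparse files instead of long runs of zeroes:

# archive a VM disk directory, storing sparse regions efficiently (GNU tar)
tar --sparse -czf vzdump-sample.tar.gz /var/lib/vz/images/100

The path /var/lib/vz/images/100 is just a placeholder here.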
Something must be broken in the FreeNAS-API. Try asking the developer of the FreeNAS-API what is wrong. Using ZFS over iSCSI with Comstar works flawlessly here.
As always, running fio with a job file like this gives you a comparable test:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
# 80% random reads, 20% random writes, bypassing the page cache
rwmixread=80
direct=1
size=4g
ioengine=libaio
iodepth=64
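Save the above as, e.g., iometer.fio (the filename is just a suggestion) and run it from a directory on the storage you want to test:

fio iometer.fio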
This is true. As soon as you have assigned a disk to a VM, the content of the storage shifts from 'none' to 'images', at which point the storage will show as available for 'images'.
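You can also set the content types explicitly from the CLI; a minimal sketch, assuming a storage named san-lvm (a placeholder):

# allow the storage to hold VM disk images
pvesm set san-lvm --content images
# verify the result
pvesm status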
It is this feature: https://www.illumos.org/issues/1701 which can be found in this commit: https://github.com/openzfs/zfs/commit/1b939560be5c51deecf875af9dada9d094633bf7
But as the documentation states, this feature is only available for SSDs, so if the pool does not contain SSDs it will have no effect.
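The feature in question is TRIM support; a minimal sketch of using it, assuming a pool named tank (a placeholder):

# trim once, on demand
zpool trim tank
# check trim progress
zpool status -t tank
# or let ZFS issue trims automatically as space is freed
zpool set autotrim=on tank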
Another thing to take into consideration when choosing between (striped) mirrors and (striped) raidz[x] is the overhead of calculating parity for raidz[x], especially when resilvering the pool. Calculating parity tends to require a CPU with higher clock speeds, since the parity computation is CPU-bound.
Use stackable switches and create an LACP bond with connections to more than one switch (obviously the storage box should likewise have connections to more than one switch), and you should be failure-proof.
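On the Proxmox side this is a regular 802.3ad bond; a minimal sketch for /etc/network/interfaces, where bond0, eno1 and eno2 are placeholders and one leg goes to each switch:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

The switches must be stacked (or support MLAG) so both legs can belong to the same LACP group.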
Yes, it uses the same migration features as any other supported storage in Proxmox.
Replication is handled on the storage server, not by Proxmox.
What do you mean by 'shared storage controller fails over to another controller'?