ok, that makes sense.
the behavior you describe suggests you have a problem with your hardware. you can keep an eye on dmesg to see what messages pop up during IO, which should give some indication. A SMART test on your drives is probably in order...
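if it helps, a rough way to do both from a shell (the device name is a placeholder, adjust for your disks):

```
# follow kernel messages live while you reproduce the IO load
dmesg -wT

# in another terminal: run a short SMART self-test, then read the results
smartctl -t short /dev/sda     # starts the test, takes a couple of minutes
smartctl -a /dev/sda           # health status, error log, reallocated sector counters
```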
zfs doesn't have to be a poor fit for databases- the default settings simply aren't tuned for them. see https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#database-workloads...
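as a rough illustration of the kind of tuning that page covers (pool/dataset names and values below are just examples- match recordsize to your database's page size, and read the linked doc for logbias, ARC sizing, etc.):

```
# example: a dedicated dataset for a PostgreSQL-style 8K page size
zfs create tank/pgdata
zfs set recordsize=8K tank/pgdata
zfs set compression=lz4 tank/pgdata
zfs set atime=off tank/pgdata
```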
Sounds like you have issues beyond the disk subsystem. guest type, virtual HW configuration, and drivers can all play a part.
I'd begin troubleshooting on the host end. What kind of performance do you get running a similar IO load directly on the host...
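something like fio is a quick way to compare host vs guest (the target path and job parameters below are only an example- point it at the storage in question):

```
# 4K random write test on the host's storage, bypassing the page cache
fio --name=hosttest --filename=/path/on/host/testfile --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```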
HAH! I never said or suggested open source; open source is great, and many valid and valuable open source applications exist, but storage is special.
Storage requires ENORMOUS engineering effort, post-deployment support, and continuous improvement...
google "high availability storage." There are many options available.
resources aren't relevant in and of themselves. Storage solutions can range from very low to very high powered, with different classes of disk, tiering and caching methods, and with varying...
Not necessarily. dual (or multi) controller NAS is a thing.
Repeat after me. Replication isn't High Availability. High availability necessarily requires REAL-TIME availability, which replication cannot provide.
"But in my use case, the data...
Any form of shared storage. NFS, iSCSI, etc.
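for example, adding an NFS export as shared storage from the CLI looks roughly like this (server address, export path, and storage ID are placeholders):

```
# register an NFS share as storage visible to all cluster nodes
pvesm add nfs shared-nfs --server 192.168.1.50 --export /srv/vmstore \
    --content images,rootdir
```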
see above.
zfs is host specific. if the host becomes unavailable, its storage goes with it, so what would be the point of fencing the node in the first place?
As you can see, your datastore and iso_data pools use the same disks. the total capacity consumed for BOTH is 4.5TB raw (~1.5TB used.) your ISOs only take 137GB, as you expect.
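if you want to double check where the space is actually going, something like this will break it down (the names are taken from your listing- adjust to match your actual pool/dataset layout):

```
# raw capacity and allocation at the pool/vdev level
zpool list -v

# usage broken down per dataset, including snapshots, children, and reservations
zfs list -o space -r datastore
```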
A couple of reasons this isn't actually so.
1. The store in question was limited to a single node, and would not have affected anything in a cluster- e.g., no other cluster node has access to this store. a zfs store is NOT HA BY DEFINITION.
2. any...
You shouldn't be. there are a lot more people (especially on these forums) who equate NAS with Synology rather than NetApp, and fewer who know why NetApp costs so much more.
@dmca7919 if you're serious, NAS is an excellent option for HA virtualization storage...
Having the datastore mounted on two separate clusters can be really dangerous, since the clusters don't share file locking information. That said, if you are able to make sure they don't try to consume the same resources at the same time, what you...
Why is everyone using the passive voice when recounting that "consumer drives are discouraged"? by whom? and under what circumstances? There is nothing INHERENTLY wrong with using non-enterprise drives for zfs pools, as long as you understand the...
We did a deep dive on this feature in this article: https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/
https://forum.proxmox.com/threads/inside-proxmox-ve-9-san-snapshot-support.169675
a couple of things:
1. if you do wish to deploy lvm on mdadm, you can. it just requires a bit of "linux" setup (sketched after this list.) but you shouldn't, because
2. a zfs/btrfs mirror is seamless, has filesystem integration, inline compression, snapshots, etc. it is a...
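for completeness, the manual mdadm+lvm route from point 1 looks roughly like this (device names, VG name, and storage ID are placeholders- double check against your own disks before running anything):

```
# build a RAID1 array from two whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# layer LVM on top of the array
pvcreate /dev/md0
vgcreate vmdata /dev/md0

# register the volume group with Proxmox as an LVM storage
pvesm add lvm md-lvm --vgname vmdata --content images,rootdir
```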
If memory serves, Windows XP allows a repair install. just boot the windows xp iso (sata/ide host bus for the existing virtual disk) and reinstall.
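in VM terms that means attaching the existing disk on an IDE controller and booting from the XP ISO. roughly like this, where the VMID, storage names, and ISO filename are placeholders:

```
# attach the existing virtual disk on IDE so XP's stock drivers can see it
qm set 101 --ide0 local-lvm:vm-101-disk-0

# attach the XP installer ISO and boot from it first
qm set 101 --ide2 local:iso/winxp.iso,media=cdrom
qm set 101 --boot order=ide2
```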
It's also possible to use Universal Restore from Acronis to rework an install for new hardware.
since you're in a cluster, the conversation should expand to include provisioning in general.
measure your total provisioned RAM across your entire cluster. assuming your cluster nodes are identical and have no additional host load (eg...
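a quick way to total provisioned guest RAM across the cluster (a sketch only- it assumes the cluster resources API reports memory in a maxmem field, which is how I remember it):

```
# sum maxmem for every VM in the cluster, reported in GiB
pvesh get /cluster/resources --type vm --output-format json \
  | jq '[.[].maxmem] | add / 1024 / 1024 / 1024'
```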