There is no need to fix an issue in general when it is only relevant to a specific case.
According to you it is not possible to fix it in general because some backends do not support snapshots, and not all backends can do it "easily" even if they do support it.
This is why there are...
your best bet would be to implement it yourself, using two or more KVM virtual machines and treating them as physical machines. Then use Proxmox to migrate those KVM guests between physical nodes and abstract yourself from the underlying hardware.
This is how I'm doing it and it works a treat.
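A minimal sketch of what that looks like on a Proxmox cluster (the VM ID, sizes, storage and node names below are only examples, adjust to your setup):

# create the KVM guest that plays the role of a "physical" machine
qm create 200 --name fake-metal --memory 16384 --cores 8 --net0 virtio,bridge=vmbr0 --virtio0 local-zfs:100
# later, move it to another physical node without caring about the hardware underneath
qm migrate 200 node2 --online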
You should use your warranty with Supermicro. There's nothing ZFS could do to break the disks. Why do you think ashift is there in the first place? Why do you think that NTFS, ext4 or other filesystems will write in anything but 4k sectors?
And BTW, SSDs have physical block sizes ranging in...
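If you want to see what your SSDs actually report, something along these lines works (the device name is just an example):

# logical vs. physical sector size as seen by the kernel
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda
cat /sys/block/sda/queue/physical_block_size
# many SSDs still report 512 here even though the flash pages are far larger
smartctl -i /dev/sda | grep -i 'sector size'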
Fabian, are you talking about the snapshots seen in Proxmox or the filesystem snapshots?
And, how do you create such a situation with multiple mount points on the same LXC? Currently the GUI will not let you do this.
Also: why would one want to support all X * Y combinations? I see no...
The point is: mp0 and the other options are not handled correctly. Bind mount is just an example. And there's a reason for using a single NFS client for a single NFS export: resource-wise it makes a hell of a lot of sense.
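To be concrete, this is the kind of bind mount I mean (VMID and paths are just examples; depending on the PVE version you may have to put the line in the config file by hand):

# bind-mount a host directory into container 101 as mp0
pct set 101 -mp0 /srv/export,mp=/mnt/export
# which ends up in /etc/pve/lxc/101.conf as:
# mp0: /srv/export,mp=/mnt/export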
ashift=12 is standard on ZFS on Linux. It's there for a good reason, and it was a design decision that was not taken lightly.
It has nothing to do with SSDs failing. Perhaps you can tell us more about which SSDs you're using and what type of workload you have on them.
When you say "failed", what do you mean ...
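If anyone wants to check what their pool actually uses, a quick way is (pool name and devices are only examples):

# show the ashift recorded for each vdev of an existing pool
zdb -C rpool | grep ashift
# and for a new pool you force it explicitly at creation time
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb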
Ok, I get you. We're talking pears and apples.
In the case of ZFS to LVM, we're converting storage backends. It would be expected that not all features can be "converted" or supported on all backends. This is normal and can be documented. In this specific case Storage.pm would not use "zfs...
I don't follow you. If you move a container from ZFS to ZFS, you use "zfs send". This is hard-coded in Storage.pm.
What I'm asking is to use "zfs send -R" instead.
Please give an example of a situation that will not work.
Thank you.
PS: Just to clarify, we're not talking about the snapshots...
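To illustrate the difference (dataset and snapshot names are placeholders, not the exact ones Proxmox generates):

# plain send: only the named snapshot's data reaches the target, existing snapshots are not carried over
zfs send rpool/data/subvol-101-disk-1@migrate | ssh target zfs recv -F tank/subvol-101-disk-1
# replication stream: -R sends all snapshots (and properties) of the dataset along with it
zfs send -R rpool/data/subvol-101-disk-1@migrate | ssh target zfs recv -F tank/subvol-101-disk-1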
You're saying it's a design decision; however, there is evidence that this cannot be correct.
In Storage.pm there's an explicit check for the ZFS backend and a branch that takes advantage of ZFS features in a way that is not compatible with other storage backends.
Specifically, Proxmox uses the command "zfs send" to...
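You can see it for yourself with a quick grep on a node (the module path is the usual Debian location, it may differ on your install):

grep -rn "zfs send" /usr/share/perl5/PVE/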
There's a bug (bad feature) when migrating LXC containers hosted on ZFS: it loses the snapshots.
Longer explanation: I routinely snapshot all LXC containers for backup and replication. This is "a good thing" and has saved my azz a few times over the years.
I discovered that Proxmox will migrate...
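The easiest way to see it is to list the snapshots of the container's dataset on both nodes (the dataset name below just follows the usual Proxmox naming scheme, yours will differ):

# on the source node, before migration
zfs list -t snapshot -r rpool/data/subvol-101-disk-1
# on the target node, after migration: the list comes back empty
zfs list -t snapshot -r rpool/data/subvol-101-disk-1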
I have an update, using the Intel DC S3500 SSDs. They're both capable of 5'000 IOPS (fio sync write, 4k, 4 jobs). With 16 jobs they go up to 10'000 IOPS.
Now... pveperf still sucks:
CPU BOGOMIPS: 57529.56
REGEX/SECOND: 2205679
HD SIZE: 1528.62 GB (rpool/t)
FSYNCS/SECOND...
The IO wait is not bad on average. However, the server is "empty" with just two VMs right now. I'm concerned about the disk IO once the full load is applied.
Here's the first server: 6x 2.5" SATA 7.2K 1TB "enterprise SATA" drives.
Here's a second server with only 4x 3.5" SATA 7.2K drives (WD Black 1TB).
And...
I'm using writeback and writethrough. The guests are a mix of Windows and Linux.
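For the record, the cache mode is set per disk, roughly like this (VMID, storage and volume names are examples):

qm set 101 -virtio0 local-zfs:vm-101-disk-1,cache=writeback
qm set 102 -virtio0 local-zfs:vm-102-disk-1,cache=writethrough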
I also have Samba shares serving home directories to Windows domain users.
I know the server is doing a lot of things; still, I'm trying to size it properly and understand why the disks are underperforming.
I'm waiting...
What would be your suggestion for a (consumer) or cheap enterprise drive?
I'm not sure about your numbers though.
The MX200 does about 5'000 IOPS with 256 jobs and 950 with 8 jobs (fio test):
--direct=1 --sync=1 --rw=write --bs=4k --numjobs=8 --iodepth=1 --runtime=60 --time_based...
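A complete invocation with those parameters looks roughly like this (the test file path and size, plus --name and --group_reporting, are my additions, adjust as needed):

fio --name=synctest --filename=/rpool/t/fio.test --size=1G --direct=1 --sync=1 --rw=write --bs=4k --numjobs=8 --iodepth=1 --runtime=60 --time_based --group_reporting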
Well, I use(d) a Samsung 850 Pro (128GB) + a Crucial MX200.
Now I have a Crucial MX200 (256GB) + an OCZ Vertex 150 (Indilinx). Both SSDs can handle 400MB/s+ writes and 60'000 write IOPS (benchmarked using ATTO).
The MX200 was suggested to me on IRC (I don't remember whether the #zfsonlinux or...
Hello,
I think I have a problem with ZFS performance: it is much below what I see reported on the forum, especially considering the hardware I'm using. Unfortunately I cannot see the issue, so I hope that someone will be smarter than me.
The problem is the IOPS I can get from a ZFS pool with 6...
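While I gather more details, this is what I've been checking on my side (pool name is just an example):

# pool layout and health
zpool status -v rpool
# per-vdev bandwidth and IOPS, refreshed every second
zpool iostat -v rpool 1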