Yes. If you restore onto a compression-enabled dataset/zvol, it will work. Let's put it this way: if you have compressible data, there is no reason not to enable LZ4 pool-wide from the beginning.
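A minimal sketch (the pool name "rpool" is only an example, substitute your own):

    # enable LZ4 on the pool root; child datasets/zvols inherit it
    zfs set compression=lz4 rpool
    # verify what each dataset actually uses
    zfs get -r compression rpool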
Short answer: no. You will need RAM.
- your very small ARC will be killed by 940GB of L2ARC anyway
- search for the arc_summary script and check your ARC hit ratio; I bet it is very, very low (see the commands after this list)
- outside the VMs (29GB) you are left with 3GB for: OS (all kinds of buffers), ARC (for both data AND...
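For reference, a quick way to check it (the script may be installed as arc_summary.py on older versions; the raw counters live in /proc/spl/kstat/zfs/arcstats):

    # summary report, including ARC hit ratios
    arc_summary
    # or read the raw counters directly
    grep -E '^(hits|misses|size|c_max) ' /proc/spl/kstat/zfs/arcstats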
I really don't know.
I've set it at runtime, but I don't know if it is really applied. Regarding performance, I have no answer either.
Like yourself, I'm trying to achieve a stable (and fast) ZFS on Linux platform.
A good piece of advice is not to read the "Issues" section on GitHub too much...
Thanks. I asked for those because there was a (now closed) issue regarding high CPU usage/lockups on almost-full pools and/or with compression (gzip), but these look fine.
Did you try the fix from 0.6.5.3 (not available in Proxmox yet)? https://github.com/zfsonlinux/spl/pull/484
I think "experimental" there should be understood as "for test purposes" or "at home". The issue is that using a local resource (e.g. bind mounts or PCI passthrough) will tie that VM/container to a specific node.
In terms of stability it is just a plain old Linux bind mount, so...
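For example (the container ID and paths below are made up, adjust to your setup), a bind mount is added to a container like this:

    # bind-mount a host directory into container 100 at /mnt/shared
    pct set 100 -mp0 /tank/shared,mp=/mnt/shared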
I have been using ZFS overprovisioning, with or without compression, for years. No issues, except that you don't want to fill the VM disks.
I do that so I can use one big disk template for all VMs. Some VMs use the full disk size, some don't, and they still all fit inside the larger storage pool.
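As a rough sketch (the pool/volume names are only examples), a sparse (thin-provisioned) zvol bigger than what most guests will actually use can be created with:

    # -s makes the zvol sparse: space is only consumed as the guest writes
    zfs create -s -V 500G rpool/data/vm-100-disk-1
    # compare the advertised size with what is really used
    zfs get volsize,used,refreservation rpool/data/vm-100-disk-1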
When you start it the first time after shutdown, it will fail because the network interface is lingering (read your log). If you try again, it will start because the interface has been cleared.
If it did not start on the second attempt, then you need to post the log again.
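If you want to check by hand, something like this shows whether the interface is still around (the veth name below is only a guess, take the real one from your log):

    # list lingering container interfaces on the host
    ip link show | grep veth
    # delete a stale one manually if needed
    ip link delete veth100i0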
Oops, there are two errors there...