VPS hosting providers - why no ZFS?

@bbgeek17 When I look at your product page for PVE integration [1], it lists "At-Rest & In-Flight Encryption" among the key features, and then specifically emphasizes that it integrates with PVE since v6 without patching. If I understood it correctly, it is a fully managed solution that essentially acts as an iSCSI target. How do you get that in-flight traffic encrypted without patching anything? IPsec?

[1] https://www.blockbridge.com/proxmox
 
You can tune ZFS - the default settings do not make sense for PVE if you have NVMe disks:
/etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=1073741824
options zfs l2arc_noprefetch=1

update-initramfs -u
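
For what it's worth, the same ARC limits can also be tried at runtime before making them permanent, since these module parameters are writable under /sys/module/zfs/parameters. A minimal sketch, reusing the 1 GiB value from above:

# apply the ARC limits at runtime (no reboot; the ARC shrinks gradually to the new target)
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

# verify what the module is actually using
arc_summary | grep -A 3 "ARC size"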
 
Thanks @bbgeek17, what you said is true, but it is also true that there are all kinds of markets and there are customers for each of them.

Unfortunately, this discussion is not about the potential market and customers of non-AWS/Azure-like hosts. To clarify, I am just managing servers; the most I can do is suggest the most suitable long-term solution within the resource limits given by the client.

Hence, coming back to the question: what would be the optimal solution in such a case?


Further, since this is a discussion on mdadm, ZFS, etc., I will also drop in this one that came to mind regarding SWAP on zvols:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199189
Does this bug only apply to swap for the host node, or does it also apply to guest VMs on zvols, given that their swap resides on a zvol as well, even though the guest OS does not see it that way?
 
You can tune ZFS - the default settings do not make sense for PVE if you have NVMe disks:
/etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=1073741824
options zfs l2arc_noprefetch=1

update-initramfs -u

I will get flamed for this, but ZFS was not really built for NVMe and one can tell.

Attempts such as these are a testament to just that:
https://github.com/openzfs/zfs/pull/10018
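
If you want to see the gap for yourself, a crude but illustrative check is to run the same fio random-write job against a raw NVMe namespace and against a file on a ZFS dataset. This is only a sketch; /dev/nvme1n1 and /tank/fio/testfile are placeholders, and on ZFS versions without direct I/O support the --direct=1 flag may fail or silently fall back to buffered I/O (which is what PR 10018 addresses):

# raw NVMe namespace - destructive, only run against an unused disk!
fio --name=raw --filename=/dev/nvme1n1 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting

# the same job against a file on a ZFS dataset
fio --name=zfs --filename=/tank/fio/testfile --size=10G --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting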
 
You can tune ZFS - the default settings do not make sense for PVE if you have NVMe disks:
/etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=1073741824
options zfs l2arc_noprefetch=1

update-initramfs -u
Yes, we can limit the ARC, but could you share your personal experience, and what exactly does `l2arc_noprefetch=1` do?
 
Does this bug only apply to swap for the host node, or does it also apply to guest VMs on zvols, given that their swap resides on a zvol as well, even though the guest OS does not see it that way?

So when you think about what happens there: when you fill up RAM and swap starts to be used heavily, you get the deadlock situation as the pages need to be evicted. ZFS runs on the host's hardware, so if your VM happens to have been allocated some small portion of RAM (with KSM as well), then even if you fill that up and the VM starts using its own swap, you are not going to run into the situation described in the first sentence. But maybe I am missing something.
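
For context, "swap on a zvol" on the host usually means something along these lines (only a sketch based on the commonly recommended zvol properties; rpool/swap and the 8G size are placeholders), and it is this host-side setup that the deadlock reports are about:

# create a zvol with the properties usually recommended for swap
zfs create -V 8G -b $(getconf PAGESIZE) \
    -o compression=zle -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none \
    -o com.sun:auto-snapshot=false rpool/swap

# format and enable it as host swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap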
 
TBH, I have learned from bitter experience in the past to go for ZFS only on HDD-based arrays. FreeNAS etc. use ZFS because they know most of their clusters will be HDD-based / spinning disks.

I think ZFS is great, but I also only use it for HDD storage pools. I do not even see the benefit in the way the PVE ISO install does it, where even the OS is on ZFS. And the initramfs is shoved into the EFI partition.
 
So when you think about what happens there: when you fill up RAM and swap starts to be used heavily, you get the deadlock situation as the pages need to be evicted. ZFS runs on the host's hardware, so if your VM happens to have been allocated some small portion of RAM (with KSM as well), then even if you fill that up and the VM starts using its own swap, you are not going to run into the situation described in the first sentence. But maybe I am missing something.

Talking of KSM ... no, you probably should not use it ... either.

https://github.com/openzfs/zfs/issues/12813
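
If one decides KSM is not worth the risk, it can be switched off on the host. A sketch, assuming the stock ksmtuned service is what drives it on PVE:

# stop the tuning daemon that enables KSM
systemctl disable --now ksmtuned

# tell the kernel to stop scanning and to unmerge already-shared pages
echo 2 > /sys/kernel/mm/ksm/run

# verify nothing is shared any more
cat /sys/kernel/mm/ksm/pages_sharing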
 
@bbgeek17 When I look at your product page for PVE integration [1], it lists "At-Rest & In-Flight Encryption" among the key features, and then specifically emphasizes that it integrates with PVE since v6 without patching. If I understood it correctly, it is a fully managed solution that essentially acts as an iSCSI target. How do you get that in-flight traffic encrypted without patching anything? IPsec?

[1] https://www.blockbridge.com/proxmox
We do support IPsec in general; however, for Proxmox we used our iSCSI/TLS implementation for seamless integration.



Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, we can limit the ARC, but could you share your personal experience, and what exactly does `l2arc_noprefetch=1` do?
l2arc_noprefetch is automatically disabled on servers with little memory; it makes sense on HDD storage with small files, but not for the fastest NVMe drives.

ARC min and max should not be equal: max should be min+1, or min should be max-1.
Otherwise it will not limit.
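
Either way, the live values and the module's own one-line descriptions of these tunables can be checked directly on the host, which is a quick way to settle such questions for the ZFS version actually in use:

# current runtime values
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/l2arc_noprefetch

# built-in parameter descriptions shipped with the module
modinfo zfs | grep -E 'zfs_arc_min|zfs_arc_max|l2arc_noprefetch'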
You are missing something:

# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=1073741824
options zfs l2arc_noprefetch=1

# arc_summary

ARC size (current): 99.8 % 1022.0 MiB
Target size (adaptive): 100.0 % 1.0 GiB
Min size (hard limit): 100.0 % 1.0 GiB
Max size (high water): 1:1 1.0 GiB
 
I believe this discussion has been inconclusive. As I have been doing, for HDD arrays I will use ZFS with ARC limits, and for NVMe / SSD I will go for software RAID with LVM-thin. Of course, if there is a need for centralized storage, I will consider Ceph.
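
For reference, the software RAID + LVM-thin combination mentioned above would look roughly like this. Only a sketch: the device names, sizes, and the "local-nvme" storage ID are placeholders, and the commands are destructive to the disks involved:

# mirror two NVMe drives with mdadm
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# put an LVM thin pool on top (leave some headroom in the VG)
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate --type thin-pool -l 90%FREE -n vmstore vmdata

# register it as a PVE storage
pvesm add lvmthin local-nvme --vgname vmdata --thinpool vmstore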
 
It's only inconclusive because I think you were looking for an answer to a different question. Is it your intention to offer such a product to your customers? If so, you should ask them - not other providers, since they may not be targeting the same customer. Honestly, I don't know who the customer is for a VPS without HA, or whether the customer has any interest at all in the underlying technology providing said service.

If you're the customer - why is the underlying technology important at all? I imagine you'd want it to perform, not go down, and not eat your data, without caring how... it's really up to the VPS vendor to convince you that their product does that.
 
@alexskysilk, the customer has no interest in the underlying technology as long as it works. I tried to ask a few providers; they are not really open to sharing their experience, but what I have seen is that they don't use ZFS.

I never said I am the customer, nor am I offering anything; I am managing things. Hence I came here to learn something from others' experience. Even though I have personal experience of my own, asking someone who is already doing it is a good idea and the whole point of a platform like these forums.
 
Thanks @alexskysilk, I am not interested in ZFS only; I am more interested to know which local storage option is good, if local storage has to be used.
It looks like a CoW filesystem is not a good fit for faster NVMe and SSD drives.
 
