Yes, a SLOG (see https://www.truenas.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/). If you do so, use 2 SSDs in a mirror to mitigate hardware failure. Warning: not all SSDs can handle the SLOG load. Maybe it is not a good idea in your case...
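If you go that route, a minimal sketch of adding a mirrored SLOG to an existing pool (pool and device names are placeholders):

```
# Attach two SSDs as a mirrored log vdev to the pool "tank"
zpool add tank log mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2
# Verify the log vdev shows up
zpool status tank
```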
Hello. For shared LVM over iSCSI, you can take inspiration from: https://infohub.delltechnologies.com/en-us/t/dell-powerstore-deploying-proxmox-virtual-environment-white-paper/
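As a rough illustration of the shared-LVM-over-iSCSI idea (portal IP, IQN, device and storage names are placeholders of mine, not taken from the white paper):

```
# On each node: discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2024-01.com.example:target0 -p 192.0.2.10 --login
# On ONE node only: create the shared volume group on the new LUN
pvcreate /dev/sdX
vgcreate shared-vg /dev/sdX
# Register it in Proxmox as LVM storage marked shared
pvesm add lvm shared-lvm --vgname shared-vg --shared 1
```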
For speeding up NFS, be sure to tune the network stack (TCP window size if NFS...
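For the TCP window part, an illustrative sysctl sketch (the values are examples to adapt, not recommendations):

```
# Raise the maximum socket buffer sizes
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# min / default / max TCP window sizes
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```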
Looks like the conversation has moved in a bit of a different direction, which is completely fine. I wanted to address a few points for posterity.
It is advisable to start with a multipath device if you anticipate moving to that type of...
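A minimal multipath starting point, assuming a Debian-based node (these are generic defaults; your array vendor will have specific recommendations):

```
# /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
```

Then restart multipathd and check the resulting maps with multipath -ll.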
The speed benefit may be an illusion; try the nconnect option on your NFS mount. It parallelizes your NFS connection (where you classically have the bottleneck of a single read and a single write stream).
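For example (nconnect needs a reasonably recent Linux kernel, 5.3+; the values are illustrative):

```
# Open 8 parallel TCP connections to the NFS server
mount -t nfs -o vers=4.2,nconnect=8 nfs-server:/export /mnt/nfs
```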
Especially on containers you shouldn’t see much of a...
As @Impact said, use "Skip replication". But you should create the disk on the other nodes. Maybe the simplest approach is to replicate once and disable replication afterwards, as sketched below. And test, test, test :-)
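A sketch of the replicate-once-then-disable idea with pvesr (VM ID 100 and the node name are placeholders):

```
# Create a replication job for VM 100 towards node "nodeB"
pvesr create-local-job 100-0 nodeB --schedule "*/15"
# Trigger the initial replication right away
pvesr schedule-now 100-0
# Once the disk exists on the other node, disable the job
pvesr disable 100-0
```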
Hello.
HDDs are big. Thus you should use error detection and correction (at the RAID or FS level), for example a ZFS scrub as sketched after this post.
HDDs have low IOPS.
Thus you should use a data cache if you want to speed things up: either a RAID card cache (+ BBU) or ZFS with a SLOG.
But NOT...
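For the FS-level error detection mentioned above, ZFS gives you checksum verification through scrubs (the pool name is a placeholder):

```
# Walk the whole pool, verify checksums, repair from redundancy
zpool scrub tank
# Review progress and any checksum errors found
zpool status -v tank
```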
I already had this problem 10 years ago. I didn't look into it further.
Today, as a workaround, you can look at this: https://pve.proxmox.com/wiki/Automated_Installation
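A rough sketch of the workflow from that wiki page, assuming the proxmox-auto-install-assistant tool it describes (file names are placeholders):

```
# Check the answer file, then bake it into the installer ISO
proxmox-auto-install-assistant validate-answer answer.toml
proxmox-auto-install-assistant prepare-iso proxmox-ve.iso \
    --fetch-from iso --answer-file answer.toml
```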
Hello, I have had this type of problem. It was openssh-server that had stopped serving SSH client connections as a security measure. Maybe you can check the logs to see if the SSH server is complaining?
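One way to check, assuming a Debian-based host; the MaxStartups throttle is one common such measure, though your case may differ:

```
# Read sshd's own log messages
journalctl -u ssh -e
# If connections are being dropped by MaxStartups, the limit lives in
# /etc/ssh/sshd_config, e.g.:
#   MaxStartups 30:30:100
# then reload: systemctl reload ssh
```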
My ideas:
Reliability: use several firewalls/routers, for example with the VRRP protocol, in order to have active/passive firewalls (a sketch follows this post).
Security: use dedicated cluster servers with low-level hardware in order to limit lateral attacker movement.
You are...
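As an illustration of the VRRP idea above, a minimal keepalived sketch (interface name and addresses are placeholders):

```
# /etc/keepalived/keepalived.conf on the primary firewall
vrrp_instance WAN {
    state MASTER            # set BACKUP on the second firewall
    interface eth0          # placeholder interface
    virtual_router_id 51
    priority 150            # lower value (e.g. 100) on the backup
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24        # shared gateway VIP (placeholder)
    }
}
```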
Nodes shouldn't do any function other than virtualization (and maybe Ceph storage).
Thus one or several VMs should handle routing/firewalling.
Be careful with your overlapping networks, as they can be dangerous (security) and prone to mistakes in...
WARNING: YOU CAN LOSE DATA, so back up what you need before doing this (example: VMs and /etc on each node; see the sketch after this post).
In order to FORCE local node operations: pvecm expected 1
Then you will have to pay attention to what you are doing in /etc/pve, as...
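A minimal sketch of that backup step (the destination path is an example):

```
# On each node, save the cluster config before forcing quorum
tar czf /root/pve-etc-$(hostname)-$(date +%F).tar.gz /etc/pve /etc/network/interfaces
```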
Yes, and I think that your corosync cluster speaks over the management network; that's why I said it was expected. To avoid this, you should add a second corosync ring on the other switch's network.
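A sketch of what a second ring looks like in /etc/pve/corosync.conf (addresses are placeholders; remember to increment config_version when editing):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1   # existing management network
    ring1_addr: 10.1.0.1   # second ring on the other switch
  }
  # ... repeat for each node
}
```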