DMARC etc. need to point to the server that actually sends the mail, i.e. your mail servers. There is no cost to having PMG in those records too anyway.
For certificates, the right way is the one that keeps working in the long run.
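If you want to sanity-check what a domain actually publishes, a quick sketch with dnspython (example.com is just a placeholder for your own domain) will dump the SPF and DMARC TXT records; it only shows what is published, it does not validate the contents:

```python
# Rough sketch using dnspython (pip install dnspython).
import dns.resolver

def get_txt(name):
    # Return all TXT strings for a name, or an empty list if it doesn't exist.
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

domain = "example.com"  # placeholder domain
print("SPF:  ", [t for t in get_txt(domain) if t.startswith("v=spf1")])
print("DMARC:", get_txt(f"_dmarc.{domain}"))
```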
You can run tests. From my point of view, mixing 1500 and 9000 MTU on the same interface is asking for problems. I tried something like this before Ceph was even in PVE and it was a mess.
Network latency will have a bigger performance impact than a 9k MTU.
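A quick way to spot a mismatch is to check what MTU each interface actually has on every node; a small sketch for a Linux node (the interface names vmbr0/bond0 are just placeholders):

```python
# Read the configured MTU straight from sysfs so you can compare nodes/interfaces.
from pathlib import Path

def iface_mtu(iface: str) -> int:
    # Linux exposes the configured MTU under /sys/class/net/<iface>/mtu
    return int(Path(f"/sys/class/net/{iface}/mtu").read_text().strip())

for iface in ("vmbr0", "bond0"):  # placeholder interface names
    try:
        print(iface, iface_mtu(iface))
    except FileNotFoundError:
        print(iface, "not present on this node")
```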
For the OP:
4. Rook or any other configuration management solution
Anyway, we are planning external Ceph storage for our PVE too.
PVE staff, are there any requirements for external clusters, for example allowed version differences? The PVE documentation mainly covers hyperconverged setups and updates.
The previous image is the Zabbix standard disk template.
Even netdata shows something crazy for the VM system disk:
And the VM DB data disk:
Both disks are on this PVE host, on a dedicated RAID for VM images:
PVE OS RAID disk set:
All VMs are Debian 10. The upgrade was done only at the PVE host level.
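To rule out a graphing artifact, you can also sample /proc/diskstats inside the VM yourself and compare with what Zabbix/netdata plot; a rough sketch (sda/vda are just example device names, adjust to the VM's actual disks):

```python
# Sample /proc/diskstats twice and compute how busy each disk was in between.
import time

def read_busy_ms():
    busy = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields[2] is the device name, fields[12] is ms spent doing I/O
            busy[fields[2]] = int(fields[12])
    return busy

INTERVAL = 10
before = read_busy_ms()
time.sleep(INTERVAL)
after = read_busy_ms()

for dev in ("sda", "vda"):  # example device names
    if dev in before and dev in after:
        pct = (after[dev] - before[dev]) / (INTERVAL * 1000) * 100
        print(f"{dev}: {pct:.1f}% busy over {INTERVAL}s")
```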
Hi,
we upgraded our PVE cluster (very old HP G7 and a 3-year-old Dell R940) from 6.2 to 6.4, and disk utilization in the VMs shot up from the floor.
The problem is the same for VMs on:
- NFS SSD storage (raw files), default (no cache)
- local SSD disks (LVM thick), default (no cache)
The change depends on the VM...
Hi,
one of my cluster nodes hard-failed due to failed disks in its RAID. Because the cluster is on 6.2, we decided to upgrade to 6.4 (a required step before 7). The reinstalled node will have the same FQDN as the failed node. Now I have two possible ways:
1] remove the failed node from the cluster (i.e. clean up) and add the reinstalled...