if you want to use hetzner in bridged mode (personally, I don't recommend it, because their network is pretty bad and floods junk traffic between different customers), you need to use the mac addresses provided by hetzner (they should be configurable in their portal, I think), and configure them as the mac addresses of your vm nics...
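For example, a minimal sketch of setting a hetzner-provided mac on a vm nic; the vmid 100, the mac and the bridge name are placeholders, replace them with yours:

qm set 100 --net0 virtio=00:11:22:33:44:55,bridge=vmbr0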
do you have some kind of transparent proxy on your network ???
because you got a certificate from "subject: C=KR; ST=Seoul; O=ONYCOM, INC.; CN=*.imqa.io"
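To double check, you can look at the certificate you actually receive; a quick sketch, assuming the failing endpoint is enterprise.proxmox.com (replace it with whatever host returned that certificate):

openssl s_client -connect enterprise.proxmox.com:443 -servername enterprise.proxmox.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

if the subject is not a proxmox.com certificate, something on the path (transparent proxy, captive portal, DPI appliance) is intercepting the TLS connection.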
128 PGs for 24 OSDs is really too low. It should be around 1024 (for a replication size=3).
Note that the pg autoscaler should increase it, but if your cluster is empty (and you are only running benchmarks), the autoscaler can even reduce it to a minimum number of PGs (something like 32).
Try to set pg...
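A minimal sketch, assuming the pool is named "rbd" and you want to pin it at 1024 PGs (adjust the pool name to yours):

ceph osd pool set rbd pg_autoscale_mode off   # stop the autoscaler from shrinking it again
ceph osd pool set rbd pg_num 1024
ceph osd pool set rbd pgp_num 1024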
I'm trying to upstream as much of my work as possible (because I don't want to maintain a fork on my side).
I have a lot of customers coming from vmware (thanks broadcom), with existing SANs, and it's really a blocker currently.
interesting, I hadn't thought about this. I'll try to implement...
for backup, try to enable the new fleecing backup option.
without it, if your backup storage is slow, it can slow down your production vm while the backup is running (because if your vm os tries to write to a block that has not yet been backed up, that block first needs to be sent to the backup storage).
The fleecing...
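A minimal sketch of enabling it for a manual backup, assuming PVE 8.2 or newer; the vmid 100, the backup storage "backup-nfs" and the fast local storage "local-lvm" used for the fleecing image are placeholders:

vzdump 100 --storage backup-nfs --fleecing enabled=1,storage=local-lvm

In the gui it's under the advanced options of the backup job.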
if you make a change in the gui, proxmox writes the new config to /etc/network/interfaces.new
and replaces /etc/network/interfaces at boot or when you reload the network from the gui
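A quick sketch of doing it by hand, roughly what the gui "Apply Configuration" button does, assuming ifupdown2 is installed:

diff -u /etc/network/interfaces /etc/network/interfaces.new   # review the pending changes
mv /etc/network/interfaces.new /etc/network/interfaces
ifreload -a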
the agent is only used for some operations, like shutdown through the agent (instead of ACPI), fsfreeze at the start of snapshots/backups, or to get some info like the ip addresses inside the vm.
The drivers are not related (they are native in the linux kernel, and on windows you need to install the drivers yourself...
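To actually use the agent, it needs to be enabled in the vm config and the qemu-guest-agent service must be installed inside the guest; a quick sketch, with vmid 100 as a placeholder:

qm set 100 --agent enabled=1   # takes effect after the next full vm stop/start
qm agent 100 ping              # exits 0 silently if the agent answers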
you can have a look at the cumulus linux documentation (it's a hardware switch OS using a linux bridge with ifupdown2, exactly like proxmox)
https://docs.nvidia.com/networking-ethernet-software/cumulus-linux-42/Layer-2/Ethernet-Bridging-VLANs/VLAN-aware-Bridge-Mode/
auto vmbr1
iface vmbr1 inet...
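For reference, a minimal sketch of a vlan-aware bridge in /etc/network/interfaces, assuming eth0 is the physical uplink (adapt the port and the vlan range to your setup):

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094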
https://pve.proxmox.com/wiki/High_Availability#ha_manager_node_maintenance
you need to put the node in maintenance mode before doing your upgrade; it'll migrate the vms off the node, then disable the watchdog.
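A quick sketch, assuming a recent PVE (7.3 or newer) and a node called "pve1":

ha-manager crm-command node-maintenance enable pve1    # drain the node before the upgrade
ha-manager crm-command node-maintenance disable pve1   # bring it back afterwards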