In my case I had two problems:
I had assigned the server port to a unique zone that didn't have the correct routing/firewall settings configured.
The port to the gateway was assigned an Ethernet Port Profile that didn't allow tagged VLANs.
Once...
I admit I'm not very experienced with Proxmox's networking, so at the moment I'm just comparing your config with the similar one at
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_network_bond
- section "Example: Use a bond as the...
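For reference, the bond-as-bridge-port example in that docs section looks roughly like this (a sketch only — the NIC names, bond mode, and addresses below are placeholders; check your own interface names with `ip link` and adapt accordingly):

```
# /etc/network/interfaces (excerpt) — placeholder names/addresses
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.2/24
    gateway 10.10.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Comparing each stanza of your config against this shape (bond first, then the bridge pointing at the bond) usually makes the mismatch obvious.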
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory'...
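The `source` approach the header mentions can look like this (the directory name is an assumption, mirroring the common Debian convention — any readable path works):

```
# at the end of /etc/network/interfaces
source /etc/network/interfaces.d/*
```

Files placed in that directory are then read as if they were part of the main file, so manual settings survive regeneration of the autogenerated section.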
The output of cat /etc/network/interfaces
would give more details :)
Not as a screenshot, of course, but as text in CODE tags (using the </> button above).
This is as expected. You need at least 80% of the remotes to have at least a basic subscription to avoid these messages; see this post by a staff member:
https://forum.proxmox.com/threads/proxmox-datacenter-manager-1-0-stable.177321/post-821945
My guess is...
All I want is for 2 to 3 VMs to keep running if one node fails — based on the last replication, and ideally automatically.
Is there a way to set that up?
If you want real HA, nothing works without shared storage. With ZFS replication you always have the delta between syncs, and in the worst case you have to move the VM/LXC config over by hand.
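Moving the config by hand can be sketched as follows (node names and the VMID 100 are hypothetical; this assumes the failed node is really down and its replicated ZFS dataset already exists on the surviving node):

```
# On a surviving cluster node: move the VM's config file
# from the dead node's directory to this node's directory
# in the cluster filesystem, then start the VM.
mv /etc/pve/nodes/failed-node/qemu-server/100.conf \
   /etc/pve/nodes/surviving-node/qemu-server/
qm start 100
```

Note that the VM then resumes from the state of the last replication, so anything written after the final sync is lost.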
Welcome, @Skamanda
Sounds like that could be the reason. Are these bridges on the same network, by chance?
What are the network settings?
Especially for these bridges.
No, migrating between nodes and back should work. For the re-setup, migrating the guests off the node, removing it from the cluster, and reinstalling is probably the best course of action...
Few workloads will be able to saturate your NVMe lanes. Unless you have that kind of workload, I would suggest 2x6 RAIDZ2 if you only have one server - it is safer to have fewer disks per VDEV, and it is still plenty fast. Don't...
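The suggested 2x6 RAIDZ2 layout could be created roughly like this (a sketch only — the pool name and disk device names are placeholders; use stable /dev/disk/by-id/ paths in practice):

```
# Two 6-disk raidz2 vdevs in one pool: each vdev tolerates
# the loss of any 2 of its 6 disks.
zpool create tank \
  raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme4n1 nvme5n1 \
  raidz2 nvme6n1 nvme7n1 nvme8n1 nvme9n1 nvme10n1 nvme11n1
```

With two vdevs, writes are striped across both, so you also get roughly twice the vdev-level throughput of a single 12-disk RAIDZ2.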
Nope. 2 port onboard Intel x710 and a 4 port Intel x710 NIC. I can try disabling RDMA and see what happens, but I don't have the other symptoms you mention. Everything had been working fine for like a year with this HW config, up to a couple...
Without double-checking your logs, the fact that the Feb 11 log starts with the 2/3 backup implies to me that the others were removed before that task ran?
Is there any sync job? Or retention set on the backup job?
Normally retention is additive...14...