"PING: transmit failed. General failure." is a very different error than a timeout. This means Windows couldn't even send the packet, the network stack itself is failing locally, before anything hits the wire. This shifts focus to the VM side...
By default, all unprivileged containers map to the same host range (typically 100000:65536). The actual isolation relies on multiple kernel layers, not just UIDs:
1. Mount namespaces: each container has its own filesystem view. A process inside...
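As a sketch of what that default mapping looks like on the host (the container ID 101 below is hypothetical, and PVE applies this default mapping itself rather than writing it into the config):

```
# /etc/subuid and /etc/subgid on a stock PVE host typically contain:
root:100000:65536

# which is equivalent to these lxc.idmap entries in
# an unprivileged container's config (e.g. /etc/pve/lxc/101.conf):
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
```

So UID 0 inside the container is UID 100000 on the host for every container using the default range, which is why the extra namespace layers matter.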
Speaking from an enterprise perspective, this is something I know several companies do with products other than PBS: keep the backup unencrypted locally (for fast backups), and then copy it encrypted to a hosting partner, a compute/cloud provider...
Regarding live migration: yes.
Right, and that's exactly the node that just broke.
Well, as already mentioned in #2: you lose the data since the last replication.
But yes, HA restarts this VM on the surviving node. Automatically!
In my case I had two problems:
1. I had assigned the server port to a unique zone that didn't have the correct routing/firewall settings configured.
2. The port to the gateway was assigned an Ethernet Port Profile that didn't allow tagged VLANs.
Once...
I admit I'm not very experienced with Proxmox networking, so at the moment I'm just comparing your config with the similar one at
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_network_bond
- section "Example: Use a bond as the...
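For reference, the pattern from that section looks roughly like the sketch below (the interface names eno1/eno2 and the 192.0.2.x addresses are placeholders, not taken from your config):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The key point is that the bond itself stays `manual` with no address; the bridge on top of it carries the IP.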
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory'...
The output of cat /etc/network/interfaces would give more details :)
Of course not as a screenshot, but as text in the CODE tags (using this </> button above).
This is as expected. You need at least 80% of the remotes to have a basic subscription to not get these messages; see this post by a staff member:
https://forum.proxmox.com/threads/proxmox-datacenter-manager-1-0-stable.177321/post-821945
My guess is...
All I want is for 2 to 3 VMs to keep running if a node fails. Based on the last replication, and ideally automatically.
Is there a way to set this up?
If you want real HA, there's no way around shared storage. With ZFS replication you always have the delta between syncs, and in the worst case you have to move the VM/LXC config over manually by hand.
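That said, you can get close without shared storage by combining storage replication with an HA resource. As a sketch of what ends up in the cluster config (VM ID 100 and node name pve2 are placeholders; normally you'd create these via the GUI, pvesr, or ha-manager rather than editing the files):

```
# /etc/pve/replication.cfg -- replicate VM 100 to node pve2 every 15 minutes
local: 100-0
    target pve2
    schedule */15

# /etc/pve/ha/resources.cfg -- let HA restart VM 100 on a surviving node
vm: 100
    state started
```

On node failure, HA then restarts the VM from the replicated disk state, i.e. you still lose whatever changed since the last sync.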
Welcome, @Skamanda
Sounds like that could be the reason. Are these bridges on the same network, by chance?
What are the network settings, especially for these bridges?
No, migrating between nodes and back should work. For the re-setup, migrating the guests off the node, removing it from the cluster, and reinstalling is probably the best course of action...