Can you disable the HA resource, wait ~10 minutes until all LRMs are idle, and then do the following, please (rough command sketch after the list)? With no active LRM, the nodes won't fence.
1. get pvecm status while all nodes are up and working
2. disconnect one of the nodes
3. get...
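Roughly, that could look like this on the CLI (the resource ID is a placeholder, adjust to your setup; disabling the HA resource also works via the GUI):

```
# disable the HA resource so the LRMs can go idle and no node will fence
ha-manager set vm:<vmid> --state disabled

# after ~10 minutes, check that all LRMs report "idle"
ha-manager status

# 1. quorum/corosync status while all nodes are up
pvecm status

# 2. disconnect one node, then
# 3. run pvecm status again on the remaining nodes and compare
```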
not great, because the Ceph Cluster network is only used for the replication between the OSDs. Everything else, including IO from the guests to their virtual disks, goes via the Ceph Public network. Which is probably why you see the rather meh...
Does the Ceph network actually provide 10Gbit? Check with ethtool whether the NICs autonegotiated to the expected speed, and if so, run some iperf tests between the nodes to see how much bandwidth it can provide (rough sketch below).
Are both the Ceph Public and Ceph Cluster...
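Something along these lines, for example (interface name and IP are placeholders; iperf3 has to be installed on the nodes):

```
# check the negotiated link speed of the Ceph NIC
ethtool <ceph-iface> | grep -i speed

# on node A: start an iperf3 server
iperf3 -s

# on node B: measure bandwidth towards node A over the Ceph network
iperf3 -c <node-a-ceph-ip> -t 30
```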
Hmm, can you post the output of pveversion -v on the host where you have/had guest 130? Ideally inside of code tags (use the formatting options in the buttons above, </> for a code block).
Is guest 130 a VM or a CT?
okay. that is curious. are all guests powered on or are some powered off?
For example, guest VMID 130 in that error message from the first post. Was it on or off at that time?
Since this just came up in the English forum as well, here is my answer there with a few details regarding the now simpler pinning: https://forum.proxmox.com/threads/network-drops-on-boot.65210/#post-793255
Since the Proxmox VE 9 release, and I think in the very latest 8.4, there is now the pve-network-interface-pinning tool. This makes it a lot easier to pin NICs to a specific name. And you can even choose a more fitting name. For example enphys0...
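A rough sketch of how that looks (from memory, so treat the exact subcommand and options as an assumption and check the man page / admin guide):

```
# generate pinning (systemd .link) files for the NICs; by default they get
# names like nic0, nic1, ...
pve-network-interface-pinning generate

# afterwards, double-check /etc/network/interfaces (and any bridge ports)
# so everything references the new names before rebooting
```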
Hey, are all nodes running on Proxmox VE 9 by now?
If so, do you see files for all guests (VMs and CTs) on all hosts in the /var/lib/rrdcached/db/pve-vm-9.0 directory?
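A quick way to cross-check on each node (the pvesh call is just one example of getting the list of known VMIDs to compare against):

```
# RRD metric files that exist for guests
ls /var/lib/rrdcached/db/pve-vm-9.0/

# all guests the cluster knows about
pvesh get /cluster/resources --type vm --output-format json | grep -o '"vmid":[0-9]*'
```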
Interesting, even though you set a size/min_size of 2/2 (better would be 3/2, but that needs more space), many PGs currently only have one replica o_O.
All affected PGs want to be on OSD 5 with one replica, but apparently can't.
Have you tried...
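To dig into why the second replica can't be placed, the usual Ceph commands would be something like:

```
# which PGs are degraded/undersized and where they map to
ceph health detail
ceph pg ls undersized

# OSD utilization and CRUSH tree (look for down/out or full OSDs)
ceph osd df tree

# pool size/min_size and other settings
ceph osd pool ls detail
```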
not that I am aware of, but others might know more. Ideally you could contribute an integration for your DNS provider upstream to acme.sh. Then it will also be available in Proxmox VE.
Cool that it worked!
Do rename the RBD image though to reflect the new VMID in the name! Otherwise this could have unintended side effects, as Proxmox VE uses the name to decide which guest a disk image belongs to! Worst case, you delete the...
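A minimal sketch of the rename (pool name and VMIDs are placeholders; stop the guest first and then point its config at the new image name):

```
# rename the RBD image so the name matches the new owner VMID
rbd -p <pool> rename vm-<old-vmid>-disk-0 vm-<new-vmid>-disk-0

# then edit the disk line in the guest config so it references
# <storage>:vm-<new-vmid>-disk-0
```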
Hello, thanks for your reply. When I made this container 4-5 years ago, I did not expect to increase the mountpoint size this much (nor to need to migrate it one day :) ), and now I need to move it to a VM. I thought that moving to a VM involved...
puh, overall I would consider whether switching everything over to a VM might not be the better option, because then moving a disk image between storages while the VM is running is no problem at all (see the sketch after this post).
LXC containers cannot be live migrated to other...
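For comparison, with a VM a live storage move is a single command (VMID, disk and target storage are placeholders):

```
# move the disk to another storage while the VM keeps running,
# removing the old copy once done
qm disk move <vmid> scsi0 <target-storage> --delete 1
```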
What I do in some personal infra is the following:
2x PVE nodes with local ZFS storage (same name)
1x PBS + PVE side by side on bare metal.
The 2x PVE nodes are clustered. To be able to use HA, I make sure that the VMs all have the Replication...
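The replication jobs themselves can be set up in the GUI or with pvesr; a minimal sketch (job ID, target node and schedule are placeholders):

```
# replicate guest 100 to the other node every 15 minutes
pvesr create-local-job 100-0 <other-node> --schedule '*/15'

# check how the jobs are doing
pvesr status
```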
Memory accounting gets complicated very quickly once you peek a bit behind the curtains.
First off, a VM doesn't necessarily use all of its memory right after boot; take Linux VMs that don't need most of their memory, for example. With the new line in the...
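One way to compare what the guest thinks with what the host/balloon driver reports (VMID is a placeholder; the balloon driver needs to be active in the guest):

```
# inside the guest: what the guest kernel considers used/free
free -h

# on the host: query the balloon driver through the QEMU monitor
qm monitor <vmid>
#   info balloon
```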