Since the Proxmox VE 9 release, and I think in the very latest 8.4 as well, there is now the pve-network-interface-pinning tool. This makes it a lot easier to pin NICs to a specific name. And you can even choose a more fitting name. For example enphys0...
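If you want to try it, the basic invocation looks roughly like this (a sketch; check the tool's --help for the exact subcommands and options your version offers, including how to set custom names):

```
# Pin the current NIC names so they stay stable across reboots and kernel updates
pve-network-interface-pinning generate
```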
Hey, are all nodes running on Proxmox VE 9 by now?
If so, do you see files for all guests (VMs and CTs) on all hosts in the /var/lib/rrdcached/db/pve-vm-9.0 directory?
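To compare quickly, something like this on each node should do (sketch):

```
# Count the per-guest RRD files each node sees in the new PVE 9 metrics directory
ls /var/lib/rrdcached/db/pve-vm-9.0 | wc -l
```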
Interesting, even though you set a size/min_size of 2/2 (3/2 would be better, but needs more space), many PGs currently only have one replica o_O.
All affected PGs want to be on OSD 5 with one replica, but apparently can't.
Have you tried...
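A few standard Ceph commands to narrow it down (sketch; adjust pool names to yours):

```
ceph health detail        # shows which PGs are undersized/degraded and why
ceph pg ls undersized     # PGs that are missing replicas right now
ceph osd df tree          # is OSD 5 down, out, or simply full?
```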
Not that I am aware of, but others might know more. Ideally you could contribute an integration with your DNS provider upstream to acme.sh. Then it will also be available in Proxmox VE.
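The plugin interface is fairly small; a skeleton following acme.sh's dnsapi convention looks roughly like this ("myprovider" is a placeholder name):

```sh
#!/usr/bin/env sh
# dnsapi/dns_myprovider.sh -- hypothetical skeleton; acme.sh discovers the
# functions by their dns_<name>_add / dns_<name>_rm naming convention

# Called to create the TXT record for the dns-01 challenge
# Usage: dns_myprovider_add _acme-challenge.www.example.com "txt-record-value"
dns_myprovider_add() {
  fulldomain=$1
  txtvalue=$2
  # call your DNS provider's API here to create the TXT record
  return 0
}

# Called to remove the TXT record after validation
dns_myprovider_rm() {
  fulldomain=$1
  txtvalue=$2
  # call your DNS provider's API here to delete the TXT record
  return 0
}
```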
Cool that it worked!
Do rename the RBD image though to reflect the new VMID in the name! Otherwise this could have unintended side effects, as Proxmox VE uses the name to decide which guest a disk image belongs to! Worst case, you delete the...
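Assuming an image that moved from VMID 100 to 101 (hypothetical IDs and pool name), the rename would look roughly like:

```
# Rename the image on the Ceph side
rbd rename mypool/vm-100-disk-0 mypool/vm-101-disk-0
# Update the disk reference in /etc/pve/qemu-server/101.conf accordingly,
# then let PVE pick up the change
qm rescan --vmid 101
```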
Hello, thanks for your reply. When I made this container 4-5 years ago, I did not expect to increase the mountpoint size this much (nor that I would need to migrate it one day :) ). I need to move it to a VM. I thought that moving to a VM involved...
Phew, overall I would consider whether switching everything over to a VM might be the better option. Then moving a disk image between any storages while the VM is running is no problem at all.
LXC containers cannot be live migrated to other...
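For a VM, the online move is a one-liner (sketch; VMID, disk and storage names are placeholders):

```
# Move scsi0 of VM 100 to another storage while the VM keeps running;
# --delete 1 removes the old copy once the move has succeeded
qm disk move 100 scsi0 target-storage --delete 1
```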
What I do in some personal infra is the following:
2x PVE nodes with local ZFS storage (same name)
1x PBS + PVE side by side bare metal.
The 2x PVE nodes are clustered. To be able to use HA I make sure that the VMs all have the Replication...
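Setting up such a replication job on the CLI looks roughly like this (sketch; "pve2" and VMID 100 are placeholders, the GUI under Datacenter → Replication does the same):

```
# Replicate guest 100 to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
pvesr status    # verify the job runs
```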
Memory accounting gets complicated very quickly once you peek a bit behind the curtains.
First off, a VM doesn't necessarily use all its memory right after boot; Linux VMs that don't need most of their memory are one example. With the new line in the...
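You can see that discrepancy yourself by comparing what the guest reports with the QEMU process on the host (VMID 100 is a placeholder):

```
# Inside the guest: memory usage as the guest kernel sees it
free -h
# On the PVE host: resident set size of the corresponding QEMU process
ps -o rss= -p "$(cat /var/run/qemu-server/100.pid)"
```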
That is sometimes not so easy with old industrial machinery that costs many hundreds of thousands to millions of € ;)
Besides the hint from @Falk R. regarding drivers, keep the virtual HW as old as possible for a start:
Machine type: i440
Disk as...
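On the CLI that could look roughly like this (sketch; VMID, machine version and storage name are examples):

```
# Conservative virtual hardware for an old guest OS
qm set 100 --machine pc-i440fx-7.2            # i440fx machine type, version is an example
qm set 100 --ide0 local-lvm:vm-100-disk-0     # IDE instead of VirtIO/SCSI to start with
```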
With the little info I have, I suspect that a check of the LVM or the file system was still running. At least the last, smaller screenshot looks very much like that.
This is regarding the VM summary panel on Proxmox VE. In your screenshots you have "Memory Usage" with the line chart and one line below that is the "Host memory usage". Those values are the same in both screenshots → Proxmox VE did not get any...
Looks okay. If you don't use Ceph, whether as an HCI server or to connect the host to an external Ceph cluster, you don't strictly need the Ceph repositories either. Otherwise, just pick the latest Ceph no-subscription repo...
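For reference, the no-subscription Ceph repo on PVE 9 (Debian trixie) should look roughly like this, assuming the deb822 format PVE 9 uses (release and keyring path may differ on your setup):

```
# /etc/apt/sources.list.d/ceph.sources
Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
```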
hmm, yeah, VLAN 1 is the default "no VLAN configured" one. If you only have vmbr0 and assign the VLAN tag on the VM's virtual NIC, any packet leaving the node should be tagged accordingly.
Maybe run some tcpdump on vmbr0, or the underlying...
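Something along these lines should show whether frames are actually tagged (interface names are examples):

```
# -e prints link-level headers incl. the 802.1Q tag; 'vlan' matches tagged frames only
tcpdump -i vmbr0 -e -nn vlan
tcpdump -i eno1 -e -nn vlan    # the physical uplink, where the tags must be present
```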
Ah okay. I understood it as them wanting to see which VMs used to be on that node. In that case, checking the task logs or syslogs for recovery tasks should work to some extent.
Or you fetch the cluster/resources every minute or two, and can then diff...
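A sketch of that polling approach (assumes jq is installed):

```
# Dump the current VM→node placement; diff successive snapshots to spot recoveries
pvesh get /cluster/resources --type vm --output-format json \
  | jq -r '.[] | "\(.vmid) \(.node)"' | sort > "/tmp/placement-$(date +%s).txt"
```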
What do you want to achieve? Some VMs that should be placed in VLAN 3? The host should not be placed in any VLAN?
Then, on the switch port that your PVE host is connected to, VLAN 3 should be tagged.
If your switch supports it, configure an...
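On the PVE side, a VLAN-aware bridge handles that nicely; a sketch of /etc/network/interfaces (addresses and NIC name are examples):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24        # host IP stays untagged on the native VLAN
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094           # allow guest tags, e.g. VLAN 3 set on the VM's NIC
```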