I can see why #1 would happen, and that is acceptable. However, #2 could be a problem. Let's explore that scenario.
Node A is HA primary and replicating to B.
Replication hangs, crashes, or otherwise fails, and nobody notices for days. The data on B becomes very stale.
Node A crashes.
Node B becomes HA...
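One mitigation for scenario #2 is to monitor replication health instead of trusting it silently. A minimal sketch of such a check, assuming `pvesr status` output resembling the sample line below (the column layout here is an assumption; verify it against your own node's output before relying on it):

```shell
#!/bin/sh
# Sketch: flag a replication job with failed runs. SAMPLE stands in for
# one line of `pvesr status` output; the column position used here
# (7 = fail count) is an assumption, not a documented contract.
SAMPLE="100-0 Yes local/nodeB 2024-01-01_00:00:00 2024-01-01_00:15:00 3.2 1 error"
FAILS=$(echo "$SAMPLE" | awk '{print $7}')
if [ "$FAILS" -gt 0 ]; then
    STATUS="ALERT: replication job has $FAILS failed runs"
else
    STATUS="replication healthy"
fi
echo "$STATUS"
```

Wiring something like this into cron with a mail alert would have caught the days-old replica before node A failed.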
I am using PVE 8. I have three nodes, A, B, and C. Guest VMs on A were being replicated every 15 minutes to B. Node A failed.
Per the documentation here: https://pve.proxmox.com/wiki/Storage_Replication
To recover...
move both guest configuration files from the origin node A to node B:
# mv...
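The move itself is just a rename inside the cluster filesystem. Here is a runnable sketch of the path pattern, sandboxed in a temp directory so it can be tried anywhere (hypothetical VMID 100; on a real node the root is /etc/pve and the command must be run on a quorate node):

```shell
#!/bin/sh
# Sandboxed illustration of moving a guest config from node A to node B.
# On a real cluster PVE_ROOT would be /etc/pve; VMID 100 is made up.
PVE_ROOT="$(mktemp -d)"
VMID=100
SRC="$PVE_ROOT/nodes/A/qemu-server"
DST="$PVE_ROOT/nodes/B/qemu-server"
mkdir -p "$SRC" "$DST"
touch "$SRC/$VMID.conf"                 # stand-in for the real guest config
mv "$SRC/$VMID.conf" "$DST/$VMID.conf"  # after this, node B owns the guest
```

Once the config lives under node B's directory, the guest shows up on B and can be started from the last replicated disk state.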
We recently enabled AD authentication on our PVE cluster. It seems to work fine, as we are now able to login and maintain the Proxmox servers using our AD domain credentials. However, one thing that does not work is the shell.
When we click the shell icon, we receive an error like:
Connection...
Trying to upgrade from PVE 7.1.4 to 7.4. Getting the following error:
root@vmhost51b:~# apt-get update
Hit:1 http://ftp.us.debian.org/debian bullseye InRelease
Hit:2 http://ftp.us.debian.org/debian bullseye-updates InRelease
Hit:3 https://updates.atomicorp.com/channels/atomic/debian...
Hi Stoiko,
Thanks for the feedback. When I booted the Windows Server 2022 guest after shrinking the disk, I did not get any error messages about a missing GPT table, so hopefully that is a good sign?
Speaking as a new Proxmox fan, I wish you guys would enable the shrink option through the GUI...
The Proxmox CLI says, "shrinking disks is not supported." I did it manually and it seems to have worked fine, but maybe I made a mistake and I just don't realize it.
I have PVE 8.0.3 running a Windows Server 2022 guest with a 2 TB drive 0 and a 6 TB drive 1. Drive 0 is partitioned as a 50 GB...
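For reference, the usual manual order of operations is: shrink the filesystem inside the guest first, then the partition, and only then the backing volume. A scaled-down sketch using a plain raw file (the sizes and filename are invented; on PVE the volume might instead be an LVM LV or a ZFS zvol, shrunk with `lvreduce` or `zfs set volsize` rather than `truncate`):

```shell
#!/bin/sh
# Scaled-down sketch of shrinking a raw disk image. The real danger is
# cutting the image below the end of the last partition, which is why
# the in-guest filesystem/partition shrink must come first.
IMG="$(mktemp)"
truncate -s 2M "$IMG"    # stand-in for the original, larger disk
# ... inside the guest: shrink the filesystem, then the partition ...
truncate -s 1M "$IMG"    # only then shrink the backing image
SIZE=$(stat -c %s "$IMG")
echo "$SIZE"
```

If the image is cut past the last partition boundary, the GPT backup header at the end of the disk is also destroyed, which is exactly the "missing GPT table" symptom mentioned above.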
I only had console access to the server, so I could not copy and paste the output. I also didn't want to type the whole thing out by hand to post it here, so I took a picture of it and ran it through an online OCR process. The OCR mistook a lot of the zeroes for the letter "O" and I had to change them...
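For what it's worth, the digit-adjacent substitutions can be automated instead of fixed by hand; a small sed sketch (the sample string is invented):

```shell
#!/bin/sh
# Replace the letter O with zero only when it sits next to a digit,
# which leaves ordinary words like "OK" alone. Sample input is invented.
FIXED=$(echo "2O48 blocks read, OK" | sed -E 's/O([0-9])/0\1/g; s/([0-9])O/\10/g')
echo "$FIXED"
```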
Thank you for pointing that out. I thought the destination was zpool0, which has plenty of space. The guest has 2 disks. One of them is 6TB and that one migrates fine because the destination is zpool0. The OS disk is the problem because it is trying to go to local-lvm.
That said, I recall that...
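If the goal is to land the OS disk on zpool0 as well, `qm migrate` can redirect storage explicitly with `--targetstorage`; passing a single storage ID maps every source storage to it on the target node. A sketch (VMID 100 and the node name are placeholders; the command is only echoed here, since it needs a live cluster):

```shell
#!/bin/sh
# Placeholder VMID and node name; on a real cluster, run the command
# instead of echoing it.
CMD="qm migrate 100 nodeB --targetstorage zpool0"
echo "$CMD"
```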