OK, so I managed to get into the graphical part of the installation, but now I'm facing severe issues: the VM has a hard time talking to its HDD and to the CD-ROM holding the installer:
See below:
What I know works:
So far I got...
@Skye0 thanks for the update. I'm glad you managed it :)
Would you mind sharing your scripts here? Your solution would surely be useful for other readers, too.
Today I switched to the dual-API strategy and was able to finish my project that way.
So now I get healthchecks notifications if backups go stale.
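For readers wanting something similar, here is a minimal sketch of wrapping a backup job with Healthchecks-style pings. The check UUID, VMID, and storage name are placeholders, and the `vzdump` arguments are illustrative; `hc-ping.com` is the public Healthchecks.io ping endpoint with its `/start` and `/fail` suffixes:

```shell
#!/bin/sh
# Placeholder: the ping URL of your check in Healthchecks
HC_URL="https://hc-ping.com/your-check-uuid"

# Signal that the backup job has started
curl -fsS -m 10 --retry 3 "${HC_URL}/start" >/dev/null

# Run the backup (VMID and storage are placeholders)
if vzdump 100 --storage local --mode snapshot; then
    curl -fsS -m 10 --retry 3 "${HC_URL}" >/dev/null        # success ping
else
    curl -fsS -m 10 --retry 3 "${HC_URL}/fail" >/dev/null   # failure ping
fi
```

If the success ping stops arriving, Healthchecks marks the check as late and sends the "gone stale" notification.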
Because ProxmoxVE historically was never really designed for multiple clusters using the same shared storage. The implicit base assumption has always been: 1 cluster = 1 shared storage. It only really becomes an issue with the...
“Status stopped” means that the task is not running. There is no rating at this point. A rating appears after the “:”.
Status stopped - OK
Status stopped - interrupted by signal
Status running
You could also say “successfully interrupted by...
Disclaimer: I don't run anything like that myself.
The coordinating instance within a cluster is the PVE middleware, not the target. But one cluster knows nothing about the other. And PDM presumably won't help there either. It is in...
They will forbid possessing a hammer, as it can be used by teens and you can kill people with it. Oh, wait..., selling/owning weapons is OK.
This is completely off-topic in this forum - no further reply necessary... :-(
Well... that's not helpful.
To get helpful hints you need to describe your setup and tell us exactly a) what you wanted to do, b) what you actually did, and c) which error message you got.
If you feel it is a bug, it is recommended to go...
Good morning, Proxmox colleagues.
Today I'm starting with a test LUN in the new Pool B, which the customer's admin has already created:
Overcommit: enabled
Pool Overcommitted: False
I created a 200 GB test volume in it, and after some...
Since all your VM disks are stored on shared NFS, the VMs themselves are still intact — only their configuration is tied to the lost node.
Instead of recreating each VM manually, you can simply move the VM configuration files from the lost node...
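As a sketch of that move, assuming the standard pmxcfs layout: VM configs live in the cluster filesystem under `/etc/pve/nodes/<node>/qemu-server/`, and moving a `.conf` file reassigns the VM to the target node. The node names "deadnode"/"livenode" and VMID 100 are placeholders; only do this once the lost node is permanently gone, and from a surviving, quorate node:

```shell
# Run on a surviving, quorate cluster node.
# Moving the config inside /etc/pve reassigns the VM to "livenode".
mv /etc/pve/nodes/deadnode/qemu-server/100.conf \
   /etc/pve/nodes/livenode/qemu-server/
```

Since the disks are on shared NFS, the VM can then be started on the new node right away.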
That's an absolutely legitimate use case for a qdevice then, since we are now talking about a stretched cluster instead of two separate ones
;) A setup like this is described on https://pve.proxmox.com/wiki/Stretch_Cluster (although it's mostly about Ceph...
First check: pvecm status, as you need to have quorum to start VMs - because a backup does start all "turned-off" VMs in a "pause" state. (I am not sure about running VMs...)
That's not a symptom I can confirm.
You should post the actual...
Back up (at least) the "storage.cfg" and above all the encryption key beforehand, otherwise you may have trouble getting started when rebuilding...
The main corosync link is connected to each node via one physical NIC, and the switch port is configured as a normal access port.
The backup corosync link is connected to each node via a separate 2x25G bond. This bond is used for storage and backup...
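For reference, a two-link setup like that shows up in `/etc/pve/corosync.conf` as two ring addresses per node. This is only a sketch: node name, IDs, and IPs are placeholders, and there is one `node { }` block per cluster member:

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.0.2.11     # main corosync network (access-port NIC)
    ring1_addr: 198.51.100.11  # backup link over the 2x25G bond
  }
}
```

Corosync (kronosnet) then fails over between the links on its own if the primary network goes down.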
If you happen to have at least three nodes in a cluster, with an odd count, you shouldn't add a qdevice:
I don't see a problem. I would avoid a setup where you assign the same IP in both clusters or connect from one network to the other...