Modifying /usr/share/perl5/PVE/LXC/Setup/Debian.pm is not needed; installing the latest upgrade for pve-containers is enough. But something more explicit than "script exited with status 25" would have been better. I had to strace the command to find out what the error was, and only after that I got...
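For anyone hitting the same opaque exit status, a minimal sketch of the two steps described above (the package name is taken from the post, and the pct command plus CTID 100 are placeholders; adjust for your setup):

```shell
# Upgrade the container support package instead of editing Debian.pm by hand.
PKG="pve-container"
apt-get update
apt-get install --only-upgrade "$PKG"

# If a container start still fails with an unexplained exit status, trace
# the command to see the underlying error (CTID 100 is a placeholder).
strace -f -o /tmp/ct-start.trace pct start 100
```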
That one seems to be created by systemd-run, which launches KVM. But at least for me, after a qm shutdown the VMID.scope was still active (nothing in the log, and the command was reported OK), yet every subsequent start gave the misleading Input/output error message.
The command systemctl stop VMID.scope...
I got the exact same problem, and qm stop VMID doesn't work. The problem was that the VMID.scope unit created by systemd-run to launch the VM was still there and considered active (reported as such by systemctl status).
To solve the problem you have to issue the command systemctl stop VMID.scope, then...
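A minimal sketch of the fix described in this thread (VMID 100 is a placeholder; run it on the node that owns the guest):

```shell
VMID=100  # placeholder: substitute the ID of the affected VM

# Check whether a stale scope unit left over from systemd-run is still active.
systemctl status "${VMID}.scope"

# Stop the leftover scope; the next start should then work without the
# misleading Input/output error.
systemctl stop "${VMID}.scope"
qm start "${VMID}"
```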
It took me some time because I could no longer use all 4 blades and had to start using the new hardware. So now a single blade is running standalone with some VMs, and I'm using the other 3 blades to test the cluster. I carefully removed and purged all packages, and cleaned all directories...
I checked everything about it. I also had a multicast problem. But now both are solved: fencing is working (tested with fence_ipmilan) and multicast is working (tested with ssmpingd and asmping), but the cluster is still not working.
If I restart a node (rebooting it or restarting cman), most of the time I...
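For reference, the multicast test mentioned above runs roughly like this (the multicast group, packet count, and node name lama9 are placeholders; ssmpingd listens on one node while asmping probes it from another):

```shell
# On the first node, start the multicast ping responder (runs in the
# foreground):
#   ssmpingd

# On a second node, send 10 ASM probes addressed to the responder node;
# replies confirm multicast works between the two nodes.
asmping -c 10 224.0.2.1 lama9
```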
I want to stop it (during the night) because I have a requirement for a clean and consistent backup, with all database files synced, so I can have a clean restart from a precise point in time without needing any kind of recovery. From what I understand about vzdump in snapshot mode, it takes an image of a...
Hi, I had a working cluster configuration with Proxmox 2.0. I had all four nodes running, online and with rgmanager active.
After upgrading a node to 2.1 the cluster stopped working. So I upgraded all the nodes, but the most I could get was all nodes showing Online, while rgmanager was never...
My idea of HA is to have the VM/CT always running when I want it active. But I'd also like to be able to stop a machine when I want it inactive, without having it restarted on another node by the HA system. That's too much HA.
Does this mean that to do this I have to remove the VM from HA...
Ok, I'll avoid it, but if I need to back up a virtual machine in a stopped state, what must I do?
Is it wise to stop it, make the backup (with a valid mode) and then restart it? That would be fine for me, as long as it does not trigger a migration.
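For what it's worth, vzdump's stop mode performs that stop/backup/restart sequence in a single command: it shuts the guest down cleanly, takes the backup, then starts it again. A minimal sketch (the VMID and storage name are placeholders):

```shell
VMID=100          # placeholder guest ID
STORAGE="local"   # placeholder backup storage name

# "stop" mode shuts the guest down, takes a consistent offline backup,
# then restarts it.
vzdump "$VMID" --mode stop --storage "$STORAGE"
```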
Simone
Hi, I just finished installing and configuring Proxmox 2.1 (by the way, many thanks for your wonderful work) on a cluster of 4 Fujitsu blades. The cluster is up, and all 4 nodes seem OK:
root@lama9:~# pvecm nodes
Node Sts Inc Joined Name
1 M 40 2012-10-20 11:19:27...
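Alongside pvecm nodes, the overall cluster state and quorum can be checked like this (standard pvecm subcommands; the output will of course differ per cluster):

```shell
# Show cluster name, node count, quorum status, and expected votes.
pvecm status

# List member nodes with their state, as in the output above.
pvecm nodes
```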