I ran into this exact situation 3 times in the past on my @home cluster. I have 2 LANs: one for the public network (including backup and corosync) and one used by Ceph.
Since I added a second corosync ring on the Ceph LAN, it has never happened again. And in my case, it was clearly backups causing...
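For reference, the second ring boils down to a second interface block plus a ring1_addr per node in /etc/pve/corosync.conf. Just a sketch for a corosync 2.x setup: addresses and the node name below are placeholders, and remember to bump config_version when you edit the file.
totem {
  ...
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.10.0
  }
}
nodelist {
  node {
    name: pve1
    ring0_addr: 192.168.1.11
    ring1_addr: 10.10.10.11
    ...
  }
}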
Just back up your template with the built-in tool, transfer the .lzo file to the dump directory of the storage dedicated to the second node (if not shared by both nodes) and restore it from the second node.
Your template will then be available on the second node too.
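From the command line, the same steps look roughly like this (VMID, storage name and paths are only examples):
# on node1: back up the template to a local storage (here 'local')
vzdump 9000 --storage local --compress lzo
# transfer the archive to node2's dump directory
scp /var/lib/vz/dump/vzdump-qemu-9000-*.vma.lzo root@node2:/var/lib/vz/dump/
# on node2: restore it under a free VMID (9001 here)
qmrestore /var/lib/vz/dump/vzdump-qemu-9000-*.vma.lzo 9001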
Thanks for your answer Dominik,
What if I just change "Request State" from "started" to "ignored" on all VMs? If it does the trick (all VMs are flagged "start at boot", so they'll start on their last node when power comes back), how do I change the state in bulk?
# sed -i...
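An alternative that avoids editing /etc/pve/ha/resources.cfg by hand would be to loop over the HA services with ha-manager. Just a sketch, assuming the default "service vm:XXX" lines in the ha-manager status output:
# flag every HA-managed resource as 'ignored'; revert with --state started afterwards
for sid in $(ha-manager status | awk '/^service/ { print $2 }'); do
    ha-manager set "$sid" --state ignored
done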
Hello,
I have a 3-node PM 5.2 cluster running about 20 HA VMs (Ceph storage).
Tomorrow, I have a planned power outage of about 4 hours and my UPS won't be able to last that long.
Two nodes are running apcupsd (one server and one client) and one node nut (upsmon).
I'd like to "suspend" the HA...
Thanks !
You're right, it looks like those disks are leftovers from my previous interrupted tests, when I missed the "storage location" while creating CTs.
Have a nice day.
Hello,
I'm using PM on a 3-node cluster @home. So far only with VMs, but I wanted to test containers.
I've created a simple deb9 container on shared (NFS) storage, but when I want to migrate it, it tries to migrate the disk to local storage first ...
Container definition...
Got the same issue last week with a backup schedule for 3 HA VMs. All 3 were "error & locked", the GUI was showing question marks for nodes and VMs. And same as you, no log for those backups, only a .tmp directory (including a .conf file) and an empty .dat file in my case.
I suspected my NFS server, where the...
Correct. PID stands for process ID.
And in this case, 3798 is the PID of the process (kvm) handling my 101/vmaccess1 VM.
The first command (ps +options) is used to display processes running on the host. The second (qm) is the Qemu management tool. Both outputs are passed to awk, which is a...
I guess the VM's PID is the PID of the KVM process handling a specific VM. You can get it using ps or qm:
VM ID:
root@pve3:~# ps -ef | awk '/id 101/ && ! /awk \// { print $2 }'
3798
root@pve3:~# qm list | awk '$1 == "101" { print $NF }'
3798
VM NAME:
root@pve3:~# ps -ef | awk '/name vmaccess1/ &&...
You could install the samba package on your Proxmox node and share the drive for your VMs, but I'm not sure it is supported.
The best solution, for me, is to virtualize your Samba server: in a simple container running only the Samba server, for instance, or in a VM based on a NAS-oriented distribution...
Hello,
Select your VM, in the Hardware tab select the disk you want to move and click the "Move disk" button. Select your LVM target storage and click apply. This can be done online, and you can choose to keep (default) or delete the source disk.
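The CLI equivalent, if you prefer (VMID, disk name and target storage below are just examples):
# move virtio0 of VM 101 to the LVM storage, keeping the source disk by default
qm move_disk 101 virtio0 my-lvm
# add --delete 1 to drop the source disk once the move is done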
I'd propose RDP, as it's the native Windows protocol for remote desktop, followed by VNC (choose your flavor based on your environment: TightVNC, UltraVNC, WhateverVNC ...)
Are you trying to connect as root? Have you tried entering your password in the "user name:" box to validate you're typing the right one (maybe a wrong keyboard/language setting ...)?
I'm still waiting for a snapshot enable/disable flag at the vdisk level, like the existing "no backup" or "skip replication" ones:
https://forum.proxmox.com/threads/add-snapshot-0-option-to-hard-disk.43780/#post-209795
Hello,
1.- Redundancy is *not* backup. If you modify/delete/corrupt something, RAID 1 won't help you.
2.- If a drive fails in a RAID 1 metadevice, it becomes degraded but can hopefully still operate. You'll then have to identify the failed disk and replace it, then start the resync of the metadevice...
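For a Linux software RAID, the replacement roughly looks like this with mdadm (array and device names are examples only):
mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dead member as failed
mdadm --manage /dev/md0 --remove /dev/sdb1   # remove it from the array
# physically replace the disk and partition it, then:
mdadm --manage /dev/md0 --add /dev/sdc1      # add the new member, the resync starts
cat /proc/mdstat                             # monitor the resync progress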
Select the VM -> Backup -> Backup now (to whatever storage is shared by the single node and the clustered nodes).
Once it's done for all VMs, remove them all, then join the node to the cluster. Then select the shared storage from your newly added host's view -> Content -> select the backup file -> Restore.
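If you prefer the shell, the same backup/restore can be scripted, assuming an NFS storage called "shared-nfs" (storage name, archive name and VMID are examples):
# on the standalone node, before removing the VM
vzdump 101 --storage shared-nfs --compress lzo
# on the new cluster member, after the join: restore from the shared storage's dump directory
qmrestore /mnt/pve/shared-nfs/dump/vzdump-qemu-101-*.vma.lzo 101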