Is it possible to have a per-VM shutdown policy that can override the default one?
I want most VMs to migrate, but there are some VMs that are already highly available within the software and I would prefer they shut down along with the node.
I wasn't able to find this option anywhere, but I...
I am imagining an NVR VM which can be live-migrated between hosts and pick up vGPUs from the new host as easily as vCPUs.
Would this be possible in the current state?
SOLVED: I did not choose the Cluster network when joining the cluster.
The wizard probably shouldn't let you continue joining the cluster without making that choice, but it's my fault.
Okay I was able to remove vmhost3 from the cluster using the following instructions:
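# stop corosync and the cluster filesystem service on the node being removed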
systemctl stop pve-cluster corosync
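# restart pmxcfs in local mode so /etc/pve stays writable without quorum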
pmxcfs -l
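# wipe the corosync configuration so this node no longer tries to rejoin the cluster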
rm /etc/corosync/*
rm /etc/pve/corosync.conf
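# stop the local-mode pmxcfs and bring the cluster filesystem back up normally (now standalone)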
killall pmxcfs
systemctl start pve-cluster
The original cluster is now working again... I will try and remove all...
I had a nicely working 2-node cluster (vmhost1 and vmhost2).
I installed a 3rd node (vmhost3) and used the join information to add the node, but the cluster is broken:
vmhost2 syslog:
Jun 01 07:48:21 vmhost2 pvescheduler[2452666]: replication: cfs-lock 'file-replication_cfg' error: got lock...
The safest way is to create a VPN and allow access to the subnet your Proxmox hosts sit on.
The OpenVPN appliance (Access Server) is free for up to two concurrent users and is very easy to set up in a virtual machine on Proxmox if you so desire. Of course, if this user needs to restart the Proxmox host, then the...
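If you go with plain community OpenVPN instead of the Access Server appliance, the server-side piece is essentially just a route push for the management subnet. Something like the following, where the VPN pool and the 192.168.1.0/24 subnet are placeholders for your own addressing:
# example server.conf fragment (community OpenVPN, not the Access Server appliance)
server 10.8.0.0 255.255.255.0
push "route 192.168.1.0 255.255.255.0"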
Hmm, I've never tried to install Windows 10 as a VM. What drivers specifically is it complaining about? If you're installing to a virtio disk, you need the virtio storage drivers; if you're using a virtio network card, you need the NetKVM driver.
You can just point whatever drivers are required...
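If it helps, one common way to get the drivers in front of the Windows installer is to attach the virtio-win ISO as an extra CD-ROM so you can browse to it during setup. The VM ID 100 and the ISO name below are just examples:
qm set 100 --ide3 local:iso/virtio-win.iso,media=cdrom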
An interesting thought: if you guys added an "Update" button which essentially does apt-get update && apt-get upgrade && apt-get dist-upgrade, with a notice to restart, would that solve a lot of problems for people who don't know, and for your own staff who continually have to answer this same...
You can only shrink raw files.
Backup by copying the disk (if you have space...)
Make sure your OS partition is not any larger than the size you're shrinking to. In your case it shouldn't be.
If you use qcow2, first you have to convert:
qemu-img convert vm-100.qcow2 vm-100.raw
Then...
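For reference, the shrink and convert-back steps look roughly like this (the 32G target size and the filenames are examples):
# shrink the raw image to the target size (anything beyond it is discarded)
qemu-img resize --shrink vm-100.raw 32G
# convert back to qcow2 if that is what the VM was using
qemu-img convert -O qcow2 vm-100.raw vm-100.qcow2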
I was able to do it, though I don't fully remember what I did because it was a stressful situation. I did have an issue at first where I had two nodes showing up in the GUI, one with the old hostname and one with the new hostname, but with nothing working... then I went through and changed everything I could...
Thanks for all the help so far.
I'm planning on changing the iSCSI host's IP as well.
Is the recommended action to change the IP address in /etc/pve/storage.cfg?
I also saw this thread with someone who successfully removed and re-added the iSCSI target and LVM group via GUI.
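For reference, the relevant entry in /etc/pve/storage.cfg looks something like the example below (storage name, target, and address are placeholders); the portal line is the one that would need the new IP:
iscsi: freenas-iscsi
        portal 192.168.1.50
        target iqn.2005-10.org.freenas.ctl:proxmox
        content none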
Hey wolfgang, thank you so much for the assistance.
I do have one question.
When I check the corosync.conf file, I notice an IP address which, from what I recall, I have not assigned myself:
totem {
  cluster_name: Cluster1
  config_version: 6
  ip_version: ipv4
  secauth: on
  version: 2...
I'd like to change the IP addresses of our two nodes (2-node cluster).
I want to make sure I'm not missing anything critical.
Is it as simple as changing each node's Network Settings via the GUI and rebooting?
For some reason this just started happening to me as well a few days ago. I didn't update Proxmox at all, so I can't imagine it's a Proxmox issue.
TASK ERROR: command '/bin/nc6 -l -p 5900 -w 10 -e '/usr/bin/ssh -T -o BatchMode=yes <ip address> /usr/sbin/qm vncproxy 303 2>/dev/null''...
Well I somewhat successfully resized the logical volume by using
lvresize -L 25G /dev/VMStorageGroup/vm-###-disk-1
Within Proxmox it shows 25G, and within the VM it shows the same.
The only odd thing is that if I boot SystemRescueCD and open GParted, I can still see the original...
I'm running more or less the latest install, though I don't think it's pertinent to this question.
There's an iSCSI share (from FreeNAS) that is added at the datacenter level and shared across a cluster of two hosts. I created all the VM disks on this LVM.
pvdisplay shows:
--- Physical volume ---
PV Name...