Well, I tried to set the same numerical priority on all nodes and then restarted one node.
No change: all the migrated VMs went to the same node :-(
I don't know if this is normal behavior or if I'm missing something
Hi! PVE 7.1 here with HA Cluster and CEPH storage
I have 3 nodes running 3 VMs each, all nodes with the same priority. Migration works great.
The HA policy is to migrate when a node enters maintenance mode, and it's doing fine except that it moves all the VMs to the same node
So when I reboot a node, I go from...
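If the goal is to spread the VMs over the surviving nodes instead of piling them onto one, HA groups with different node priorities can steer where each VM lands. A minimal sketch (the group name, node names, and VM ID are placeholders, not from the original post):

```shell
# Sketch: create an HA group that prefers node2 over node3
# (group/node/VM names are assumptions -- adjust to your cluster).
ha-manager groupadd prefer-node2 --nodes "node2:2,node3:1"

# Attach an HA-managed VM to the group; the CRM will then favor
# node2 when the VM's current node goes into maintenance.
ha-manager add vm:101 --group prefer-node2

# Verify the current HA placement
ha-manager status
```

Creating one group per preferred target and splitting the VMs between them should keep them from all migrating to the same node.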
I was looking elsewhere, didn't see that.
In fact, the problems started after a (failed) update, so it seems the kernel was not updated but some other packages were :-/
Do you think a dist-upgrade would fix this?
I have a problem with a proxmox server that shows all LXC containers stopped.
pve-manager/5.3-5/97ae681d (running kernel: 4.4.6-1-pve)
Containers are online and working, but all pct commands fail:
Failed to load config for 108
Failure to retrieve information on...
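A quick way to check whether this is the half-upgraded-node situation mentioned above (pve-manager 5.3 paired with a running 4.4 kernel) is to compare the running kernel with the installed packages. A hedged diagnostic sketch:

```shell
# A PVE 5.x userland talking to a 4.4.x kernel would explain pct
# failing while the containers themselves keep running.
uname -r                           # kernel actually running
dpkg -l 'pve-kernel-*' | grep ^ii  # PVE kernels installed
pveversion -v | head               # userland package versions

# If the kernel is stale, finish the upgrade and reboot into it:
apt-get update && apt-get dist-upgrade
```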
And one year later, I'm facing the same problem!
It happens with the centos7 template, but not with the centos6 template.
I downloaded a centos7 alternative template from https://us.images.linuxcontainers.org/ and got the same result.
Can't find what's wrong :-(
Some time ago we used FSCKFIX=yes in /etc/default/rcS to automatically repair defective filesystems during boot. (https://manpages.debian.org/stretch/initscripts/rcS.5.en.html)
Later, Debian moved to systemd, so we started using systemd-fsck@.service to force a repair on check...
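For reference, systemd-fsck@.service honours two kernel command-line parameters that together give the old FSCKFIX=yes behavior; a sketch of the GRUB side (the "quiet" flag is just the Debian default):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet fsck.mode=force fsck.repair=yes"
# then regenerate the config and reboot:
#   update-grub
```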
Hi! After the last OS upgrade, I need to run "zpool upgrade" on my pools
I'm not sure how that upgrade works on pools, so my questions are:
Is it safe to do it "online"?
Should I stop virtual machines running on those zfs pools?
Should I stop any other service?
Or should I run "zpool upgrade"...
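As far as I know, "zpool upgrade" only enables new feature flags and is an online operation, so VMs can keep running. The caveats are that it is one-way (older kernels can no longer import the pool afterwards) and that a pool GRUB boots from needs care, since GRUB does not support every feature flag. A sketch, assuming a pool named "rpool":

```shell
# Online, but irreversible -- check health first, see what would
# change, then upgrade. "rpool" is a placeholder for your pool name.
zpool status rpool   # make sure the pool is healthy before upgrading
zpool upgrade        # list pools still using older feature flags
zpool upgrade -v     # show which features would be enabled
zpool upgrade rpool  # enable them on this pool (one-way)
```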
I have a single dedicated server with a single network interface.
I'm using proxmox 4.2 on it, fresh install.
This server uses a bridge, vmbr0, on the only available physical interface, eth0
I need to configure a second network in the same interface, plus using a specific...
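Since the post is truncated, only a generic sketch: with a single NIC, a second network is usually added either as an alias on vmbr0 or as a VLAN. An alias example for /etc/network/interfaces (all addresses below are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# Second subnet on the same bridge, as an alias
auto vmbr0:1
iface vmbr0:1 inet static
    address 198.51.100.10
    netmask 255.255.255.0
```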
Mir, the NAS is managed; I can't touch it, it's external. OVH only lets me see my partition, so I can't log into the NAS or see its configuration.
That's why I need to guess what is happening to my CT
After more than 30 minutes with the CT "starting", I see these processes running in it:
# vzctl exec 102 ps ax
  PID TTY      STAT   TIME COMMAND
1 ? Ds 0:00 init 
2 ? S 0:00 [kthreadd/102]
3 ? S 0:00 [khelper/102]...
This is the startup log:
# vzctl --verbose start 102
Starting container ...
Container is mounted
Adding IP address(es): 192.168.0.102
Running container script: /etc/vz/dists/scripts/redhat-add_ip.sh
/bin/cp: preserving permissions for `/etc/sysconfig/network.4': Operation not supported
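That cp error usually means the underlying storage (here, the managed NAS) refuses chmod/chown, which redhat-add_ip.sh needs when it rewrites the network config. A small probe to confirm it, run against /tmp here for safety; point TESTDIR at the CT's private area on the NAS to test the real storage (that path is setup-specific, so it's left out):

```shell
# Probe whether a filesystem preserves permissions on copy, the way
# redhat-add_ip.sh's /bin/cp call needs. /tmp should pass; a mount
# that strips permissions will not.
TESTDIR=$(mktemp -d)
touch "$TESTDIR/probe"
chmod 600 "$TESTDIR/probe"
if cp -p "$TESTDIR/probe" "$TESTDIR/probe.copy" 2>/dev/null &&
   [ "$(stat -c %a "$TESTDIR/probe.copy")" = "600" ]; then
    RESULT="permissions preserved"
else
    RESULT="permissions NOT preserved"
fi
echo "$RESULT"
rm -rf "$TESTDIR"
```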