When using cloud-init with a Debian 10 VM, there seems to be a bug or glitch in the sudoers file:
>>> /etc/sudoers.d/90-cloud-init-users: syntax error near line 11 <<<
Opened with visudo, it looks like the attachment. If I remove line 11, it's fine. Just wondering where that file comes from...
Is there a way to set policy-based routing rules via cloud-init if a VM is using multiple interfaces? Usually it'd involve something like this in /etc/network/interfaces or some such file in /etc/network/interfaces.d/ (rough sketch of the full stanza below):
iface eth1 inet static
address 10.10.0.10
netmask 255.255.255.0...
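By hand, the rest of what I'd add for the policy routing looks roughly like this (the gateway 10.10.0.1 and table 100 are made-up values just for illustration, and the table would also want an entry in /etc/iproute2/rt_tables):
iface eth1 inet static
    address 10.10.0.10
    netmask 255.255.255.0
    # made-up gateway & table number, only to show the shape of it
    post-up ip route add 10.10.0.0/24 dev eth1 src 10.10.0.10 table 100
    post-up ip route add default via 10.10.0.1 dev eth1 table 100
    post-up ip rule add from 10.10.0.10/32 table 100
    post-down ip rule del from 10.10.0.10/32 table 100
Basically I'm looking for the cloud-init equivalent of that.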
When you clone a VM, does proxmox artificially limit disk IO during the clone so as not to overwork the storage & affect other VMs? I was under the impression that backups did this, so I assumed clones did too. However, I saw an issue yesterday during a clone where several VMs (that are busier...
I have a 2 node Proxmox 3.4 cluster (not using HA or fencing). One is disconnected from the primary network due to a failed switch. I can get into it via the storage network. I need to manually migrate a few VMs to the active node. I planned to do this by shutting down the VM on the...
I haven't done extensive testing on this because my cluster is in production. I have a 3 node cluster. One node is using OMSA 8.4, and the other two are still on 7.4. I haven't upgraded the other two due to some issues with 8.4 (commands showing weird output that shouldn't be there, idrac...
Resurrecting an old thread. Has there been any testing done with OMSA 8? It's changed a bit... for example, the idracadm command is no longer there (intentionally).
My cluster is fully upgraded now so I can't reproduce. My experience was exactly like what stefws said above. I migrated off node1 to node2 (both were on the prior version), upgraded node1 & rebooted, then could not migrate from node2 back to node1. The errors I saw were exactly like what he...
I have a 3 node cluster & all nodes were on the latest version of Proxmox v4. Last night I got an email that upgrades were available in the Enterprise repo. I live migrated all VMs off of one node, dist-upgraded, rebooted, then tried to live-migrate back & it would not work. I had to shut down...
OK I didn't see THAT part in the wiki. Added that file & now the output looks as it should:
root@node3:~# /opt/dell/srvadmin/bin/idracadm7 getsysinfo -w
Watchdog Information:
Recovery Action = Power Cycle
Present countdown value = 9 seconds
Initial countdown value = 10 seconds...
As usual, I missed one critical step in the wiki, setting /etc/default/pve-ha-manager:
# select watchdog module (default is softdog)
WATCHDOG_MODULE=ipmi_watchdog
I set that & rebooted, and now I see:
root@node3:~# /opt/dell/srvadmin/bin/idracadm7 getsysinfo -w
Watchdog Information:
Recovery...
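For anyone else following the wiki: a quick way to confirm the module actually loaded after the reboot is something like:
lsmod | grep ipmi_watchdog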
I just tried it on one node & after a reboot I see this:
# /opt/dell/srvadmin/bin/idracadm7 getsysinfo -w
Watchdog Information:
Recovery Action = None
Present countdown value = 475 seconds
Initial countdown value = 480 seconds
I'm trying this now. Any update from your end?
To clarify, all I've tried is running the command:
/opt/dell/srvadmin/sbin/dcecfg command=removepopalias aliasname=dcifru
then rebooting.
Here's the latest log from a bulk migration: http://pastie.org/private/9xeqmgxyl7k5qmcbvxl8pw. VM 100 is the only one that would not migrate. It migrated fine once I ran it again for just that one VM.
Sometimes I want to bulk migrate my VMs from the command line rather than using the web interface. To do this, I just run a for loop like so:
for vm in $(qm list | awk '{print $1}' | grep -Eo '[0-9]{1,3}'); do qm migrate $vm node2 --online; done
Sometimes this works great, sometimes it...
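For what it's worth, a variant of that loop that just skips the header row of qm list instead of pattern-matching the IDs would look something like this (untested sketch, assuming the default qm list column layout with VMID first):
for vm in $(qm list | awk 'NR>1 {print $1}'); do qm migrate $vm node2 --online; done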
All these got installed as dependencies:
# dpkg -l | grep srvadmin
ii srvadmin-base 7.4.0 amd64 Meta package for installing the Server Agent
ii srvadmin-deng 7.4.0-1 amd64 Dell OpenManage Data...
I didn't install everything, just the basic CLI stuff to get OMSA working. Basically just did this:
apt-get install srvadmin-idrac7 srvadmin-storageservices srvadmin-base ipmitool srvadmin-omcommon
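For anyone repeating this on a fresh node: those srvadmin packages come out of Dell's Linux repository, so something along these lines has to go in first (the URL & codename here are from memory, so double-check them against Dell's OMSA docs, and you'll also need to import Dell's repo signing key, which I'm leaving out):
echo 'deb http://linux.dell.com/repo/community/ubuntu wheezy openmanage' > /etc/apt/sources.list.d/linux.dell.com.sources.list
apt-get update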
Yes, using 7.4.0-1. My config file was overwritten on all 3 servers.
I'm trying to set up our cluster of 3 Dell R730s & I'm following the wiki here: https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x#Dell_IDrac_.28module_.22ipmi_watchdog.22.29
I do have OMSA installed, so I edited dcwddy64.ini:
cat...
I have a Proxmox v4 cluster set up on my Macbook Pro using VMware Fusion. I notice it doesn't support the watchdog timer:
root@pxcluster1:~# dmesg | grep watch
[ 0.076175] NMI watchdog: disabled (cpu0): hardware events not enabled
[ 0.076176] NMI watchdog: Shutting down hard lockup...
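Since there's no hardware watchdog to pass through from Fusion, I'm assuming the nested cluster just stays on the default softdog module; a quick sanity check would be something like:
modprobe softdog
ls -l /dev/watchdog
(softdog being the stock kernel module and /dev/watchdog the device node it provides.)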