For reference, my cluster setup was documented in another forum post: [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]
So do keep in mind that what I am doing in my playbook(s) is aimed at keeping my (system-)setup coherent.
Next to that, I have one...
To add to that, you will even be able to transfer VMs/LXCs between the managed (cluster-)nodes.
Just be aware that without shared storage, transferring a box means it is shut down first, then sent over the wire, and only becomes available on the other box once the transfer completes.
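If you prefer the CLI over the GUI, a minimal sketch of such an offline transfer (assuming a hypothetical VMID 100 and a target node named 'pve2' - adjust to your own IDs/nodes):

# with the guest already shut down, migrate it to another cluster node
qm migrate 100 pve2
# the LXC equivalent
pct migrate 100 pve2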
Prior...
@ednxzu
I know it's not quite what the TS is after, but I do have a set of playbooks to set up my Proxmox env the way I want/need it.
Remember, I'm running a somewhat exotic env with specific demands, and it's absolutely not optimised in terms of tasks, but it works.
Happy to share it though.
- Glowsome
Wait, what... first you were asking about 7.2, and now you are asking about 6.2?
- you have not yet given any details regarding your first setup (except the mention that you changed something in the '/usr/share/perl5/PVE/API2/OpenId.pm' file without letting us in on what exactly you have...
When multipath was introduced (with new servers rotated in) it was configured correctly from the start.
On the old servers no multipath was present; they only had one connection to the storage.
I distinctly remember seeing the same behaviour back then as well, but as said, since I created the post and...
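For anyone who wants to verify the multipath side on their own nodes, a hedged example of the checks I would run (output obviously differs per setup):

# list the multipath maps and the underlying SAS paths they aggregate
multipath -ll
# confirm LVM sits on the multipath devices and not on the raw single paths
lsblk
pvs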
@m
The expectation is to see the correct amount of storage being offered to PVE instead of the (seemingly) doubled amount that is displayed now.
If you look at the LVM screenie/view and add all the space together you get to about 22TB, but the Dashboard was displaying 44TB in total.
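A hedged way to cross-check where that number comes from is to compare what a node reports per storage against what LVM itself reports ('vms01' is the storage name from my own setup - substitute yours):

# per-node view of every defined storage, with total/used/available
pvesm status
pvesm status --storage vms01
# the authoritative numbers from LVM for the shared volume group
vgs
pvs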
However...
In my setup I have a 4-node cluster with shared SAS storage configured with multipath.
When I look at the Datacenter -> Summary report I see a lot more storage being calculated than is actually available.
As the usage of 'local' storage (in any way) within a cluster when having shared storage...
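What I would also double-check (a hedged suggestion on my side, not a confirmed cause) is that the shared LVM is defined only once, cluster-wide, and marked as shared:

# the cluster-wide storage definitions; the shared LVM entry should appear once and carry 'shared 1'
cat /etc/pve/storage.cfg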
As I have not found any means to further debug or resolve this, I will be planning to replace this box.
However, that will have quite an impact, as it is running some vital roles - making it quite painful to implement changes on this box.
In regards to Ansible - and running it actively on a 4-node cluster for management / keeping settings in check - I do not really see an issue.
This is excluding the absence of needs-reboot detection (as it is currently not implemented/present in Proxmox - specified in reboot required).
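The workaround I would sketch for that gap (hedged - it simply compares the running kernel against the newest installed kernel image, which is 'good enough' as a reboot indicator for me):

# running kernel vs. newest installed kernel image
RUNNING="$(uname -r)"
NEWEST="$(ls -1 /boot/vmlinuz-* | sed 's|/boot/vmlinuz-||' | sort -V | tail -n1)"
[ "$RUNNING" != "$NEWEST" ] && echo "reboot probably required"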
As you are...
I had similar flukes when installing new boxes into my cluster due to not having my documentation in order.
Just a suggestion - mark your post/thread as a [solved] topic so others won't look into this as if it is still unsolved :)
Sorry to (re-)activate this thread, but I am also facing this issue.
As to the solution provided above, I am a bit cautious about introducing wildcards.
My situation:
- LXC console (across the cluster) works fine.
- VM console only works when I am on the GUI of the node the VM is on.
-...
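Before going the wildcard route, one thing I would rule out first (a hedged suggestion from my side, not the confirmed fix from this thread) is stale inter-node certificates/known_hosts, as those can break console proxying to other nodes:

# regenerate/redistribute the cluster node certificates and known_hosts entries
pvecm updatecerts --force
# then restart the web proxy so it picks them up
systemctl restart pveproxy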
Tonight it got even weirder... the VM was shut down after it was touched by the backup job:
INFO: VM Name: vm-lx-01
INFO: include disk 'scsi0' 'vms01:125/vm-125-disk-0.qcow2' 64G
INFO: include disk 'scsi1' 'vms01:125/vm-125-disk-1.qcow2' 64G
INFO: backup mode: snapshot
INFO: ionice priority...
Well, unfortunately no change in the issue, so reinstalling the agent did not solve it :(
Below is a screenie of the agent's consumption of resources on the VM.
FYI:
As far as I have been able to determine, it was due to the qemu agent just sucking up all runtime...
So, as a measure, I have forcefully re-installed the agent with:
zypper in -f qemu-guest-agent
Need to determine if this solves anything, but hey, it's at least an action aimed at the...
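For reference, the follow-up checks I intend to do after the reinstall (a hedged sketch; 125 is the VMID from the backup log above):

# inside the VM: make sure the freshly (re)installed agent is actually running
systemctl restart qemu-guest-agent
systemctl status qemu-guest-agent
# on the PVE host: verify the host can reach the agent at all
qm agent 125 ping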
Extra info: this is not a 'pure' SLES box, it's an Open Enterprise Server:
vm-lx-01:~ # cat /etc/novell-release
Open Enterprise Server 2018 (x86_64)
VERSION = 2018.2
PATCHLEVEL = 2
Additional info on the VM (logged from an SSH session I still had running):
Message from syslogd@vm-lx-01 at Nov 28 03:11:29 ...
kernel:[39135.014041] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [sshd:25223]
Message from syslogd@vm-lx-01 at Nov 28 03:11:29 ...
I have a pfSense router (as a hardware box / DL360 Gen7) that handles internet (my ISP router is configured in bridging mode), so no issues there / everything is under my control.
Next to that I am running split-DNS with my own DNS server(s) on the inside as VMs.