Hello,
I've got a PVE cluster running Ceph and found out that a node, left idle with no VMs, steadily uses more RAM over time.
This is a concern since my total available memory isn't infinite ;)
I can reboot to solve the problem, but I suppose that seems a bit Micro$ofty...
I could...
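In case it helps, the kind of checks I have in mind to see where the memory actually goes would be something like this (osd.0 is just a placeholder for one of the local OSD IDs, and the last command assumes the OSD admin socket is reachable on that node):
root@srv-pve1:~# free -h                          # overall memory picture of the node
root@srv-pve1:~# ps aux --sort=-rss | head -n 10  # processes with the largest resident memory, usually the ceph daemons
root@srv-pve1:~# ceph daemon osd.0 dump_mempools  # per-pool memory usage reported by one OSD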
Hello,
I'm a happy user of PVE 5.3-5, running a Ceph cluster on 3 nodes.
I'm facing an issue regarding the backup.
Could mounting an OMV appliance in the cluster be an acceptable solution to provide an "external" NFS mount point for the PVE backup service?
Using the HA services...
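If that is acceptable, my understanding is that the OMV export would simply be added as an NFS storage on the PVE side, something like this (the storage name, server IP and export path below are only placeholders for my setup):
root@srv-pve1:~# pvesm add nfs omv-backup --server 192.168.1.50 --export /export/pve-backup --content backup
That storage would then be selectable as the target in the backup jobs.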
The problem is solved!
Here is what I understand:
The meshed interfaces were not correctly declared in the /etc/hosts file, so routing through them was impossible.
I had not configured corosync.conf properly: the file did not reflect the network topology (see the sketch below).
I've mistakenly created...
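For anyone hitting the same problem, the fix boiled down to making the nodelist in corosync.conf consistent with the addresses that are actually routable between the nodes. As a rough sketch (the 10.0.0.x addresses are the ones of my mesh and the third node's name is assumed, adapt to your own topology):
nodelist {
  node {
    name: srv-pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1
  }
  node {
    name: srv-pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
  node {
    name: srv-pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.0.0.3
  }
}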
Thanks for the help :)
Here is the pvecm status output:
root@srv-pve1:~# pvecm status
Cannot initialize CMAP service
I'll have a look at the documentation to try and fix the current situation.
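From what I read, "Cannot initialize CMAP service" means that corosync itself is not running or not reachable on the node, so these are the checks I plan to go through (standard systemd tooling, nothing PVE-specific):
root@srv-pve1:~# systemctl status corosync pve-cluster                # are the cluster services running at all?
root@srv-pve1:~# journalctl -u corosync -b --no-pager | tail -n 50    # why corosync failed to start, if it did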
I've tried to update all the nodes to the latest version on the repos but still can't do live migration. I'm starting to wonder if I should reinstall the cluster from scratch…
There is currently no HA active on the VM or at the datacenter level.
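Just to rule out a version mismatch between the nodes after the updates, I'm comparing the output of this on each node (any difference in the qemu or corosync packages would already be a hint):
root@srv-pve1:~# pveversion -v | grep -E 'pve-manager|qemu|corosync'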
I first thought that this line would be more concerning:
Could not generate persistent MAC address for tap101i0: No such file or directory
Here is the VM Config:
root@srv-pve1:~# less...
I've already pasted the syslog of the target node (srv-pve1). The virtual machine is currently running on srv-pve2.
Here is the syslog of the node on which the virtual host is running:
Oct 25 15:00:45 srv-pve2 pvedaemon[2658454]: <root@pam> starting task...
Yes!!! Indeed there is an error:
Oct 25 14:20:42 srv-pve1 systemd[1]: Started Session 304 of user root.
Oct 25 14:20:42 srv-pve1 systemd[1]: Started Session 305 of user root.
Oct 25 14:20:43 srv-pve1 qm[373076]: start VM 101: UPID:srv-pve1:0005B154:0409021B:5BD1B51B:qmstart:101:root@pam:
Oct 25...
I'm still stuck :(
I cannot figure out why the routing is so unstable.
I've double checked the IPs of the hosts and everything seems to be fine.
Does anyone have an idea for further investigations?
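To document what "unstable" means, these are the kinds of checks I'm running from each node (the 10.0.0.1 address is one of the mesh addresses in my setup, replace with yours):
root@srv-pve2:~# ip addr                 # the addresses actually configured on the interfaces
root@srv-pve2:~# ip route                # the routes the kernel currently knows
root@srv-pve2:~# ip route get 10.0.0.1   # which interface/gateway would be used to reach the other node
root@srv-pve2:~# ping -c 3 10.0.0.1      # and whether it answers at all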
I've created and configured the datacenter.cfg file as requested but it still fails:
TASK ERROR: failed to get ip for node 'srv-pve2' in network '192.168.1.0/24'
It doesn't work either when I'm using the 10.0.0.0/24 CIDR.
I've added the 192.168.1.x addresses in the corosync.conf for the ring...
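For reference, the datacenter.cfg line I'm using looks like this; if I read the docs right this is the expected syntax, with the CIDR being the network the nodes should use for migration traffic:
migration: type=secure,network=10.0.0.0/24
Given the error message, I suspect the real problem is that PVE cannot find an address for srv-pve2 inside that network, which I'm checking with:
root@srv-pve2:~# ip addr | grep -E '192\.168\.1\.|10\.0\.0\.'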
Hello. Here are the /etc/hosts files of nodes #2 & #3:
root@srv-pve2:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.102 srv-pve2.mydomain.local srv-pve2 pvelocalhost
# The following lines are desirable for IPv6 capable...
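For completeness, what I would expect to also see in there for the mesh network is one entry per node on the mesh subnet, something along these lines (the hostnames and 10.0.0.x addresses are only an example of the intended layout, not what is currently in the files):
10.0.0.1 srv-pve1-mesh.mydomain.local srv-pve1-mesh
10.0.0.2 srv-pve2-mesh.mydomain.local srv-pve2-mesh
10.0.0.3 srv-pve3-mesh.mydomain.local srv-pve3-mesh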