Guys, am I an idiot? I have 7 years of experience with Proxmox (since v2), but I'm still failing at a simple case.
I've been upgrading a 2-node cluster (v4).
I set up a 3rd node with a fresh v6 install. Next I upgraded the two v4 nodes to v5.
Node 2 was upgraded to v6 and joined to node 3.
Node 1 was upgraded to v6 but I...
I have a 2-node cluster.
You can see two LVMs on pve3; pve2 has only local (LVM-thick). Can I migrate a machine from pve3 to pve2?
I tried it but:
OK, I tried renaming the LVM on pve2 to vmdata and then...
Is there any chance to migrate a VM between nodes when one side is LVM-thin and the other plain LVM?
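For completeness, this is the kind of command I was trying, with the VM shut down (the VM ID is an example and "vmdata" is just the name I gave the thick storage on pve2; I'm not sure every thin-to-thick combination is supported offline on every version):

qm migrate 100 pve2 --targetstorage vmdata    # offline migration, placing the disks on pve2's thick LVM storage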
OK, I changed the hostname, restarted the machine, and now my only problem is with quorum.
I had to reduce the expected quorum and create the /var/lib/corosync dir.
Now the new cluster seems to be working properly.
Edit: I had to change the sshd config (Debian 10 doesn't accept the obsolete certificates) and then run updatecerts via pvecm...
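For anyone hitting the same thing, these are roughly the commands I mean (assuming a single node that has lost quorum):

pvecm expected 1                  # lower the expected votes so the lone node gets quorum back
mkdir -p /var/lib/corosync        # the dir that was missing in my case
systemctl restart corosync pve-cluster
pvecm updatecerts                 # refresh the cluster certs/known_hosts after the hostname change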
I have one cluster with 2 nodes (PVE 5). Now I've added a 3rd node on PVE 6 and made a new cluster. I upgraded node2 to PVE 6 and the 1st cluster went down (because of Corosync 3). I tried to run https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_separate_node_without_reinstall on node 2 and add it to the new...
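From memory, the linked "separate node without reinstall" steps boil down to roughly this on the node being pulled out (double-check against the wiki before running anything):

systemctl stop pve-cluster corosync
pmxcfs -l                     # start the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster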
The best way is if the router/GW (DHCP server) in your network supports static DNS entries (e.g. every MikroTik) and you can set up hairpin NAT.
Of course you have to use your router's DNS on your PC.
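For example, on a MikroTik (RouterOS console; the hostname, address and subnet are made up), a static DNS entry plus the hairpin NAT rule looks roughly like this:

/ip dns static add name=pve.home.lan address=192.168.88.10
/ip firewall nat add chain=srcnat src-address=192.168.88.0/24 dst-address=192.168.88.0/24 action=masquerade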
I have a 2-node cluster and I want to upgrade it from 4 to 6 (remotely).
I've migrated all VMs (via backup & restore and a 3rd NFS storage) and upgraded node1 from 4 to 5 and later to 6.
Now I'd like to move all VMs from node2 to node1, but several machines are in 'testing' and I have to move them offline...
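For the offline moves I just used backup & restore over the shared NFS storage, roughly like this (the VM ID, storage names and dump filename are examples, the archive extension depends on the compression, and the original VM has to be removed first or you pick a new ID):

vzdump 101 --storage nfs-backup --mode stop                                              # on node2
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-101-*.vma.zst 101 --storage local-lvm     # on node1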
No, I resolved this problem with another switch disconnected from the LAN; Corosync now runs on a separate subnet via this switch. The cluster has been stable for over a week.
I suppose the main problem with the new Corosync is micro-gaps (short latency spikes) in the switch/network, because I have a 10Gb network with...
Confirmed. A small bottleneck in the network destroys the Corosync connection. I used a separate core switch (the same as before), physically isolated from the rest of the network, and Corosync is again as stable as the 2.x version.
I think maybe a VLAN on the same switch could help too (sketch below).
I'm using 10Gb UBNT switches in...
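On the VLAN idea, a minimal sketch of what I'd put in /etc/network/interfaces (the NIC name, VLAN ID and address are examples; needs the vlan package or ifupdown2):

auto eno1.50
iface eno1.50 inet static
    address 10.10.50.11
    netmask 255.255.255.0
# corosync's dedicated ring would then use the 10.10.50.x addresses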
I've already tried everything... setting the totem token to 10k, setting 2 rings for every node, an automatic corosync restart every 12 hours... and I'm still waiting for a miracle.
Corosync's behaviour is totally illogical.
E.g.
My node1 didn't want to connect to the cluster this morning. Restarting corosync didn't...
The reason is simple... corosync randomly loses the connection and doesn't reconnect. The solution is somewhere in the future. We must wait.
Edit: My 2-ring setup is not the right way either... the cluster disconnected again.
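(The 12-hour automatic restart I mentioned is nothing clever, just a cron entry, e.g. in /etc/cron.d/corosync-restart:)

0 */12 * * * root systemctl restart corosync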
That's not true... sometimes you have to restart the disconnected nodes, sometimes the connected ones.
I already have a config with 2 rings on every node and I'm waiting for the first disconnect.
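For reference, the relevant bits of that 2-ring config look roughly like this (the cluster name and addresses are just examples; the second ring sits on the isolated subnet):

totem {
  cluster_name: mycluster
  config_version: 5          # bump on every change
  token: 10000               # the 10k token timeout mentioned above
  version: 2
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.11   # main LAN
    ring1_addr: 10.10.50.11    # dedicated corosync subnet
  }
  # ...same pattern for the other nodes
}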
I'm really frustrated with the update to Corosync 3. The cluster has been unstable for 2 months.
I have a 5-node cluster... 2 nodes are on the latest 5.x version, 3 nodes are on the latest 6.x version.
Nodes get disconnected randomly; the cluster sometimes stays online for a few hours, a few minutes or a few days. Yesterday pve1...
I think this will be the best way; I only have problems with the thin pool, thick is the better way. I'll try to change thin to thick.
Edit: I've made it plain LVM and the node is working correctly now.
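What "change thin to thick" meant in practice was roughly this (the storage and VG names are examples, and the thin pool with everything on it is destroyed, so move the VMs away first):

lvremove pve/data                                            # drop the LVM-thin pool
pvesm remove local-lvm                                       # remove the old lvmthin storage entry
pvesm add lvm vmdata --vgname pve --content images,rootdir   # re-add the VG as plain (thick) LVM storage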
I have DRBD on 2 nodes in a 5-node cluster and I'd like to share it via NFS.
I did it a few months ago and it is working correctly.
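Roughly how that was set up, from memory (the device, mount point, subnet and storage name are examples; assumes nfs-kernel-server is installed on the DRBD primary):

mkfs.ext4 /dev/drbd0
mkdir -p /srv/drbdshare
mount /dev/drbd0 /srv/drbdshare
echo '/srv/drbdshare 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
pvesm add nfs drbd-nfs --server 192.168.1.21 --export /srv/drbdshare --content images   # add it as shared storage on the cluster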
It was only a test array so I forgot about it, and today I wanted to run some tests and I can see the DRBD has only 90GB, not 1.7TB.
OK, I tried to dig back through my config and my cluster...
OK, but why can I mount this thin pool from the running machine?
This is booted into 5:
root@pve1:~# pvs -a
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda2           ---       0      0
  /dev/sda3  pve lvm2 a--  <3.64t <37.03g
root@pve1:~# vgs -a
  VG  #PV #LV #SN Attr VSize VFree
  pve   1   3   0...