Offering raw LVM (or LVM-thin) does not give me the capability of pulling snapshots, which in my environment is essential (I test customer cases/issues).
So prior to starting on a case, I take a snapshot of the 'base' product, then tune it further into the customer's situation, test, and report back on...
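As a minimal sketch of that workflow (VM ID 100 and the snapshot name 'base' are just placeholders here):

# snapshot the clean base product before touching the case
qm snapshot 100 base --description "clean base before case testing"
# ... tune, test, report back ...
# roll back to the clean state for the next case
qm rollback 100 base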
Something I had forgotten to mention in all of the above is that the directory being offered to Proxmox is not set to shared.
As the GFS2 filesystem takes care of this by itself, there is no need to set the directory to 'shared'.
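For reference, a sketch of what the matching /etc/pve/storage.cfg entry looks like in my notes (storage name and mount path are examples, not requirements); note the absence of a 'shared 1' line:

dir: gfs2
        path /mnt/gfs2
        content images,rootdir,iso,vztmpl,backup,snippets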
Another update after a while.
I extended my storage, so I now have enough for the future.
One of the things I had to do was migrate all VM/CT storage devices from the original 'raw' LVM device I offered to Proxmox to the GFS2 volume offered as a directory.
When I had completed the...
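Roughly, the kind of commands involved (VM 100, CT 101 and the storage name are hypothetical; check the qm move_disk / pct move_volume man pages for your PVE version):

# move a VM disk from the raw LVM storage to the GFS2-backed directory storage
qm move_disk 100 scsi0 gfs2 --delete 1
# move a container volume the same way
pct move_volume 101 rootfs gfs2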
Clusters do require the pre-upgrade of all nodes to corosync 3.x. From what I have experienced on my 4-node cluster, following the exact procedure described in the upgrade docs, I had no issues at all.
You could fake the quorum part by setting pvecm expected 1; this would at least get your VMs running, but it won't solve the cluster issues you are facing.
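To be explicit about what I mean (run on a node that is still up):

# tell corosync that a single vote is enough, so the node becomes quorate
pvecm expected 1
# check the vote/quorum situation afterwards
pvecm status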
To get DNS resolution for your Proxmox UI, the solution depends on some conditions:
One management machine accessing it over DNS/hostname resolution.
The easiest way is to create hosts entries in C:\Windows\System32\drivers\etc\hosts
Add a line to the file in the following format: ip (space or tab) fully...
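For example (IP address and names are placeholders, use your node's own address and FQDN):

192.168.1.10    pve01.mydomain.local    pve01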
As stated above, clustering on PVE 5 is not compatible with PVE 6 due to the way in which corosync version 2 differs from version 3; however, for migration purposes there is a way:
My approach would be:
Move towards the latest 5.4 version.
Check compatibility on all your nodes for migrating towards... (see the sketch below)
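For that compatibility check, a minimal sketch of what I would run, assuming the pve5to6 checklist script that ships with the latest 5.4 packages:

# run the upgrade checklist on every node and fix whatever it flags
pve5to6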
Do add them:
I myself do not use the subscription repo:
cat /etc/apt/sources.list.d/pve-install-repo.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
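After adding that file, refreshing the package index picks it up:

apt update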
Hi,
I am a bit confused. I myself have an MSA2040 unit which has 4 SAS ports, connected to 4 separate servers, and configured for shared access of the LUNs I have configured.
As far as I can derive from your info:
You have 3 servers
You have an MSA2000
It is physically connected to ONE...
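As an aside, on my own hosts I verify that a LUN really is the same shared device on every node by comparing its WWN (a generic check, nothing MSA-specific):

# the shared LUN should report the same WWN/serial on each host
lsblk -o NAME,SIZE,WWN,SERIAL
ls -l /dev/disk/by-id/ | grep wwn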
Hi,
Shared LVM directly attached has its disadvantages, as it will not hold all the content types required; it only supports disk images and containers.
Which is why I have been testing a GFS2 shared volume offered to PVE as a directory; please look at my experiences so far, as I have kept a...
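To make the limitation concrete, a shared LVM entry in /etc/pve/storage.cfg can only carry disk images and container rootfs volumes (the names below are just my own):

lvm: shared-lvm
        vgname vg_shared
        content images,rootdir
        shared 1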
Just an update regarding how it is working:
I have been updating the PVE install (regular software updates) on all nodes without any issues; basically they restarted flawlessly, joined the global lockspace and then mounted the shared LV volumes.
So I would say the final configuration is...
When I have more time I will rewrite the whole post so it reflects all experiences and gives full documentation which (should) describe a one-shot install without issues, including the setup of the shared LVM volumes.
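What I check after each node comes back up, roughly (dlm_tool comes with the DLM userspace tools; the lockspace names depend on your setup):

# confirm the node rejoined the DLM lockspaces (lvmlockd + gfs2)
dlm_tool ls
# confirm the GFS2 volume is mounted again
mount | grep gfs2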
OK, I tested the reinstall of the last node (Node04) and changed the order in which I performed it a bit.
Just to be sure, I always pull the connection to the shared storage so I can never overwrite it by mistake, and I always perform a local installation!
Basically, as soon as I install / switch...
Again I reinstalled another node (node03) whilst changing out the local disks ... the install went fine and it joined the cluster, but somehow PVE kept complaining about waiting for quorum; it kept thinking it needed 4 votes, so I ended up setting it to 1 on the newly added node ( pvecm...
I have reinstalled Node #1 according to the docs provided at https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster (after I had pulled out the SAS connection to the MSA2040, to be absolutely sure I would not overwrite anything on the shared storage), then set up my dlm, lvmlockd...
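For completeness, the rough shape of that dlm/lvmlockd setup on a fresh Buster node as I have it in my notes (package and unit names are what I used; treat this as a sketch rather than a verified recipe):

# cluster locking stack for shared LVM + GFS2
apt install dlm-controld lvm2-lockd gfs2-utils
# enable lvmlockd in /etc/lvm/lvm.conf (use_lvmlockd = 1), then start the daemons
systemctl enable --now dlm lvmlockd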
I had the reverse a few days before: I killed a LUN on my storage which held the disk of a CT; trying to delete the CT gave the error 'unable to find storage xxxxx' and the CT remained.
What I bluntly did was go into /etc/pve/lxc and delete the *.conf file ... PVE picked it up and removed the CT...
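In other words, roughly this (CT ID 123 stands in for the orphaned container):

# drop the orphaned container definition by hand; PVE notices and removes the CT from the GUI
rm /etc/pve/lxc/123.conf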
It seems I have solved my kernel panics on Node 04: I still had the Debian kernel available on the system, so I removed it, as mentioned (as an optional step) in https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster.
Then I restarted it (of course a panic came up) and then wanted to record the...
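For reference, that optional step is basically the following (the exact kernel package pattern depends on the Buster kernel you have installed, so check with dpkg -l first):

# remove the stock Debian kernel so only the Proxmox kernel remains, then refresh grub
apt remove linux-image-amd64 'linux-image-4.19*'
update-grub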