Since the latest kernel / version of PVE, I have got rid of some blocking issues I had.
Basically I was unable to upgrade PVE beyond a specific kernel version; whenever I did upgrade, I was facing kernel panics as soon as I launched a VM/CT on it.
This issue seems to be solved again with the latest 6.3...
Just upgraded a 4-node cluster with shared SAS storage where volumes are offered to the nodes as GFS2 mountpoints - flawless. Until now I wasn't able to upgrade the kernel past a certain point, as it would face a kernel panic (see my other post), but with the release of the 6.3.x version all is solved...
Update on the issue: since I updated to 6.3.x the issue is no longer present :)
Good job, PM developers!
@matrix As you have not responded to the reply I posted about 1.5 months ago, I will assume you were unable to motivate your suggested action, which, if I had followed your course...
@matrix Motivate that? .. I mean, booting into a selected kernel even though old kernels are present won't solve the issues I am having.
Besides that, I am not removing the older kernels that I am sure make my setup work.
I have enabled:
kernel.core_pattern = /var/crash/core.%t.%p
kernel.panic =...
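For reference, roughly how settings like these can be made persistent so they survive a reboot; the drop-in file name is arbitrary, and I'm leaving the kernel.panic value out here on purpose, pick whatever suits your setup:

cat > /etc/sysctl.d/90-crash.conf <<'EOF'
kernel.core_pattern = /var/crash/core.%t.%p
# kernel.panic = <seconds before the box reboots itself after a panic>
EOF
mkdir -p /var/crash       # make sure the target directory for the cores exists
sysctl --system           # apply without rebooting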
Yes, it is correct.
I am using lvmlockd with dlm for the lockspace covering the availability of volumes (volume groups and the presented volumes),
then GFS2 is used as the filesystem to ensure locking is done at the file level.
It was just my preference for LVM that made me start out with this .. never really looked into the...
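For anyone wanting to try the same general setup, a rough sketch of the steps; VG/LV names, the LUN path, the cluster name and the journal count are just examples here, adjust them to your own environment:

# needs lvm2-lockd, dlm-controld and gfs2-utils installed on every node,
# and use_lvmlockd = 1 set in /etc/lvm/lvm.conf
systemctl enable --now dlm lvmlockd
# on one node: create a shared VG and an LV on the SAS LUN
vgcreate --shared vg_shared /dev/mapper/san_lun0
lvcreate -L 500G -n lv_gfs2 vg_shared
# -t clustername:fsname must match the corosync cluster name, -j = number of nodes
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 4 /dev/vg_shared/lv_gfs2
# on the other nodes: start the lockspace so the VG becomes usable there as well
vgchange --lock-start vg_shared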
Windows 10 even has SSH capabilities built in ....
However, as you have not provided a version of Windows, I cannot determine if that would be an option for you.
Still, if this were an option, I would look at this and then write your script that way ...
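Assuming a reasonably recent Windows 10 build (which ships an OpenSSH client), something as simple as this, run from cmd or PowerShell, would already let a script fire commands at the node; the hostname and the command are just examples:

ssh-keygen -t ed25519            # one-time: create a key for passwordless logins
ssh root@pve01 "qm list"         # prints the VMs known on that node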
@matrix As written above, the 'latest' kernel is installed on the system, but if I boot from it we get the kernel panic explained above.
If, however, I select the previous kernel (5.4.44-2) on my boot screen, then I'm running just fine.
pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.44-2-pve)...
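In case it helps anyone in the same spot: instead of picking the entry by hand on every boot, GRUB can be told to pin a default entry, roughly like this; the exact menu entry title below is only an example, copy yours verbatim from grub.cfg:

# list the available menu entry / submenu titles
grep -E "(menuentry|submenu) '" /boot/grub/grub.cfg | cut -d"'" -f2
# let GRUB use an explicitly chosen default
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
update-grub
# pin the known-good kernel (format is 'submenu title>entry title')
grub-set-default 'Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.4.44-2-pve'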
Dear all,
I am experiencing kernel panics with kernel releases above version 5.4.44-2-pve.
Behaviour:
- update all packages (including kernel)
- restart node
- node comes up fine / communicates with cluster (4-node)
- node will kernel-panic after it starts either an LXC or VM guest
- issue...
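To get something useful out of a crash like this, the previous boot's kernel log is the first place I'd look, provided the journal is persistent; a hard panic often never reaches the disk, though, so netconsole or kdump may still be needed:

mkdir -p /var/log/journal && systemctl restart systemd-journald   # enable a persistent journal
journalctl -k -b -1                                               # kernel messages from the previous boot, after the next crash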
1. As said, this was a research project trying to bend behaviour to my needs; fencing gave a lot of issues, so I turned it off and never looked back, to be honest.
2. I never had a full cluster/network outage, so I have not reproduced this behaviour.
3. I have not had that issue.
4. I am atm...
Another update: after a package update I noticed that the manual additions I had made to the lvmlockd unit file had vanished.
This led to the issue where the node no longer successfully joined the other lockspaces.
For this I also added a class to bring it under Puppet control (basically the same...
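In case someone hits the same thing without config management: a systemd drop-in survives package upgrades, unlike edits to the shipped unit file, so the additions can live there instead (what goes into the override obviously depends on your setup):

mkdir -p /etc/systemd/system/lvmlockd.service.d
cat > /etc/systemd/system/lvmlockd.service.d/override.conf <<'EOF'
[Unit]
# put the local additions here; this file is not touched by package updates
EOF
systemctl daemon-reload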
Hi, your sources seem to differ from mine (do keep in mind that I installed Debian first and then moved to Proxmox), but for comparison:
cat sources.list
#
# deb cdrom:[Debian GNU/Linux 10.0.0 _Buster_ - Official Multi-architecture amd64/i386 NETINST #1 20190706-12:34]/ buster contrib main...
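For comparison, a Buster-based PVE 6.x box converted from plain Debian usually also carries the Proxmox repository itself, either in sources.list or in a file under /etc/apt/sources.list.d/ (file name may differ); assuming the no-subscription repo is in use, that line looks like:

# in /etc/apt/sources.list or a file under /etc/apt/sources.list.d/:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription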
@oguz I agree; as said, I only expressed my standpoint, so it's a personal preference / way to cope with it ...
I did notice (and loved it) .. you honored the "array starts at 0" :D
I found out why ... and the answer had 2 parts:
a VM had an ISO referenced which was not available on the new box
a VM also had a bridge (vmbr1) referenced which was not available on the new box
IMHO, when importing a VM to a new box, this stuff should be detected by the import process, and if...
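Until the import process does that, the stale references are easy enough to spot and clear by hand; VMID 100 and vmbr0 below are just examples, not my actual config:

qm config 100 | grep -E 'iso|vmbr'        # find the dangling ISO / bridge references
qm set 100 --ide2 none,media=cdrom        # detach the missing ISO, keep an empty CD drive
qm set 100 --net0 virtio,bridge=vmbr0     # repoint the NIC; omitting the MAC makes PVE generate a new one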
So in essence what you are saying is that Proxmox allows 'overbooking' of storage .... which (on both thin and thick provisioning) is a big no for every admin.
My standpoint on this: overbooking should _never_ be allowed, as it fools the admin unless the admin knows exactly what he/she is doing ...
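At the very least, an admin who does overprovision on LVM-thin should keep an eye on how full the pool actually is, e.g. something like:

lvs -o lv_name,lv_size,data_percent,metadata_percent,pool_lv   # watch data% / meta% on the thin pool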