We recently uploaded the 7.0 (rc6) kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.17, but 7.0 is now an option.
We plan to use the 7.0 kernel as the new default for the upcoming Proxmox VE 9.2 and...
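Opting in to the newer kernel on a PVE 9 node would look roughly like this. Note this is a sketch: the package name `proxmox-kernel-7.0` is an assumption based on the naming scheme of earlier opt-in kernel packages.

```shell
# Sketch: opting in to the 7.0 kernel on Proxmox VE 9.
# The package name "proxmox-kernel-7.0" is an assumption,
# following the naming of earlier opt-in kernel packages.
apt update
apt install proxmox-kernel-7.0
reboot
# After the reboot, verify which kernel is running:
uname -r
```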
Well...
Turns out that after a month or so, the server provider simply replaced the troublesome storage with a new one, and everything is fine. I hooked up the new HBA storage, formatted it as OCFS2, and everything is OK, at least in the...
We have been running a cluster of 8 Proxmox VE nodes with OCFS2 on a SAN LUN for several years.
It works, but it is not officially supported. There were some issues with OCFS2 even in recent kernels, as the development of OCFS2 seems to have come to a...
Yeah... That was my first thing to check.
I even disabled fstrim.timer to see if that would make any difference. Nothing.
Attached is the list of tasks from systemctl.
Hi there folks.
We have 3 Proxmox VE 9 nodes which share an HBA storage. We don't know the storage model, because the provider didn't disclose it.
The storage has a 6 TB LUN on which we have an OCFS2 filesystem mounted.
During the day everything is...
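For context, a setup like the one described is typically formatted and mounted along these lines. This is a hedged sketch only: the device path, volume label, and node-slot count are placeholders, not the poster's actual configuration.

```shell
# Sketch of a typical OCFS2 setup on a shared LUN; device path,
# label, and slot count are illustrative placeholders.
mkfs.ocfs2 -L pve-shared -N 3 /dev/mapper/lun0   # 3 node slots, one per node
mkdir -p /mnt/ocfs2
mount -t ocfs2 /dev/mapper/lun0 /mnt/ocfs2
# OCFS2 also requires the o2cb cluster stack to be configured
# and running on every node that mounts the filesystem.
```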
Thanks. I was hoping not to do any third party flashing, but unfortunately, I could not find any other way.
Crossflash went smooth and now I have Proxmox running.
Perhaps this can shed some light:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_corosync_over_bonds
Corosync is very strict regarding bonded interfaces.
You can follow the instructions on this website:
https://fohdeesha.com/docs/index.html
Click on PERC Cross Flash and find your RAID controller.
To be honest, I don't recall exactly what I did at the time.
But I think the Fohdeesha guide will help you...
Hello @Gilberto Ferreira,
I stumbled on your post because I recently got an R620 with H310 and am facing the issue of "No Hardisk Found" with the DMAR DRHD error. There are multiple posts I have found about R620s and H310 having issues but none I...
I know this can be a little annoying, but one can set the VM's HA state to disabled, and then do a shutdown from the CLI inside the VM.
Also, if you hit the shutdown button in the web GUI, the VM will remain shut down, regardless of its state in the HA stack.
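The sequence described above can be sketched with the standard Proxmox CLI tools; VMID 100 is just an example.

```shell
# 1. Take the VM out of active HA management so the HA stack
#    will not restart it after shutdown (VMID 100 is an example):
ha-manager set vm:100 --state disabled
# 2. Then shut it down cleanly from the CLI:
qm shutdown 100
```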
Just for reference, there is OmniOS/Illumos, a Solaris derivative, which has ZFS-over-iSCSI support via COMSTAR, and there is some iscsi-ha.
I never tested it, but could be worth...
I'm not sure about that.
But I read the docs[0] again, and there are some additions that are needed in PVE 9.
I will check it out.
SOLVED after changing the line in the interfaces file:
Before:
vmbr0
...
...
post-up /usr/bin/systemctl restart...
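The stanza above is truncated in the post. Purely for illustration, a generic vmbr0 stanza with a post-up hook of this kind usually looks like the following; the addresses, bridge port, and exact restart command are placeholders, not the poster's actual fix.

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        post-up /usr/bin/systemctl restart frr.service
```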
Hi... Do I need to do something with the fabric when upgrading from Proxmox 8 to Proxmox 9?
Everything was fine, but after upgrading to PVE 9, the post-up systemctl restart frr.services line inside /etc/network/interfaces no longer works.
It's happening again...
New fresh installation.
After being activated, frr hangs on boot.
I am using these options in the daemons file for frr:
bgpd=yes
ospfd=yes
ospf6d=yes
ripd=no
ripngd=no
isisd=no
pimd=no
pim6d=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no...
Solved after upgrading from PVE 8 to PVE 9.
But now I got another issue:
https://forum.proxmox.com/threads/frr-service-doesnt-restart-when-call-ifreload-a-ou-doesnt-start-in-boot-time.180797/
Hi there.
I updated a 3-node cluster with Ceph from PVE 8 to PVE 9.
The PVE 8 nodes have an frr configuration to provide a mesh network between these 3 nodes.
After the upgrade and restart, I noticed that the networking hangs.
So I rebooted and entered in...
Flow control was active on the NIC but not on the switch.
Enabling flow control in both directions solved the problem:
flowcontrol receive on
flowcontrol send on
Port Send FlowControl Receive FlowControl RxPause TxPause...
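On the Proxmox node side, the NIC's flow-control (pause) settings can be inspected and changed with ethtool; the interface name eno1 below is an example, not from the original post.

```shell
# Show the NIC's current pause/flow-control settings
# (interface name eno1 is an example):
ethtool -a eno1
# Enable flow control in both directions on the NIC,
# matching the switch-side configuration:
ethtool -A eno1 rx on tx on
```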